The legal field today is on the cusp of major change driven by new artificial intelligence (AI) technology, a shift that brings both exciting possibilities and significant challenges. General Counsel, the legal guardians within a corporation, are starting to use AI to improve their work, deal with complicated legal issues, and make important decisions.
The risk of AI bias
There is no doubt that AI is making legal work more efficient and accurate for General Counsel. It can handle time-consuming tasks such as reviewing documents, analysing contracts, and researching case law, freeing lawyers to focus on the bigger picture. AI can quickly process large volumes of information, supporting risk analysis and improving the quality of legal advice. The companies behind these systems are now training AI to predict the outcomes of legal cases and assist with planning, which will further help companies manage legal risk. The question, however, is whether General Counsel can simply rely on artificial intelligence, or whether there are major issues and risks to bear in mind.
Artificial intelligence is indeed a double-edged sword, and it must be used with great care. One big issue is bias: generative AI is trained on large data sets, and if those data sets are biased, its decisions will be biased too, leading to unfair legal advice and outcomes. For example, if AI is used for hiring within a company's legal department and the training data includes historical hiring decisions that favoured a particular gender or ethnicity, the AI might replicate these biases, leading to unfair hiring practices.
Client confidentiality and the perennial question of ethics
Over-reliance on artificial intelligence might also erode lawyers' traditional legal skills, which could affect the quality of legal advice when AI isn't used. Additionally, using AI for legal work raises major concerns about ethics, privacy, and data security. In the legal profession, client confidentiality is of the utmost importance. If an AI tool used for contract analysis is not adequately secured, a breach could expose confidential information, jeopardising client confidentiality and exposing the company to legal and reputational risks in the public domain. And if the AI's decision-making process is not transparent, it becomes hard to understand the rationale behind its decisions, which in turn makes it difficult to hold AI accountable.
A recent incident in the US illustrates why General Counsel cannot rely on AI completely. During a court proceeding, Judge P. Kevin Castel identified several red flags in cited decisions, including nonsensical legal analysis, factual errors, and inconsistent procedural histories. Subsequent affidavits submitted by the attorneys contained misstatements and contradictory explanations, and it later emerged that one of the counsel had used ChatGPT, which had fabricated the legal decisions. Even when the lawyers explicitly asked about the legitimacy of the cases, ChatGPT assured them that the cases existed. This is a prime example of AI "hallucination", where AI produces false output but presents it so convincingly that it can be believed prima facie.
To use or not to use AI
Should we stop using AI until it is capable of producing fully dependable results?
Both in-house lawyers and external counsel have grappled with this question ever since the advent of generative AI tools and automation. The answer lies in moderation: the optimal way for General Counsel to use AI is to strike a balance. Using AI to assist, not to replace, human expertise should be the mantra. Lawyers need to stay informed about AI's strengths and weaknesses so they can use it wisely and avoid its risks. It is also crucial to have clear ethical guidelines for using AI, including how to handle bias, protect privacy, and keep decisions transparent.
In the end, the lesson is simple: using AI blindly is not the solution. The balanced coexistence of human intelligence and artificial intelligence is the only way forward.