Meeting the Challenges of Artificial Intelligence
The evolution of technology can be both exciting and foreboding. We are constantly adjusting to new innovations, and those who refuse to take up the challenge will miss out on great opportunities.
We are witnessing artificial intelligence (“AI”) develop at lightning speed, performing tasks far more quickly than we can. And while it is a boon to the legal profession, allowing us to serve our clients with greater efficiency, there is also a dark side. Goldman Sachs estimates that 44 per cent of legal work tasks could be automated by AI in the coming years.
Artificial intelligence has the potential to replace lawyers who will not adapt. It is a warning that should be taken seriously.
Founding Father of AI.
The dominoes have been falling since 1950, when Alan Turing, considered the founding father of AI and modern cognitive science, proposed that a computer could be said to possess artificial intelligence if its responses were indistinguishable from a human’s.
There has been much debate about whether AI has passed the Turing test, but one thing is certain: technology is advancing at a breathtaking pace. For some, it is a concern. Geoffrey Hinton, widely regarded as the godfather of AI, resigned from Google with a chilling prediction about AI chatbots.
“Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be,” he told the BBC. “Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.
“And given the rate of progress, we expect things to get better quite fast. So, we need to worry about that."
Hinton’s point is that if advances in this technology are left unfettered for too long, we may reach a stage where we cannot dial it back.
Growing Into Technology.
Technology has historically been known to over-promise and under-deliver. However, we are now at a point where advances can occur much more quickly than society can absorb them.
We have always grown into our technology. The car replaced the horse, and we adapted over time. In 1997, IBM’s Deep Blue defeated Garry Kasparov, then the world’s number one chess player. But it can be difficult to accept the notion that AI can make humans obsolete.
There will come a time, in the not-too-distant future, when professionals will be required to consult their AI counterparts to ensure their decisions are sound.
Of course, AI is not something new or unique, and technology has provided some important innovations. In the legal industry, we rely on all sorts of databases to do research that was once physically draining and time-consuming.
For a fraction of what it once cost to do legal research, you can subscribe to a service such as Alexi, input your question, and get a draft of a factum overnight. In addition, we use practice management software such as Filevine to automate our tasks in a given file, and we rely upon the application’s AI Fields to instantly summarize documents.
Soon, lawyers will be able to talk to a bot on their computer that fulfills a role similar to an assistant or junior lawyer. Eventually, AI systems will mimic the skill set of senior, experienced lawyers, offering a vast amount of information and analysis. Lawyers will be loath to make a decision without it.
The buzz right now is on generative AI, a form of machine learning that is able to produce text, video, images and other types of content.
ChatGPT is a natural language processing tool that allows users to have human-like conversations with a chatbot. It can answer questions and assist in tasks such as composing emails, essays and code. Many consider it a technological marvel, but its value depends on how, and for what purpose, you use it.
A U.S. lawyer was recently sanctioned after submitting to the Court a legal brief that contained fake judicial opinions and legal citations, all generated by ChatGPT. He wanted to put together his materials quickly and turned to AI. The result was a very persuasive-sounding brief for the court. But when opposing counsel and the judge started scrutinizing it, they could not find any of the cases cited.
The danger, as demonstrated here, is that a bot does not know the limits of your request. If you ask for a persuasive brief, that is what you will get. But the bot does not necessarily differentiate between a legally sound brief and a work of fiction. It will likely produce some good creative writing, but not something that is necessarily factually accurate or legally supportable.
Rules of Professional Conduct.
As lawyers, we are bound by the rules of professional conduct. If you submit anything to the Court under your name – whether it was written by someone else or by a computer – and something is awry, it is your reputation that is at risk.
Shortly after the U.S. mishap, Manitoba Chief Justice Glenn Joyal issued a practice direction stating that while artificial intelligence might be used in Court submissions, lawyers must now disclose if they have used AI to prepare Court documents.
"While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in Court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence," the practice direction states.
Along with the concern about fictional filings or fake opinions, there is also the possibility that the general public will forgo legal advice and use AI instead to build their case. Part of a lawyer’s role is to make a judge’s life easier. But if people are self-representing using AI and filing questionable memos, briefs and claims, judges may be forced to do more work to determine whether the case submissions have legal merit and whether case, statute or other authoritative source citations are legitimate.
The Manitoba practice direction can be seen as a safeguard. If someone is going to use artificial intelligence to prepare a factum, they will be required to tell the judge how they sourced their work and whether references and the like have been ‘humanly’ verified.
However, that may be going a bit too far. We have been using technology for many years in researching legal issues, summarizing records, preparing and indexing documents, and all other manner of file management, legal work and case prosecution. Now a judge wants to know what technology you used.
Some lawyers may not want to reveal all the software they use, for proprietary reasons. It is almost like requiring Coca-Cola to disclose to the judge the secret recipe behind the brand. Others have in-house IT experts who have created their own specialized practice management tools.
Another concern is that identifying the technology that touched each aspect of a case or its preparation for a hearing can become a make-work project that defeats the purpose of using time-saving technology in the first place. It may deter lawyers from using the technology at all, which is a step backward.
The problems and challenges AI presents underscore the importance of learning how to use this technology properly and responsibly.
We should expect to hear more on the subject from the bench, but it takes time. Courts are not likely to get ahead of the game, and that is to be expected. While we can anticipate certain shortfalls, it would be difficult to deal effectively with the fallout from artificial intelligence before it happens. Of course, by the time a judge rules on a particular issue, the technology will have advanced even further. Technology is moving much, much faster than we can react – not just for the Courts but for all of society.
Facing the Future.
There is a danger in jumping on the AI bandwagon believing that this is exactly what the legal profession, the practice of law and law firms need. There is a misguided belief in some quarters that technology will reduce all our expenses and that we can simply rely on AI bots to do the bulk of our work.
The problem is that if someone must diligently verify, if not certify, the finished product, it is almost like redoing the work, so you are no further ahead.
Futurists such as Ray Kurzweil predict a time when humans will have to adapt and be physically connected to their technology to avoid becoming redundant. It is exciting to imagine having unlimited brain potential, but this brings with it an equal measure of fear about what it would mean for our independence and creativity as thinkers and advocates. Hopefully, that future is many generations away.
Preparing for Change.
Change is constant, and that is especially true of programs designed to free us from tedious, time-consuming tasks.
I believe that 20 years from now, artificial intelligence will be so advanced that the concern will shift to the jobs it displaces.
I am worried for the future of young legal professionals. Senior staff are relatively secure because their judgment and experience are hard to replace. I am thinking instead of entry-level employees who take on the repetitive administrative tasks, research and other duties that make our jobs as lawyers much easier. These responsibilities could soon be the domain of AI.
Young professionals need to be alert to what artificial intelligence can provide. They need a technology dimension to their role to enhance their worth to their employers, their clients and the Courts. If they simply attempt to compete with AI, they are likely to lose.
Tomorrow’s job candidates need to be bringing the technology to the lawyer and saying, “I can use these tools for you.” Providing that added value means keeping up with the new technology and using it, not as a replacement for human thinking and creativity but as a catalyst for both. If they are unable to do that, surviving the wave of AI will become their overriding challenge.