Podcast

2023 Mid-Year Report: Ethics of Artificial Intelligence


July 24, 2023

Transcript

Alitia Faccone:

No matter the month or year, employers can count on one thing: changes in workplace law. Having reached the midway point of the year, 2023 does not look to be an exception. What follows is one of a collection of concise programs, as We Get Work™, the podcast, provides the accompanying voice of the Jackson Lewis 2023 Mid-Year Report, bringing you up-to-date legislative, regulatory, and litigation insights that have shaped the year thus far and will continue to do so. We invite you and others at your organization to experience the report in full on JacksonLewis.com, or listen to the podcast series on whichever streaming platform you turn to for compelling content. Thank you for joining us.

Rebecca Ambrose:

Hello there and welcome to our podcast on the 2023 midyear report on ethics and the use of artificial intelligence in the practice of law. My name is Rebecca Ambrose. I am a principal and the deputy general counsel here at Jackson Lewis. And Scott, do you want to introduce yourself?

Scott Ruygrok:

Yes. Hi, I’m Scott Ruygrok. I am a principal in the Orange County California office.

Rebecca Ambrose:

Great. So Scott, since you’re a practicing lawyer, do you want to kick us off by talking about how you see AI being used in the current practice of law?

Scott Ruygrok:

Yeah, absolutely. First and foremost, I’m running into some attorneys who are either underestimating the importance of AI in our industry or potentially overestimating its abilities, and I think both have dangers. On the underestimating side, AI is here to stay and it’s moving forward. So when it comes to ethics, this is not a discussion of whether it’s ethical to use it at all. It’s going to be a discussion of how we can use it: how are we going to use it to best serve our employees and our clients, and how are clients using it to best serve their employees in a legal and ethical manner, not whether or not they’re going to be using it.

Then the other aspect is overestimating its ability, and that’s where we’re running into pitfalls of attorneys maybe not giving the right amount of diligence for their clients and over-relying on the intelligence of what is effectively software, expecting that because we asked it a question, what it outputs is always truthful, is not hallucinated, is accurate, and can simply be forwarded to clients. I don’t think that’s doing our due diligence as attorneys. I don’t think that’s best serving our clients. So that is where I am most focused in my practice: trying to use it in a way that is helpful for clients, but is accurate and useful. And that’s where I am relying on my skills as an attorney to ensure the accuracy and the ethical application, both for clients in the legal sense and also in the output and the product that clients are then using with their employees.

Rebecca Ambrose:

That makes sense. I agree. I think there are pitfalls to both approaches that you just highlighted in how lawyers are looking at the use of AI in the practice of law, and we really have to focus on how we use these tools in a manner that is consistent and compliant with our obligations under the rules of professional conduct. One area where we’ve seen lawyers struggling is how to apply the rules of professional conduct that exist to govern our conduct as lawyers to the use of AI. As of the date of this recording, I don’t think any ethics body, like the ABA or any state bar association, has issued an ethics opinion yet on the application of the rules of professional conduct to the use of AI in the practice of law.

So I think lawyers are confused as to where they should look for guidance on this topic. But it’s important to remember that the rules of professional conduct were drafted very broadly, so they’re meant to apply to a whole host of circumstances. In the past, as lawyers have adopted new and emerging technology, first with the use of email, then smartphones and storage in the cloud, we’ve seen the ABA and other ethics bodies apply the rules of professional conduct to those technologies. I think ultimately we will see ethics opinions issued on the use of AI in the practice of law, and my guess is that many of the rules that have been highlighted in the past will be applied to AI as well. So maybe we can talk about those rules, Scott, in a little bit more detail, and you can give some examples of how you think lawyers can act in accordance with their ethical obligations.

Scott Ruygrok:

Absolutely.

Rebecca Ambrose:

All right. So maybe we can start with the duty of competence, which is found in rule 1.1 of the rules of professional conduct. It’s important to remember before I dive in that we’re all familiar with the Model Rules of Professional Conduct, which are promulgated by the American Bar Association, but every jurisdiction has adopted its own version of the rules of professional conduct based on the model rules format, and the rules do vary between jurisdictions. So anytime you’re looking at your own ethical conduct, make sure you’re familiarizing yourself with the rules that are in place in the applicable jurisdiction. Going back to rule 1.1, which is our duty of competence: we, as lawyers, have an obligation to provide competent representation to our clients. That means we have to have the legal knowledge, skill, thoroughness, and preparation that are reasonably necessary for the representation.

Back in 2012 or thereabouts, the ABA modified its comments to rule 1.1 to include a reference to the fact that, in order to maintain the requisite knowledge and skill to meet your obligations under rule 1.1, lawyers have to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. So really, to meet your obligation under rule 1.1, you have to understand both what technology is available to assist you in providing legal services to your client and the risks and benefits of using that technology, and you have to have a basic understanding of how that technology works. So Scott, do you have some thoughts on how this may come up as lawyers use AI, maybe to help draft briefs, things along those lines?

Scott Ruygrok:

Yeah, I think this is a perfect example of overestimating its ability. Currently, you can’t ask AI for an answer to a question and automatically trust that it’s going to be legally accurate. We’ve seen situations where lawyers have used it to write a brief, trusting that every citation is accurate and real. One of the wonders of AI is its ability to make its output look real even when the cases don’t exist. I have had associates look into research for me who keep repeating a case that doesn’t exist. As a supervising attorney, it’s really my responsibility to look into those cases: Does it exist? Is it accurate? And then to use my knowledge to do a litmus test of whether it’s correct, and not just trust it on its face. I think that goes to true competency as a lawyer and our diligence in working with AI.

Rebecca Ambrose:

And I think you raised a really good point there too. Your obligations may differ depending on whether you’re a very junior lawyer or a more senior supervising lawyer. As a junior lawyer, you really have to understand that you have to double-check everything the AI is telling you. You can’t rely on it, especially if you’re not familiar with that area of law and you ask a legal research question, for example, and it spits something out. You can’t just rely on that answer as being 100% accurate without doing some independent digging to make sure that what it’s telling you is correct. That goes to both competence and diligence.

But then when you’re in a position like you are, Scott, supervising other attorneys, you have to use your gut in a different way. If something smells off, like the case you’re describing, then you have an obligation to make sure that the lawyers working under you are complying with your ethical obligations as well. So you do what you’ve been doing: pushing back on a case that doesn’t seem right to make sure they’re using the AI properly.

Scott Ruygrok:

And as a first-year associate, way before AI, I had senior partners looking at me quizzically and saying, are you absolutely certain about that decision, or about what you’re telling me? I think we all wish we could just type the question into a search engine and trust the first link that popped up, but it really takes that extra step of doing the actual research, doing the underlying diligence, whoever is doing the research and at whatever level, to really confirm it before relying on it.

Rebecca Ambrose:

Absolutely. So for competence, that goes to double-checking the results of the AI, and the same thing applies to diligence. I think there are also big confidentiality concerns. Under the rules of professional conduct, rule 1.6 specifically, lawyers have an obligation not only to not knowingly reveal their clients’ confidential information, but also to take steps to make sure that a client’s confidential information is not inadvertently disclosed and that they don’t allow unauthorized access to it. Confidentiality concerns can come up in various ways in using AI, both with publicly available AI and with the products that are being developed specifically for lawyers to use. Scott, do you have any thoughts on that?

Scott Ruygrok:

Yeah, I think this goes back to our obligation as attorneys to understand how these AI tools function. What am I telling the AI, and what level of confidentiality is that potentially breaching? Asking AI whether employers have to offer meal and rest periods in California is very different from asking whether my client XYZ Corp. has to offer meal and rest periods to its California employee named Scott. One of those is feeding information to an AI that is going to learn from it, and understanding what we are telling, what we are feeding into, these programs is going to be important for us as attorneys. As we see products develop, as we see new forms of AI develop, I think there are going to be new safeguards built in. But until that happens, there are going to need to be extra efforts by attorneys to scrub anything we’re using and to not inadvertently disclose information.

Rebecca Ambrose:

Yeah, certainly. And if you’re using a vendor to provide an AI tool to you, as a lawyer you probably have an obligation to do your due diligence on that vendor: Where is the information I’m putting in being stored? Who has access to it? What security measures has the vendor taken to make sure that information is protected? Things along those lines. So it’s always a good idea to act in a manner that is consistent with your duty of confidentiality, but particularly so with this new technology, where we’re all grappling with what exactly it does, how the information is stored, who has access to it, and how the AI is learning from it. Just keep those thoughts at the forefront of your mind. The final rule I wanted to mention, which is interesting and loops us back into the things we’ve already discussed, is rule 5.3, under which lawyers have an obligation to supervise non-lawyer assistants.

And I always like to talk about this rule a little bit, because it used to be that lawyers had an obligation to supervise non-lawyer assistants, meaning physical human persons. But back in 2012, the ABA changed the title of this rule from “assistants” to “assistance” to suggest that the rule applies more broadly, to humans and non-humans alike. So you have an obligation to make sure that the assistance you’re getting in providing legal services to your clients comports with your own ethical obligations under the rule. If you’re using this type of software and these products, you are going to have an ethical obligation to make sure that how they’re being used, and the information you’re getting from them and giving to them, all complies with your own ethical obligations. All right, Scott, where do you think we’re headed with AI? Where do you see this going?

Scott Ruygrok:

I think the initial time savings are going to be immense, but I don’t think it should necessarily be seen solely as a time saver. Frankly, I’m finding that projects might take the exact same amount of time, but that time is better-quality time. AI has sort of become this always patient, always available colleague that helps me get a faster initial draft, those initial ideas put onto paper, so I’m not staring at a cursor. Then we have that time to wordsmith and to think about the correct wording. In the context of employment law, every word really matters if you are sending an employee a termination letter or a discipline letter, or if you are denying a reasonable accommodation based on a disability. So that flexibility, that time that AI is providing us, is allowing a better work product. Moreover, after three years of law school and 11 years of practice developing a level of cynicism, I am finding that AI is making me more human than I might ever be.

So finally, lawyers might actually be humans. Because if you work with AI, you can ask it for tone, you can ask it for feedback and revisions, for sensitivity and for thoughtfulness. We’re so focused on the legalese, so focused on the technicalities, that we lose the human perspective on how to talk to employees and how we should be interacting with our workforce. Using this is now challenging me, and should also challenge all of us as attorneys, on the language and how we’re working with third parties and with employees. Oddly enough, it’s AI that’s making us more human, but I think that’s a valuable aspect of AI in the legal field. Of course, beyond that, we’re going to see it being used in summarizing, in case research, and things like that. But I think one of its most valuable uses is helping us be better attorneys, not just faster attorneys.

Rebecca Ambrose:

Love that. I love leaving on a positive note. So we’ll end there. Thanks so much for joining me today, Scott.

Scott Ruygrok:

Oh, my pleasure. Thank you.

Alitia Faccone:

Thank you for joining us on We Get Work™. Please tune in to our next program, where we will continue to tell you not only what’s legal, but what is effective. We Get Work™ is available to stream and subscribe on Apple Podcasts, Google Podcasts, Libsyn, Pandora, SoundCloud, Spotify, Stitcher, and YouTube. For more information on today’s topic, our presenters, and other Jackson Lewis resources, visit JacksonLewis.com. As a reminder, this material is provided for informational purposes only. It is not intended to constitute legal advice, nor does it create a client-lawyer relationship between Jackson Lewis and any recipient.
 

© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome. 
