Podcast

We get AI for work™: Where to Start When Evaluating AI Tools

Details

December 2, 2025

Although it is tempting to rush to implement the newest AI tools, taking inventory of the tools your organization uses, identifying which laws you are subject to, and determining which obligations flow from those laws are all critical steps to maintaining legal compliance.

Transcript

Joe Lazzarotti

Principal, Tampa

Hello, everyone, and welcome to our latest episode of We get AI for work. My name is Joe Lazzarotti. I have the pleasure of being here with my partner and friend, Eric Felsberg. Eric and I are leaders of the firm's AI practice, dealing with governance, bias audits, privacy and security, and a whole bunch of other issues that our clients are facing with regard to AI. 

Today is going to be a pretty interesting episode, and it ties into a lot of other episodes that we've done and topics that we've written about. Today, we're going to talk about just some high-level issues around how organizations manage the compliance obligations that they have with regard to AI. There's a complex array of federal, state, and local laws that have just really started to come online. There will be more coming down the road for sure. 

In terms of where we get started with that, what are some of the key things to think about, some strategies to tackle this behemoth? Eric, where do you think an organization might want to get started?

Eric Felsberg

Principal, New York City

Good to be with you, Joe, as always. A lot of times, you'll be presented with an AI tool by an internal resource. Somebody will say, "Hey, I've learned about this particular AI tool. I think it could streamline our business in the following ways, and we could be much more efficient if we implement this." At that point, the answer shouldn't be, "Great, let's implement." The first thing you need to do is fully understand what that particular tool is accomplishing. In other words, what type of task is it performing? How is it doing that? You may not get into the details of exactly how the algorithm is operating, but due diligence may obligate you to have a discussion with the developer or vendor who's providing the tool. Any documentation they may have, technical or otherwise, certainly should be reviewed.

Once we feel good about this tool, that it's performing what it purports to perform, and we can see the value and the impact it will have on our workplace, you have to start to think about what laws we need to worry about or concern ourselves with from a compliance perspective. Where I always like to start is at the top, with the federal obligations.

The landscape right now, as we sit here in October of 2025, is a little bit different from what we've seen in the past. One of the primary concerns is the use of AI tools in selection decisions. If you're using a tool as part of hiring, promotion or otherwise, from the federal perspective you have to think initially about Title VII of the Civil Rights Act, which is one of our chief anti-discrimination laws in this country. What Title VII means in practice is that you, as the user of this tool, have to monitor whether the recommendations or selections the tool is producing as part of its output have a disparate impact on certain individuals or groups of individuals. You have to monitor the output of the tool for impact, and you're going to have to do this repeatedly for as long as you're using that tool.

If a disparate impact is identified from a statistical perspective, there's something known as the Uniform Guidelines on Employee Selection Procedures, a federal document written back in the 70s. The Uniform Guidelines require that once you identify an impact, you have to have the tool validated. In a nutshell, that means having somebody with the technical experience, like an industrial-organizational psychologist, review the tool to ensure that it is assessing qualities of candidates that are necessary for the successful performance of the job. It's much more technical than that, but that's the general gist. The validation study, which is the output of that process, can be used as part of a defense if you should later face a claim. It will also give you some peace of mind that the tool is actually operating as it purports to operate; in other words, it's doing what it's supposed to be doing. That's where I start, and then from there I jump into the state laws.
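To make the statistical screen concrete, here is a minimal sketch of the four-fifths ("80%") rule, the rule of thumb the Uniform Guidelines use to flag potential disparate impact. The data and group labels below are hypothetical, and a real analysis would typically add statistical significance testing and involve counsel.

```python
# Minimal sketch of a four-fifths ("80%") rule check under the
# Uniform Guidelines. All data here is hypothetical; a real bias
# analysis would add significance testing and legal review.

def impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate.

    `groups` maps a group label to (selected, applicants). A ratio
    below 0.80 is the Uniform Guidelines' rule-of-thumb indicator
    of potential adverse (disparate) impact.
    """
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical output of an AI screening tool for one hiring cycle.
outcomes = {
    "group_a": (48, 100),  # 48 of 100 applicants advanced by the tool
    "group_b": (30, 100),  # 30 of 100 applicants advanced by the tool
}

for group, ratio in impact_ratios(outcomes).items():
    flag = "  <-- below 0.80, review for adverse impact" if ratio < 0.80 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

In this hypothetical, group_b's selection rate is 0.30 against a top rate of 0.48, giving an impact ratio well under 0.80, which is exactly the kind of result that, under the Uniform Guidelines, would trigger the validation work described above.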

Before I do that, Joe, I don't know if you have any further comments on the federal landscape at the moment.

Lazzarotti

That makes a ton of sense for navigating the law applicable to a given tool. In your case, you used the example of making a decision about an applicant or an employee. I'm going to pull the lens back a little bit, in terms of understanding compliance, because a lot of our clients have multiple tools. They have a lot of different ways that they're trying to introduce AI, whether it's to facilitate employment decisions, streamline business practices, stall attacks on their information systems, or manage their fleet. There are a lot of different applications. One of the things I'm seeing is that clients are facing the problem of how to manage all of these different technologies, whether they're some type of software, vendor app, or device that leverages AI. A whole plethora of laws might apply depending on the use case for a particular technology.

If you're just using one AI tool in the employment context, you might jump right to the applicable law. But if you have a lot of tools, one step is just understanding, as an organization, what HR is doing, what the marketing team is doing, and what the IT folks are doing. Let's get a handle on that and inventory those tools. Then, if we're trying to manage compliance, figure out: who's managing it? Who's making the decisions? Who comes to the table to help make those decisions? How do we understand what the risk tolerance is? That's going to be part of the governance, risk, and compliance that surrounds a strategy for utilizing AI in the organization. Then, before you select a tool, you want to decide whether you're going to buy it as opposed to build it. Maybe you want to think about which vendor to work with. How do we assess that vendor from the standpoint of price, service, quality, privacy, and security? And then, of course, there's the tool itself and how it works.
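The inventory step is concrete enough to sketch. Below is one minimal, hypothetical shape for an AI tool inventory in Python; the fields and example entries are illustrative assumptions, and many organizations track this in a spreadsheet or GRC platform instead.

```python
# One minimal, hypothetical shape for an AI tool inventory. The point
# is capturing owner, use case, data touched, and legal exposure so
# governance can spot gaps. Fields and entries are illustrative only.

from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str             # tool or vendor product name
    owner: str            # accountable business unit
    use_case: str         # what the tool actually does
    data_used: list       # categories of data it touches
    build_or_buy: str     # "built" in-house or "bought" from a vendor
    laws_in_scope: list = field(default_factory=list)  # filled in during legal review

inventory = [
    AIToolRecord(
        name="ResumeRanker (hypothetical)",
        owner="HR",
        use_case="Scores and ranks job applicants",
        data_used=["resumes", "assessment scores"],
        build_or_buy="bought",
        laws_in_scope=["Title VII", "NYC Local Law 144"],
    ),
    AIToolRecord(
        name="ChatGPT ($20 individual accounts)",
        owner="unknown (shadow AI)",
        use_case="Drafting, summarization",
        data_used=["potentially confidential business data"],
        build_or_buy="bought",
    ),
]

# Surface the gaps governance needs to close first.
for record in inventory:
    if record.owner.startswith("unknown") or not record.laws_in_scope:
        print(f"Needs review: {record.name}")
```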

Then, if you make that decision and wind up saying, this is a technology we think is going to work for us and produce the results we need, you have to think about your compliance obligations. I'm sure we'll cover this, or have covered it, on another episode: how do we think about whether this particular AI is subject to some type of law or some type of obligation we have? To me, that's where I get into looking at the federal laws first. Then let's look at who's going to be impacted by these tools and where they're going to be used, and try to understand what we have to worry about when doing business in those states. Do we come under the purview of a given state law? If so, let's get into how that law applies, how it stands up against laws in other states, and how best to deploy the technology in a compliant way. It's going to be a challenge, and clients are really going to struggle with this, particularly when they're working across different silos, or what used to be silos, in their organization, because a lot of different disciplines have to come together to make some of those decisions.

Felsberg 

I agree with that. You mentioned something that made me grin: take an inventory. I enjoy that when we speak with clients, the conversation a lot of times will start with, "I'm not aware that we're using any AI." Usually, my response is, yeah, right. There's AI being used; it's your job now to go out and figure out what's out there. So I agree with you absolutely in terms of an inventory.

To pick up where you left off, there are a lot of state and local laws popping up around the country. The examples I was giving were around employee selections. There is a consistent drumbeat now across the country, with places like New York City, Colorado, California, Texas, and other states, such as Illinois, all looking to regulate the use of AI. While these laws can reach AI more broadly, as you were suggesting, a lot of them are really focusing on the issues of bias and transparency. Just to pick up the example I was using, if an AI tool is being used in your business in an employee or applicant selection role, Joe, you're exactly right: depending on the nature of the tool, you may or may not be subject to all of these laws; you may be subject to a subset of them. If you are subject to them, there is a lot of emphasis on transparency. In other words, in my example, give the job seeker or the candidate notice that you're using AI as part of your selection mechanism. In at least one jurisdiction, we may even have an obligation to provide essentially an opt-out. The notice obligations themselves are also somewhat prescribed as to content.

Then, of course, there's the issue of bias audits, which goes back to where I started; a bias audit is really tantamount to a disparate impact analysis. In New York City, for example, if you're subject to the city's law regulating what it calls AEDTs, automated employment decision tools, you would have to publish that bias audit. Picking up on what you were mentioning: understand exactly which tools are being used out there, or which tools the business wants to use in the future. Hopefully, you're having this conversation before any of these things go live, but it doesn't always work out that way. Figuring out which tools are in play, which laws you're subject to, and then which obligations flow from those laws, those are really the critical parts of this, and I would say certainly among the most difficult.

Lazzarotti

To add to that, speaking of difficulty, I know a big challenge for managing compliance is shadow AI: which employees are using AI that the company doesn't really know about? I was talking to someone at a conference, and they were telling me that they saw some employees visiting ChatGPT on the company's systems, and they began to inquire about it. Someone said, "Yes, I just got the $20 account, and I do a ton of work with it." It was like, wow, what are you sharing and how are you using it? They had to quickly begin to put some guardrails around that, because they found that many employees were doing the same, and a lot of issues arose.

The other key issue is that we are at the very beginning of a significant amount of regulation around this technology, and just being able to keep up with it is hard. I have to give a presentation down here in Tampa to a bunch of legal professionals on just what the strategy is and how you keep up with all of it. I don't know that there's a clear answer, though there are a lot of resources out there. Part of any compliance management plan is trying to track the new laws, requirements, and contractual obligations that begin to apply to the organization, whether on the business side or the employment side. It's a bear for anyone who's charged with this responsibility.

Felsberg 

Just to give a plug to one of our other episodes, on AI policies: a lot of the risks and considerations you just mentioned can often be addressed, and at least have some guardrails put in place, in the form of an AI policy. A lot of times when we talk about this stuff, it's not that your employees are going out of their way to make your life difficult; oftentimes, they just don't know any better. A policy can demystify the use of AI, provide some guardrails around its use, and also provide an avenue where employees can come forward and ask questions like, "I heard about this new AI tool; can we use it?" I don't want to go into too much detail now because we have another episode on that, but a policy is also a critical thing to have in place to help control and mitigate some of these risks.

Lazzarotti

That makes a lot of sense. This was a great discussion, and this is a good place to wrap it up. For our listeners, we hope that you found this helpful as you begin to manage your own use of AI technologies, whatever they may be. If you have any questions or would like us to cover some particular AI topics, please reach out anytime. We have a dedicated email for that: AI@JacksonLewis.com 

As always, Eric, it was a pleasure. Thank you all for listening.

© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome. 

Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,100+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.