Podcast

We Get AI for Work™: Is Your Tool Really AI?

Details

November 6, 2025

Employers face a patchwork of federal, state, and local laws, each with its own definitions and requirements for AI technologies in the workplace. Understanding these legal nuances and proactively evaluating each tool’s function before deployment are essential for staying compliant and minimizing liability.

Transcript

Eric Felsberg
Principal, Long Island 

Well, hello everyone, and welcome to our latest episode of We Get AI for Work™. My name is Eric Felsberg, and as always, I'm joined by my colleague and friend, Joe Lazzarotti.

Joe, we have an interesting episode today that, on first pass, may seem a little basic, but it's actually really important. The question we're going to talk about today is: is my AI tool actually AI? It may seem simple, but it could have critical implications as you think about your compliance plan for AI in the workplace.

Joe, any thoughts to get us started?

Joseph Lazzarotti
Principal, Tampa

First, good to see you, Eric. I hope all is well. There's a lot to think about with this question, both from the standpoint of what the technology is or isn't and what different pockets of the organization believe it is. People in IT may think about it differently than people in HR or people in marketing. Then there's what the law says AI is. From our standpoint and in our world, that's what matters a lot of the time. It's something you really have to think through in answering that question, because, as you said, it could have pretty significant consequences.

Felsberg 

A lot of the technology we use on a daily basis, and probably don't give all that much thought to, is AI. The question becomes: is that the AI we're concerned with? Is that the AI that regulators are seeking to regulate? When I'm answering questions about AI from clients, a lot of times, at least in the current environment, what clients are interested in knowing, and you touched on this in your comment, is what do I need to worry about from a legal perspective? I'm using this tool, maybe as part of an employee or applicant selection function. This AI promises to help streamline our process, make us more efficient, and allow us to vet many more applicants than we otherwise would without the tool. What do I need to worry about from a legal perspective? I think about it from the federal perspective first.

From the federal perspective, one of the things I often think about is whether, under Title VII of the Civil Rights Act, there are potential disparate impact issues that I have to worry about. Maybe there will be an obligation to have this tool validated. As I'm thinking about answering these questions, I think about things like the Uniform Guidelines on Employee Selection Procedures, which are still good law today even though the document was written back in the seventies. What they say, in a nutshell, is that whenever you have any selection mechanism, you must monitor it for disparate impact. If you identify statistical evidence of a disparate impact, you need to have that tool validated. Again, when these were written, the drafters were probably thinking about cognitive ability testing or physical testing, like the ability to lift a certain weight, as part of the consideration for a job. They still apply today.
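One common first screen for that kind of monitoring is the Uniform Guidelines' "four-fifths rule" (29 C.F.R. § 1607.4(D)), which compares each group's selection rate to the rate of the most-selected group and treats a ratio below 80 percent as a flag for further analysis. The sketch below is a minimal illustration of that arithmetic in Python, using purely hypothetical group names and applicant counts; a ratio below four-fifths is only a screening signal, not a legal finding, and real analyses typically also involve statistical significance testing.

# Minimal sketch of the four-fifths rule screen from the Uniform Guidelines.
# All group names and applicant counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical results from an AI-assisted screening step.
groups = {
    "Group A": selection_rate(selected=48, applicants=100),  # 0.48
    "Group B": selection_rate(selected=30, applicants=100),  # 0.30
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flag = "review further" if impact_ratio < 0.8 else "passes 4/5 screen"
    print(f"{name}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")

In this hypothetical, Group B's ratio is 0.30 / 0.48 ≈ 0.63, which falls below the four-fifths threshold and would prompt a closer look at the selection step.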

If we're using an AI tool to make selections, the Uniform Guidelines would apply. If you look at how they define what they're trying to regulate from a selection perspective, it's really pretty vague; paraphrasing, it's essentially any mechanism used to select employees. So at the federal level we have this very broad definition of what could potentially fall under it, and we have to keep that in mind as we're speaking with employers about using some of these technologies.

Joe, there are a bunch of state laws and local laws that are coming on the scene, which approach this from a slightly different perspective. Maybe you can comment on that.

Lazzarotti 

There are lots of levels to this analysis. A lot of times, when we're talking to clients and colleagues, people are focused on generative AI tools, which are what most people are using. Now we're hearing a lot about agentic AI, and then there's the more traditional machine learning AI that's been around for a long time. That's one way to bucket the different types of AI and maybe put some definitions around them.

Then, to your point, you start looking at the federal level, you see it's really broad, and so you have to see what other frameworks you're working under. If you look at the EU AI Act, the New York City AI law, which Eric, you do a ton of work with, or the regulations that were just issued and finalized under the California Fair Employment and Housing Act, each has a definition of AI that may not exactly coincide with what's required in New York. If you're an organization in multiple states and you're thinking about rolling out some type of AI tool nationally, you're going to have to really look at those definitions and decide. In the Colorado law, for example, it's a tool that assists in making a decision, while in the California regs it facilitates a decision. Are those terms, which aren't themselves defined, going to wind up meaning different things in litigation? How do you know what they mean? Those are just some of the distinctions. Is it a substantial decision? How much involvement does the tool have compared to the human? All of these things play into the question of: do we have to comply with that law?

There are also other provisions in each particular framework. Another area where California has had some impact on AI regulation is the California Consumer Privacy Act. That law is only going to apply if the entity is a business under the statute. For example, you have to do business in the state and you have to control personal information. Then you have to meet one of three thresholds, one of which is gross annual revenue above a set amount; right now, the threshold for the prior year is $26,625,000. So you might have AI, but that law might not apply to you. It's really an exercise of going through and asking: if we're going to deploy this tool, is it AI or not under the law we're worried about, in a particular jurisdiction, for a particular use?

Felsberg

What you're describing is exactly right. It's challenging for employers because, as we think about using AI tools in the workplace, we are confronted with a patchwork of laws around the country. Right now, it's somewhat manageable: even though AI is getting a lot of attention, for the most part it's really only a handful of jurisdictions, so it's a bit easier to keep track of which laws require what and whether our tool is subject to them. To your point, for each tool you have to examine each of the laws, understand exactly what function that tool performs, what types of tasks it handles, and how it does that. That may require discussions with your provider and the developer to determine exactly how the tool goes about helping to complete that task.

Again, the impact can be significant. You mentioned the New York City law, and my office is right outside of New York City, so a lot of times we'll be speaking with New York City employers about the New York City AEDT law. Why do we care whether we hit the definition or not? Because if we do, we may have to prepare bias audits. We may have to, in New York City, publish those bias audits on our career site for the world to see. We will have to issue certain notices: what is the content of those notices, and what obligations stem from them? Then we have to worry about what Colorado and California require, and the other states that are releasing their own laws. While we see some of the same themes emerging in a lot of these laws, there are nuances where a tool may be considered AI under one law but not another.

To the point you're making, we don't really have a lot of litigation or precedent to look at because these laws are so new. We're all in the same boat here, trying to feel our way and do our best to come into compliance and stay there. This is a significant issue. As I mentioned at the outset, it may seem like a really simple question, is this tool really AI, but it's a critical decision point for employers as they think about rolling out some of these technologies.

Lazzarotti 

Yeah, and there are two additional points I would make. One is that there are issues specific to the context in which an organization acts as an employer. A business could decide, with its employer hat on, that this law might apply to us because we're making a significant, employment-related decision. However, that same technology applied in a different use case, like deciding which tools to sell in a particular market, or what have you, may have no implications. The same law just doesn't apply, because that activity isn't the kind the legislature in that state decided was a significant decision, like a healthcare decision or a housing decision, which are other areas you tend to see in these AI laws. That's one.

The other thing is, I'm seeing a lot of clients that serve business customers and that may have agreed, by contract, to certain limitations on the technologies they use in providing those services. In some cases, that's AI. What are you saying in that contract? Is use prohibited, or are there particular controls in the agreement on the technology you plan to use in performing it? Understand that the contract's definition might be very different from how the laws in New York City, California, or other states define AI; it's simply what you agreed to by contract. Thinking about the organization as a whole, there are all these different pockets where AI can have some meaning. It's really important to understand when you're using certain technology and how the applicable circumstances could affect whether there are rules governing that use.

Felsberg

Absolutely. One last thing around this is that you have to think about these issues before you go live. Oftentimes there's a lot of excitement, maybe within a particular business unit, whatever it may be, that sees the clear advantages AI can provide. They get excited about the efficiency, and there's often a rush to implement right away because of the impact the tool may have on the business. It is really important to have a gatekeeper there to think about these issues before going live, because it's always easier to understand exactly the road you're going down, from a compliance standpoint or potentially from a liability perspective, and what types of risk you're encountering, if you do that assessment beforehand.

Joe, this is, as always, a great discussion with you. I'm sure we could chat about this for an extended period of time, but we won't for now. For our listeners, we hope you found this discussion helpful. If you have any questions or would like us to cover a certain AI-related topic, please do not hesitate to contact us. We have a dedicated email address at AI@JacksonLewis.com. Thanks again for listening.

 

© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome. 

Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,000+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.