
2023 Mid-Year Report: AI Update


June 22, 2024


Alitia Faccone:

No matter the month or year, employers can count on one thing: changes in workplace law. Having reached the midway point of the year, 2023 does not look to be an exception. What follows is one of a collection of concise programs, as the We Get Work™ podcast provides the accompanying voice of the Jackson Lewis 2023 Mid-Year Report, bringing you up-to-date legislative, regulatory, and litigation insights that have shaped the year thus far and will continue to do so. We invite you and others at your organization to experience the report in full, or listen to the podcast series on whichever streaming platform you turn to for compelling content. Thank you for joining us.

Eric Felsberg:

Thank you for joining us. My name is Eric Felsberg. I’m a principal in the New York metro region of the firm and co-lead the firm’s AI services team along with my colleague Joe Lazzarotti.

Joe, how are you today?

Joe Lazzarotti:

Doing good, Eric. Doing good.

Eric Felsberg:

All right, great. Well, Joe, looking forward to chatting with you today about AI 101, or Artificial Intelligence. I think both of us would agree that this is an exciting topic, not only for you and me personally, but for all of our listeners, I would suspect, only because it’s one of the most rapidly emerging issues impacting the workplace. Technology is developing at a lightning pace, and interestingly enough, the regulators who are chasing after the issue are rushing to catch up with some legislation. And we’ll talk more about that in a few minutes, but I thought it’d be interesting to start with what is AI, right, Joe? We’ve all heard basic definitions that talk about predictions using data, machine learning, natural language processing, but the evolving set of laws and legislation that we’ve seen may have different definitions of AI. And one of the things that is critical for employers to think about is, what law are you thinking about? Which one are you determining whether you’re subject to, and how does that particular legislation define AI?

So first things first, right? Employers, we have to think about what is AI. So Joe, what do you think?

Joe Lazzarotti:

Well, I think you hit on it a bit in terms of machine learning and using computers and data to process through algorithms that provide an output on which companies, organizations, and businesses can make decisions. And we’re seeing a lot of that with the New York City law. Companies are evaluating data and using machine learning and these types of algorithms to provide some rating on candidates for a position. And the hope is that that helps them make those decisions.

And there’s obviously all different types of applications of that kind of technology. We’re seeing that with ChatGPT and similar models available to employees, which seems to have gotten a lot more attention because it’s so easy to use and people are amazed at how the output gets put together. But AI technology is also used in wearables that can help determine whether a person is doing their job in the safest manner, or in cameras that can determine whether someone else is in the room or whether the person there is falling asleep, and so on. So there’s all these types of applications and devices that are using this type of technology to help analyze the situation, analyze data, and then provide some output that can provide alerts, can rate candidates, and so on.

Eric Felsberg:

Yeah, I mean, it seems there’s no end to what technology’s capable of helping us do. The issue of AI, we hear about it. You and I were talking the other day about how there seems to be an article every morning when we get to our desktop about how AI is somehow impacting the workplace.

And I think the one thing that we think about, and I know that both you and I, as well as a number of our colleagues, have received a lot of questions from our clients about, is how do you manage this process? How do you implement and roll out AI in the workplace? What are the steps that we have to take? And I think we agree that employers certainly must have a process in mind. So thinking about issues such as, why do we want to use AI, and what are the use cases? Who in our organization is going to be permitted to use AI? Who’s going to be the gatekeeper? Who’s going to monitor that technology to ensure that it’s appropriately used, and that we’re aware of the information being leveraged to access the AI? And there’s a whole host of related issues that, candidly, we just don’t have a heck of a lot of time to spend on today. But certainly we advocate to our clients that they need to institute a process and follow through on that process to ensure that AI is being implemented in a responsible and risk-averse manner.

I know, Joe, you and I have had a lot of discussions about this, and I know you have some interesting perspectives on the rollout of AI in the workplace.

Joe Lazzarotti:

Well, I think the rollout is really critical. I would even say that, at the outset, do you even want to implement it, right? And what exactly is it? And who’s investigating it and who’s assessing the data that goes into it, the data that’s going to come out, how do we validate that? How do we test it? How do we train the AI? What vendors are we using?

I mean, one of the things we’re seeing, I know you are as well, a lot of what clients are thinking about, they’re getting these great offers or these pitches from vendors that are saying, "Hey, I can offer you this. I can help you with that." And everybody’s selling AI, and I think a lot of our clients are getting bombarded and trying to evaluate this. So what does that process, that procurement process look like to figure out, is this what we need? Is this what it says it is? Who’s responsible in the long run? What should that agreement look like? What kind of language do we want in the agreement that allocates responsibility fairly? What data is collected in the process? Who manages that data?

So I think there’s all these questions that you have to think about, that clients have to think about, in terms of what’s the process for looking at a solution, evaluating it, and then deciding, okay, we’re going to go with it. And then, I think, yeah, you’re right, we maybe start talking about, well, as part of that, what laws apply? If you can get through all of that and think it’s still going to work, then yeah, I think the rollout has to be done in a smart way. And I can tell you, just thinking about what we went through with a lot of, not necessarily AI, but just technology in general, leaving that to one department can really be dangerous. I think there’s importance to having a team of people in the organization with different disciplines who can help evaluate it from different perspectives.

Eric Felsberg:

Yeah, no, I completely agree. And you spoke a good amount about data there, and that’s something that’s near and dear to my heart. I mean, one of the other roles I have in the firm is in the area of data analytics. And in this space, data is critical. It is the backbone upon which all of these AI tools are built. So we need to ask about the data we’re thinking of using as part of our AI initiative: number one, do we have the data, and are those data accurate? And that’s important for two reasons. First, if you’re going to use those data to train AI models, the data have to be accurate. And second, as we’ll see in a moment when we talk a little bit about some of the emerging laws, data is also important for compliance. You mentioned the New York City law, which we’ll talk about in a minute or so. There’s a requirement there to conduct a bias audit and to make the results of that bias audit public.

So you have to be particularly comfortable that your data are accurate. And I can tell you from experience, data’s tricky, and it very well could be inaccurate despite your best efforts to ensure its accuracy. So it is certainly another issue to talk about. But to your point, talking about some of the laws, the law, as I mentioned at the top of our discussion, is emerging and literally being shaped as we speak. And that makes it tough for employers to stay up-to-date because the laws are new. There’s very little precedent that we can look to in determining how these laws are going to be enforced. And so that presents another challenge for employers. And we’re seeing this activity on the federal level. The EEOC has been particularly active in the area of AI, as Joe and I know. But we’re also seeing, I think, an even more rapid emergence at the state and local levels as it pertains to the use of AI.

And you mentioned New York City. They have a law that deals with the use of AI as it relates to employee selection procedures. And that’s top of mind as this law is out there right now. A lot of our clients in the New York area have been thinking about it. And that law requires bias audits and certain notices to be posted. But that’s really just the beginning. So beyond the EEOC, beyond New York City, there’s a host of other states. And Joe, you and I were speaking earlier, we could probably stay on the phone for a couple of hours talking about some of the emerging laws that are coming out. But states like Illinois and California are right there with New York in terms of trying to put some regulations around the use of AI.

But Joe, I mean, in your space, and when you’re dealing with clients on AI and related privacy issues, what are you seeing from a legal perspective?

Joe Lazzarotti:

Well, I think you’re exactly right. And I think you look at... There’s a couple of ways to approach this. You look at the New York City ordinance, and what you see is, a lot of times, clients are just saying, well, we think we’re subject to it. How do we comply with New York? And I think the first question is, you may not even be subject to that New York City law, as we’ve talked about. You really have to go through that analysis to see, do these laws apply?

And then, you’re right, there’s a whole ton of states and cities, and activity at the federal level, talking about regulation of AI per se. But AI doesn’t exist in a vacuum. There are other laws, including privacy laws and rules around the monitoring of employees, and those also have to be evaluated in terms of rolling out this technology. So you think about the CCPA, whose enforcement period begins on July 1st and which deals with the collection of personal data. It may apply to the data that’s collected from monitoring someone’s activity using AI or ChatGPT on a company’s systems. So it’s not just the specific New York City law that has to be taken into account. So I think what you’re getting at is exactly right, and we’re seeing the same thing in our group.

Eric Felsberg:

Yeah. And I think that’s right, I mean, with the emergence of these generative AI tools that we’ve seen impacting the workplace. For an employer that is dealing with these issues, I think the first step is to identify the stakeholders that are impacted by some of this technology, and then think about creating a policy. I know, Joe, you and I have worked on AI policies, which is a relatively new thing for us. I mean, we’ve been practicing employment law for many, many years, but the task of creating an AI policy is a new one. And there are certain features of that policy that we would want employers to have in place.

And Joe, I’ll ask you in a minute just to talk about that, but one of the things that is interesting with these policies is because this area is evolving so rapidly, you continually have to go back to revisit these policies to ensure that they’re contemplating the most recent and up-to-date technology uses. And that’s challenging for employers because, again, much like we’ve seen with these legislatures, they’re always in this state of chasing down this issue only because it is evolving. So while it’s exciting to be a part of it, it also could be fairly daunting to have to deal with some of the collateral issues that come from usage of these tools.

But Joe, when we think about creating a policy, I know, like I said, you and I have spent some time working on these types of policies, what are some of the features that we think should go into a policy? And understanding that we probably can’t cover all of them right now, but just to at least get our listeners started.

Joe Lazzarotti:

I think as a threshold question, maybe you need two policies: an internal policy that governs the technology folks, the HR folks, the legal folks. How do we manage it internally, and how do we use certain features of the technology? How do we configure it? That’s done internally. And then, when you roll it out to your workforce, some of the things that I see and we think make some sense to consider: do we want some kind of approval? Do we want a process where we see what use case a group of employees wants to apply the technology to? Do we want to do that? Maybe, maybe not. Do we want to monitor that activity? Do we want that activity going on outside of our environment, with remote workers doing it on their own home computers, or do we want it being done at the company? What kind of data do we want to allow employees to use as input into a chat with ChatGPT, for example?

And then we saw that story of the unfortunate situation with a couple of lawyers who introduced into court an argument containing hallucinations from ChatGPT and, it looks like, wound up getting sanctioned. How do we deal with accuracy? How do we help employees understand what they should do, and how they should go about making sure that the output is accurate?

And then the last thing I would say is thinking about deep fakes. The technology has made it so easy to create a deep fake, which is, say, a video that’s fabricated to look real but isn’t. And if those wind up getting stored on the company’s systems for some reason, how does that affect discovery in a litigation later on? How do we validate that? So maybe regulating, or providing some policies around, whether we want employees to be making deep fakes using our systems. So those are just some things we’ve seen coming up in crafting policies around this.

Eric Felsberg:

Terrific. I was going to say, that’s all that we’ve seen, right? There’s a number of issues we’ve mentioned during our discussion, but certainly more to come on this as we partner with our clients to respond to these emerging technologies. So with that, I think we’ll break here, but certainly, we expect this discussion to be continued as we move forward. So thank you all.

Joe Lazzarotti:

Yeah, thanks so much.

Alitia Faccone:

Thank you for joining us on We Get Work™. Please tune into our next program where we will continue to tell you not only what’s legal, but what is effective. We Get Work™ is available to stream and subscribe on Apple Podcasts, Google Podcasts, Libsyn, Pandora, SoundCloud, Spotify, Stitcher, and YouTube. For more information on today’s topic, our presenters, and other Jackson Lewis resources, visit As a reminder, this material is provided for informational purposes only. It is not intended to constitute legal advice, nor does it create a client lawyer relationship between Jackson Lewis and any recipient.

© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome. 

Focused on labor and employment law since 1958, Jackson Lewis P.C.'s 950+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged, stable and diverse, and share our clients' goals to emphasize inclusivity and respect for the contribution of every employee. For more information, visit