Details
Evolving technology is changing the face of workplace law at a pace previously not contemplated, including in the area of leaves and accommodation.
Transcript
INTRO
Evolving technology is changing the face of workplace law at a pace previously not contemplated, including in the area of leaves and accommodation.
On this episode of We get AI for work™, we sit down with Bryon Bass, CEO of the Disability Management Employer Coalition (DMEC). Bryon discusses the risks and benefits employers should consider when adopting AI technologies in decision-making.
Our co-host is Joe Lazzarotti, principal in the Tampa office, and co-leader of the firm’s Privacy, Data and Cybersecurity Group.
Joe and Bryon, the question on everyone’s mind today is, what are the ways organizations can leverage AI while remaining compliant with potentially competing federal and state policies, and how does that impact my business?
CONTENT
Joe Lazzarotti
Principal, Tampa
I have the pleasure of being here today on our We get AI for work™ podcast to introduce Bryon Bass, who is the CEO of the Disability Management Employer Coalition. That's a great organization that's dedicated to absence and disability professionals.
We're really thrilled to have Bryon here. We do a lot of work with his organization and had the pleasure of being with his team and a lot of other absence and disability professionals about a month and a half ago. They have their annual conference coming up in D.C. in August. If you can make it, be sure to check it out.
Bryon, thanks again for being here.
Bryon Bass
CEO, Disability Management Employer Coalition
Thanks for having me, Joe. It's great seeing you again.
Lazzarotti
You really sit in an interesting spot. You work with a lot of organizations on disability and absence issues, and a lot of companies are thinking about how AI affects what they do in their day-to-day. What a lot of people are wondering about, just as an initial matter, is what you and your members are thinking in terms of how the new administration has come in and some of the changes they're seeing, particularly at the EEOC, where some guidance was issued and then pulled back. Give us your thoughts on that.
Bass
Not just with respect to what's happening from an AI perspective, but we are seeing lots of things being pushed out and then pulled back and reconsidered. At the end of the day, when we take a step back and look at the information that's been coming out from the EEOC and the DOL, they are still going to enforce the law the way the law is written. We still need to be concerned with discrimination and with ensuring that whatever practices we're using, whether AI or any other type of technology or tool, are not inadvertently causing a disparate impact and leading to someone being discriminated against. That's really been one of our key concerns as we think about the proliferation of AI overall and what it could do for us in the human resources space.
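The episode doesn't walk through any specific test, but the disparate-impact concern Bryon raises is often screened with the EEOC's "four-fifths" rule: if one group's favorable-outcome rate falls below 80% of the highest group's rate, the practice warrants closer review. A minimal sketch, with invented counts purely for illustration:

```python
# Illustrative sketch of the EEOC "four-fifths" (80%) rule, a common
# first-pass screen for disparate impact. All counts below are invented.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group that received a favorable outcome."""
    return selected / total

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's rate against the highest group's rate."""
    rates = {name: selection_rate(sel, tot) for name, (sel, tot) in groups.items()}
    benchmark = max(rates.values())
    return {name: rate / benchmark for name, rate in rates.items()}

# Hypothetical leave-approval counts: (approved, requested)
groups = {"group_a": (90, 100), "group_b": (60, 100)}

for name, ratio in adverse_impact_ratios(groups).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{name}: impact ratio {ratio:.2f} -> {flag}")
```

A screen like this is only a starting point; it flags a disparity but says nothing about why it exists, which is where the human review discussed later in the episode comes in.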
In particular, from a DMEC perspective, we're focused on leave, disability, accommodation and absence management. The DOL still has guidance out there regarding AI and some of the things employers need to be concerned about as it relates to decision-making. They have specifically called out that you shouldn't be using your AI tools to make eligibility determinations or final determinations as to whether or not a condition qualifies under the FMLA as a serious health condition. If you do use tools in that way, you need to ensure that they are doing the right things and aren't making decisions that could inadvertently diminish or take away the rights an individual has under the FMLA and the other laws the DOL is responsible for enforcing. Even though we're seeing this ebb and flow of guidance being published and then rescinded, the basic fundamentals remain: first and foremost, ensuring that individuals are not discriminated against, and secondarily, that employees are still afforded the rights they may have under federal law.
The other thing we need to be concerned with is that this isn't just a federal matter. We have so many states that offer leave rights as well. The federal government may have pulled back some of its AI guidance, but we are starting to see a huge number of AI-related laws being passed across the United States. Some of them specifically state that you cannot use employee data to make decisions, and many of them state that if you do, you need the employee's consent. That adds complexity to how you would incorporate and use AI in HR-related functions, especially those related to leave, disability and absence management.
Lazzarotti
I'll put you on the spot a little bit here. Any predictions on the state moratorium that was proposed in the “One Big Beautiful Bill Act” (H.R. 1)? Any thoughts on that?
Bass
The ten-year moratorium is what's in the “One Big Beautiful Bill Act” (H.R. 1). There is a lot of talk out there that it may not be a legal provision, in terms of our federalism and the powers that states have. It might be in the bill, but that doesn't necessarily mean it can actually be enforced. I suspect, as we've been seeing with many other things, that some of the states, California, where I live, being one of them, and New York, where you are, being another, would push back, with potential lawsuits if it were enforced, to test whether there is a legal basis for imposing that moratorium. That's my prediction.
Lazzarotti
Interesting. When we were preparing for this, you mentioned that DMEC conducted a little survey about some of these issues. I'm just curious how that went and if you can share any results, that would be really interesting.
Bass
I'd be happy to. We started what we're calling an AI think tank, if you will, late last year, where we've brought together a significant number of our employer members. We also have some brokers, consultants, tech professionals and some service providers in the disability absence management space. As part of that, we wanted to understand whether the professionals in this space can tell us what their basic understanding is of AI. How do you think AI can be incorporated into the work that we do? What are some benefits and some challenges?
What stood out in that survey overall: 130 professionals responded, mostly employers first and foremost, but only 60% of them had any basic understanding of AI. That's not uncommon. Across the board, the general populace probably doesn't have a deep understanding of AI. In many respects, even the definition of what AI is and isn't differs. A lot of folks are seeing claims from companies out there like, AI is helping with this and AI is helping with that. Frankly, it's often not really AI; it's just another form of automation, and it's rules-based. However, everybody's using the AI term because that's the new shiny object that's out there.
Part of what we're working on as a think tank is putting some finer granularity around what we mean when we talk about AI. In that respect, we're talking about things like large language models and generative AI, where decisions are being made based on data and other information that's out there and available. We're really focused on that so that we can define what we're concerned about, or should be concerned about, with respect to AI in our particular space.
We also asked about the formalization of policies. We are starting to see more policies becoming part of the workplace. We found about 30% had formal policies in place for using AI in employee benefits decisions. We know there's a huge gap there: there may well be AI use going on without any policy in place. We see those types of things in leave all the time; certain practices are happening, but there may not be a policy that supports them. So it wasn't a surprise to see that in the AI survey results.
On the flip side of all of that, there is a ton of optimism. The top benefit cited was efficiency: 85% of the respondents really felt that AI can help streamline some of their processes. They also flagged some major hurdles, like systems integration and compliance ambiguity.
Then, of course, I talked a little bit about this earlier, the lack of transparency about how AI makes decisions. What are the algorithms that are being used, and how are those algorithms being developed and defined? Then, how are those that are utilizing the AI actually checking to make sure that it's doing the right things? What's encouraging for us is that there is an appetite for learning and people want case studies from us. They want ethical guidelines and some practical tools. That’s exactly what this group is building into our white paper and our upcoming AI think tank sessions.
You already mentioned our 2025 DMEC annual conference, where we have a session in which we will go into a lot more detail on the white paper itself and our findings. We'll start to introduce some of the tools our employer members are looking for.
Another thing we're seeing and being asked to provide guidance around is RFPs and what types of questions you should ask your vendors or software providers about their use of AI. We're hopeful we'll be able to help, as we have in many other areas through our more than 30-year history, by providing employers with the tools and resources they need to stay ahead of yet another emerging area that we need to keep our eyes on and get our arms around as a community.
Lazzarotti
That makes a lot of sense. One of the things I'm wondering about, particularly for your members, is that they do this work day-to-day more than most people in their organizations, and they're dealing with sensitive data about their colleagues. How are they viewing AI? For example, I'm seeing a lot more uptake of AI transcription services, where note-taking, collecting, analyzing and summarizing notes from meetings with employees in different contexts are being utilized. These tools are very good at that, although there have been some issues. I'm wondering, did anybody identify those? Was that something you focused on at all in the survey?
Bass
We didn't really focus on that in the survey. We kept it broader so that we had an understanding of where people were in their thinking. That could help direct us to some of the initial issues and focus on what we know. I'll just say this before I jump into answering your specific question: this is only the start of the work we have to do on AI. It's going to continue to emerge. We can't boil the ocean, as I like to say. We need to start small and in ways that provide our members with the most crucial pieces of the framework they need to move forward, and we're going to continue this work beyond that.
Going back to your question, we have subcommittees that are part of the team, and topics like that frequently come up. In this one in particular, there has been a lot of conversation around how your internal policies need to address how you use transcription and when it should and shouldn't be used.

It's interesting because I'm starting to notice some advances in the use of AI from a transcription perspective. I'll give you an example. Copilot is part of Microsoft's platform, and it's something we're testing internally at DMEC to see how it might help us with content delivery and things of that nature. I bring it up because, in meetings with Copilot, you can set it up so that it doesn't take a full transcription of the conversation, but at the end of the meeting you can ask it to summarize what was discussed. That allows you to take that information out, similar to what you would do if you were typing or transcribing it on your own, and ensure that any sensitive, personal or identifiable information that may have been discussed during the meeting is not being disclosed. We're finding in our testing that it's doing a fairly good job of not including that type of sensitive information, and it gives you a broader kind of record if you need meeting minutes or a summary of what was discussed. I bring that up because I believe many of the tools are starting to recognize that this is a potential issue, and they're introducing features that can be used not to eliminate but to reduce the overall risk of capturing something that probably should not have been captured.
Secondly, the other thing you have to be concerned with, and again, I'm learning this on my own by using the tools internally here at DMEC, is that you can restrict the folders or libraries of information it goes into to source information and content. We have separate libraries set up for various functions within our group. We have a very private area with our HR, payroll and accounting information, and we've specifically excluded that library from being accessed by Copilot to protect that sensitive information.
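Copilot's actual controls are admin settings rather than code, but the two safeguards Bryon describes, scoping which libraries a tool may read and scrubbing sensitive identifiers from output before it's kept, follow a general pattern that can be sketched. Everything below (function names, libraries, regex patterns) is hypothetical, not any vendor's real API:

```python
import re

# Hypothetical sketch of two safeguards: an allow-list of content libraries
# the assistant may read, and a redaction pass over any generated summary
# before it is stored. Names and patterns are illustrative only.

ALLOWED_LIBRARIES = {"marketing", "education", "events"}  # "hr"/"payroll" excluded

def permitted_sources(documents: list[dict]) -> list[dict]:
    """Drop documents from excluded libraries before they reach the model."""
    return [d for d in documents if d["library"] in ALLOWED_LIBRARIES]

# Crude stand-ins for sensitive identifiers; production redaction would use
# a dedicated PII-detection service rather than simple regexes.
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def redact(summary: str) -> str:
    """Replace sensitive matches in a summary before it is saved."""
    for pattern in PATTERNS:
        summary = pattern.sub("[REDACTED]", summary)
    return summary

print(redact("Follow up with jane.doe@example.com re: claim 123-45-6789."))
```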
All that said, it's still evolving. We're going to continue to see these kinds of features come out, and vendors will hopefully keep putting things in place that limit the risk of AI getting at inappropriate information or information it shouldn't have.
Lazzarotti
It's definitely a concern. Switching gears a little bit, though it may raise some of the same issues depending on how these tools are configured: ever since COVID, we have worked with a lot of clients who want to find better ways to deal with remote workers in particular. These performance management and monitoring platforms have evolved and are becoming even more popular. I'm wondering, have you looked at those and thought about the disability issues, in particular, that might arise? If a platform indicates an employee is not performing well, what are the reasons for that, and is that an accurate or complete reflection of their performance? How are employers looking at that and dealing with that issue? I don't know if you've seen any issues arising out of that.
Bass
I have some concerns about it, and we're also talking about it as part of our think tank. It's important for listeners and employers in general to take heed of this warning. In this particular space, we're talking about providing accommodations for individuals who have disabilities. When we talk about individuals with disabilities and the types of accommodations they might need, we have individuals who have visual impairments, people who may be unable to type as well or as fast as someone else, or who may have difficulty with cognition. There are various tools available that help employees with these types of challenges. My concern and my caution are that these tracking tools don't necessarily take those things into consideration. They don't necessarily know that Joe is working at home and has a visual impairment, and that, as a result, there are things that need to be done on the monitor, and it takes a little longer to read things and go through the editing process. There are a lot of things that come with a visual impairment. The same goes for any impairment that affects your ability to type, use the mouse or otherwise interface with a computer or other technology.
I haven't looked deeply into these types of tracking mechanisms, but I would hope there is something there where you can indicate that an individual has, say, only 25 or 30% of the capacity of the average person. I don't believe that's there. Even if it is, I would seriously question, again, what's the algorithm? How is the algorithm being used? How is it tested? As we all know in the reasonable accommodation space, you're making an individualized determination of the right accommodation for each person based on their own specific circumstances. It's difficult to take everything an individual might be challenged with, relative to the general population, into consideration to come up with the best way to track and monitor them.
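Bryon's point, that a population-wide benchmark misreads an accommodated employee, can be made concrete with a small sketch. The setup and all figures are hypothetical; it simply shows how the same output looks alarming against a global norm but fine against an individually agreed baseline:

```python
# Hypothetical illustration: scoring output against a single population
# benchmark vs. each employee's own agreed baseline. All figures invented.

POPULATION_BASELINE = 100.0  # e.g., keystroke-derived "output units" per day

employees = {
    # name: (observed_output, individually_agreed_baseline)
    "employee_a": (95.0, 100.0),  # no accommodation in place
    "employee_b": (30.0, 30.0),   # accommodated; 30% capacity agreed upon
}

for name, (observed, baseline) in employees.items():
    vs_population = observed / POPULATION_BASELINE
    vs_individual = observed / baseline
    print(f"{name}: {vs_population:.0%} of population norm, "
          f"{vs_individual:.0%} of own baseline")

# employee_b shows 30% against the population norm (a false red flag) but
# 100% against their own baseline, which is the individualized view the
# accommodation process calls for.
```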
That's not to say it's any easier if you're doing it from a humanistic perspective. That's one of the hardest parts of our job in this space: the overall accommodation assessment and determining what might be appropriate and what might not be. In many instances, we try things out on a trial basis to see if they're going to work. It's difficult to incorporate those kinds of judgments into tools that are monitoring keystrokes and other behaviors of remote workers. That's going to be one area where we need to be very cautious.
Lazzarotti
Exactly. I have the same thoughts. There's a lot to be considered with those tools. Along those lines, one of the things Eric Felsberg, who co-hosts this with me and whom we missed today because he had another engagement, and I try to do at the end of each podcast, particularly those with a guest, is think about governance. We think governance is a really strong piece of this. We talked about this a little bit, but can you share three things employers should be thinking about from a governance perspective as they decide to implement an AI tool in their organization, as it relates to absence and disability? I know you talked about questions for vendors, and maybe that's one of them.
Bass
One of the most important aspects is that we need to ensure that some form of human oversight is incorporated into the decision-making process. There's an interesting book out now called AI Snake Oil, written by two researchers at Princeton, I believe, with very interesting information. If you read that book and look back at some of the things that have happened historically, a lot of AI is built around predictive modeling. Predictive modeling has a bad name in its own right because prediction is based on large sets of incoming data. Unfortunately, predictive modeling has discriminated in many instances because it picks up on data patterns that you generally see within lower socioeconomic classes or older populations and things of that nature. One of the examples they use is an insurance company that was making decisions about rates and what the rate increases would be for its customers. What they found with some of these predictive models was disparate treatment of older insured people, from a car insurance perspective, and also of those in the Black community. They found that the prediction was wrong; it was inappropriately causing higher rates for those two groups of people. The cautionary tale is that there was heavy reliance on the prediction, and there wasn't much human oversight to understand why the model came to that prediction. If they had understood that at the very beginning, they probably would have made a different decision and wouldn't have increased the rates.
I bring that up because human oversight is a really critical part of the work that we do in this space, because we're making decisions on whether someone is going to receive wage replacement benefits, job protections, accommodations in the workplace, et cetera. Without that human oversight to find flaws in any decision-making that's not being made by a human, there's such a significant risk of doing the wrong thing.
One of the things we have been talking about in our group is that perhaps we need to come up with an agreement across the board that a denial decision, whether a denial of any type of benefit, job protection or things of that nature, always requires a human oversight component to ensure that nothing was missed in the final analysis. Those are some of the guardrails, if you will, that we're talking about putting in place to help the industry and ensure that we're not inadvertently discriminating or making the wrong decisions for individuals.
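The episode doesn't prescribe an implementation, but the guardrail Bryon describes, that no denial becomes final without a person reviewing it, maps naturally onto a simple routing rule. A minimal sketch with hypothetical names:

```python
from dataclasses import dataclass

# Hypothetical sketch of the guardrail discussed above: an automated
# recommendation may approve, but any denial is routed to a human reviewer
# before it becomes final. All names are illustrative.

@dataclass
class Claim:
    claim_id: str
    model_recommendation: str  # "approve" or "deny"

def route(claim: Claim) -> str:
    """Approvals can proceed (subject to audit); denials never finalize alone."""
    if claim.model_recommendation == "approve":
        return "approved (automated, subject to audit)"
    return "queued for human review"

for claim in [Claim("C-1", "approve"), Claim("C-2", "deny")]:
    print(claim.claim_id, "->", route(claim))
```

The asymmetry is deliberate: the costliest error in this space is wrongly taking away a benefit or protection, so that path gets the mandatory human checkpoint.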
I talked a little bit about that RFP toolkit with scoring rubrics that can help employers vet AI vendors and ask the right questions about logic transparency, auditability and compliance testing. We're also hearing that members want case study libraries. For example, let's talk about where AI has failed and where it's helped. We know there are going to be situations where it's being utilized, or we try to use it, and it may not work as well as we would like. There needs to be an opportunity for us to share that collectively, so we're aware of those failures as they arise. Also, where is it helping? Summarizing information, such as incoming medical information and records, might be an area where it does a fairly good job and where you might see increased adoption.

Those are a couple of pieces of what I call the framework we're putting together. We're also talking about what policies should look like, how you should train your teams on this, and how you should use pilot programs to test things in lower-risk areas before scaling them. One of the things I've been saying in these meetings is, hey, let's not race into adopting this technology because the tech is shiny, right? We need to do it because it works, and it works in a responsible way.
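DMEC's actual toolkit isn't published in the episode, but a weighted scoring rubric of the kind Bryon mentions might look like the sketch below. Criteria, weights and scores are invented for illustration; they simply echo the themes he names (transparency, auditability, compliance testing, human oversight):

```python
# Illustrative vendor-scoring rubric for AI-related RFP questions.
# Criteria, weights, and ratings are invented; DMEC's toolkit may differ.

WEIGHTS = {
    "logic_transparency": 0.30,  # can the vendor explain how decisions are made?
    "auditability":       0.25,  # logs sufficient to reconstruct any decision
    "compliance_testing": 0.25,  # evidence of bias/disparate-impact testing
    "human_oversight":    0.20,  # denial decisions routed to a person
}

def score_vendor(ratings: dict[str, int]) -> float:
    """Weighted average of 1-5 ratings across the rubric."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

vendor = {"logic_transparency": 4, "auditability": 3,
          "compliance_testing": 5, "human_oversight": 4}
print(f"Weighted score: {score_vendor(vendor):.2f} / 5")
```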
Lazzarotti
Those are a lot of really great ideas. We really appreciate you joining us, Bryon. This is going to be a really good podcast for a lot of folks in the absence and disability space. Again, if anybody is listening, DMEC is a great organization; you should definitely check out that annual conference. I know they provide a lot of really helpful resources. Bryon, thank you so much for joining us. Really appreciate it.
Thank you, everyone, for joining us again. If you have any questions, you can send them to us, or you can pass them along to Bryon and get his thoughts. Our email address is AI@JacksonLewis.com.
OUTRO
Thank you for joining us on We get work®. Please tune into our next program where we will continue to tell you not only what’s legal, but what is effective. We get work® is available to stream and subscribe to on Apple Podcasts, Spotify and YouTube. For more information on today’s topic, our presenters and other Jackson Lewis resources, visit jacksonlewis.com.
© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.
Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,000+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.