Podcast

We Get Privacy for Work — Episode 10: Employee Monitoring Tools: Too Good to Be True?

Details

October 9, 2025

Lured by the promise of better productivity and compliance with company policies, employee monitoring tools are gaining a lot of traction among employers.  

On this episode of We Get Privacy for Work, we discuss the important privacy and legal implications that organizations must consider before implementing employee monitoring tools.

Transcript

INTRO

Lured by the promise of better productivity and compliance with company policies, employee monitoring tools are gaining a lot of traction among employers.  

On this episode of We Get Privacy for Work, we discuss the important privacy and legal implications that organizations must consider before implementing employee monitoring tools.

Today's hosts are Damon Silver and Joe Lazzarotti, principals, respectively, in the New York City and Tampa offices of Jackson Lewis, and co-leaders of the Privacy, AI and Cybersecurity Group.  

Damon and Joe, the question on everyone’s mind today is: How can I take advantage of the new employee monitoring tools while maintaining reasonable privacy safeguards, and how does that impact my organization?     

CONTENT

Joe Lazzarotti
Principal, Tampa

Welcome to the We Get Privacy for Work podcast. I'm Joe Lazzarotti, and I'm joined by my co-host, Damon Silver. Damon and I co-lead the Privacy, AI and Cybersecurity Group at Jackson Lewis. In that role, we receive a variety of questions every day from our clients, all of which boil down to the core question of, how do we handle our data safely? In other words, how do we leverage all the great things that data can do for our organizations without running headfirst into a wall of legal risk? How do we manage that risk without unnecessarily hindering our business operations?

Damon Silver
Principal, New York City

On each episode of the podcast, Joe and I talk through a common question that we're getting from our clients. We talk it through in the same way that we would with our clients, meaning with a focus on the practical. What are the legal risks? What options are available to manage those risks, and what should we be mindful of from an execution perspective?

Joe, our question today is, can we video record our employees, review their emails, and track where they travel? In other words, can we take advantage of all these various monitoring tools that have come to market that promise things like better productivity and better compliance with company policies and legal obligations? 

Joe, I was thinking that a place we could start, since there are so many of these tools coming to market, is with a little overview and discussion of some of the tools that we're being asked about most frequently.

Lazzarotti

It's a great and loaded question. You could even roll back further; we used to get a lot of basic questions about simple cameras, audio recording and reviewing emails. What you're talking about are some of these platforms that have come out recently that can capture all of this and pull it together into a single dashboard. In some cases, they may even leverage AI and other technologies to synthesize and summarize this data for management, so they can see exactly how their employees are performing, what applications they are using and when they are using them. Based on the organization and the employee's job, you can get a sense, or at least an estimation, of whether that person is performing in an optimal way or is distracted and doing things they shouldn't be doing. That's what we're talking about, right? There are more of these platforms out there now than there have ever been. It gets back to the central question of what expectation of privacy an employee has.

Then, depending on the particular platform, you get into certain types of laws that get triggered. Just as one example, the video being captured could include a biometric. That is just one issue. You could be triggering laws in Illinois, Colorado or other states that have specific rules and not even realize it because of how that particular platform is configured. It raises an issue you didn't even think you might run into. You might be focused solely on productivity and not realize, hey, we now have this biometric issue to contend with, one that may not even be necessary for the purpose for which you rolled out the tool.

Silver

Those are all great points. I do think an important starting point with these tools, and this is true of AI tools more broadly, is to really understand how the tool works and how people, or you as a company, are going to use it. There are some productivity monitoring tools out there that get very granular. They might log every single keystroke someone makes and every search they run. They might require people to be on camera for some period of time while the software analyzes, to your point, whether people are distracted, along with other indicators of productivity or the lack thereof. Understanding what the tool can do is a very important starting point.

Related to that is understanding what controls the tool has in place and what your options are in terms of features to configure. Some of the clients I've worked with have said they feel like it's all or nothing. Someone in the in-house counsel's or compliance office may say, my God, this tool is recording everyone's keystrokes, but it may not be the case that that feature has to be used. There may be other aspects of the tool that are lower risk and that might provide similar value, and you can disable some of the more concerning features. It is beneficial to really understand what the technology is, what it can do, what options you have in terms of how you're going to configure it, and also what options you have in terms of how you're going to use it.

Joe, maybe a place to go from here, in terms of how it's going to be used, is a discussion of company systems and devices versus personal systems and devices. One of the issues that can come up is when employees are using these types of tools while logging in from home, or using a personal laptop or phone to do their work as part of a BYOD program. You're running the monitoring software on those personal devices, which can definitely impact the risk profile of the monitoring because of all the non-business information that might inadvertently get caught up in it. Maybe you could speak a little bit about that issue of personal versus company devices and systems.

Lazzarotti

It obviously gets a lot more sensitive when you're running some type of monitoring application on a person's personal device. BYOD has been around for a while. Typically, there's a mobile device management or similar application where the company's activity (email, calendar and whatnot) is contained in an app on the phone. Oftentimes, companies are careful not to go beyond that; they want to make sure that what they're able to see is only the environment they've directed employees to set up on their phones. There are real issues with going beyond that, including the unintended consequence of getting information they might not want to have.

One of the things you see is an employee having a conversation with a family member, talking about a medical condition. That could technically be genetic information under GINA, which employers generally aren't permitted to collect or even request. Certainly, that risk is heightened when you have a personal device.

Then, of course, you may even want to think about the more preliminary question, which is, do you want to have employees using personal devices, and what controls are in place around those devices? Obviously, if you can't monitor, track, and control it, and the employee leaves the company, what happens to that data? There's almost a threshold question about whether and to what extent to use a personal device. 

Then, if that's the case, how do you want to direct employees to manage and use their personal devices? You say, we're going to permit you to use the personal device, and here's how we want it to happen. After that, the ongoing monitoring and the policies that put controls around the data have to be managed, all the while thinking about whether the data security controls are appropriate. If you're talking about a population that's represented by a collective bargaining unit, do you have to get the union's approval, or do you have to negotiate something? These are all things that certainly have to be considered.

Again, to your point, Damon, whenever you're dealing with a person's personal device, it becomes a little bit trickier. It doesn't mean you can't do it. It just means that you have to be a little bit more sensitive to it.

Silver

You hit the nail on the head, Joe, in the sense that this issue of collecting extraneous information is one of the big risk factors with using these monitoring tools, especially the ones that are not targeted toward certain activities or certain systems but are very broad-based. Yeah, people do talk about an upcoming medical procedure or a family member's medical condition; they talk about their religious beliefs, or they may mention something that tells you about their sexual orientation, their legal off-duty activities or their political views. All kinds of things are going to come up in the course of people chatting before a Zoom meeting starts or on Slack, just conversations between colleagues where they think they're speaking offline, so to speak, and mention things. That could all be hoovered up by a monitoring tool.

Then, you have to think about what happens with that data. Who is going to have access to it? Is it being appropriately secured? How long are you keeping it? Do you have some type of control in place to keep it away from employment decision makers? All of a sudden, we may be put on notice of someone's sexual orientation or their religious beliefs, or their potential need for an accommodation or leave, when otherwise we would have had a very strong argument as an employer that we had no idea. We weren't on notice of any of those conditions or memberships in protected classes. There is a lot to think about in terms of how you structure it. 

Then, perhaps we can segue into this: at the front end, as you're thinking about rolling these tools out, or if you've already done it and you're trying to figure out how to better manage risk going forward, one big issue is going to be what type of transparency you have had around your use of these tools. Have you provided people with notice? Have you collected their consent? This is something that, in some states, may already be addressed through an electronic monitoring notice, but there are a lot of states that don't expressly require that you get employees' consent. For a lot of businesses, maybe there's very limited language somewhere in their 80-page handbook that mentions something that could cover this. What are your thoughts, Joe? What are your suggestions for the clients you work with on how, at the front end, as you're structuring this, you can best protect yourself from a notice and consent perspective?

Lazzarotti 

The first step, to your point, and it's a great question, is: Do you even have to get consent? What's the requirement? Do you need some type of affirmative consent, something in writing like a written acknowledgement of monitoring, or do you have to actually get a document that says, I consent to X? There are some states, like New York, where you have to get a written acknowledgement. It's not necessarily consent, but a written acknowledgement that the company is going to be monitoring certain activities the employee engages in on its information systems. Interestingly, in California, if you're subject to the CCPA, the notice or the privacy policy has to disclose the types of data collected.

When you start thinking about these monitoring tools, they collect a lot of information that you may not have thought about in the normal course of the employment relationship. Potentially, even in the customer relationship, because employees may be monitored while they're interacting with customers, and the system would be collecting that information all the same. It's interesting, Damon. I'm curious what you think about this.

One of the things I do see happening, and this ties into the consent issue and into the overall governance of these products and tools that companies are using, is that it often doesn't start out with the full range of monitoring that a vendor might present to the company. They may start out with just monitoring email. Then, the company goes through a vetting process; hopefully, they go through a vetting process to understand the risks and take appropriate due diligence steps around the tool. Eight or twelve months later, the vendor comes back, maybe in direct contact with the IT department, and says, we added this option, we enhanced this option, and there are some great benefits. We've had a lot of good feedback from customers, and no, there won't be any more cost to the company. Maybe there is, maybe there isn't. Would you like us to talk to you about it? Maybe the IT department says, yeah, that looks great, it would really help the company, and we've been looking for this solution. Let's start next month on August 1st.

What happens then? Talk about that. Have you dealt with clients who wind up doing that? At that point, do they go through the same process to vet it?

Silver

Yeah, absolutely. I'll take it one step further. I was talking to a healthcare client pretty recently, and I was working with their director of IT and head of compliance. They don't want to have their people in the field using their actual personal cell phones to make calls to patients for employee protection reasons; they don't want those numbers out there. They use one of these services that provides alternative numbers, obscuring real phone numbers. Without the knowledge of IT, compliance, or anyone who's in a position to vet this, the vendor started making available to all the users of these numbers an option to record and transcribe these calls. Employees out in the field assumed that, since they were seeing this option being presented to them, it must be company-approved. They were just recording and transcribing all their interactions with patients. It had been going on for a couple of months before the company even realized it was happening. It may even be the case that the vendor does not formally come back and say, we've made this feature available. It just may be there for end users to just start using as part of the program. Probably what happened was the company just did a normal upgrade that had been pushed out by the vendor of the service, and that caused this option to be presented. It's definitely something to be wary of. 

In thinking this through with clients, given that, one, you just can't know everything that's going on, and two, you don't want to have to keep revisiting the same issue over and over again (to some degree you might have to, but you really want to cut down on that), the goal at the front end is to take a holistic, longer-term view. How might this tool be used? What features is the vendor talking about rolling out? Oftentimes, these vendors will preview what they're working on and what they think will be available. They make it a selling point, but of course, it also carries downstream privacy and other types of risks. Try to do that thought exercise beforehand and develop a framework that is a little more adaptable than just addressing the very narrow problem in front of you at that point in time. That way, you are better protected in case the tool starts being used in ways you didn't anticipate, or the tool itself changes because new features have been made available. It's not perfect. There are going to be things you have to revisit from time to time, particularly if you're using technology that's evolving fast. But by taking that step back and looking at the bigger picture, you can get out in front of a lot of those things in a more effective way.

Lazzarotti

It sounds like this is probably a good place to stop. We could probably talk for hours on this. There are some really important things to think about, not the least of which is what exactly you're doing and what the technology is doing, because that could trigger a whole range of obligations.

Silver

A great place to wrap up is on that exact point. Just one more thing to throw out there: over the last couple of weeks, all of a sudden, I've been getting tons of questions about AI-powered dash cams, the ones that record video out the front and into the cabin and also look at things like harshness of braking, wavering within a lane and other indicators that the driver might be tired or driving overly aggressively. One thing that's come up a number of times is that, in a few instances, the client didn't realize there was an AI component. In a few instances, they also didn't realize that, by default, the tool would capture audio. They were thinking of it purely as video. If we put a little sticker on the car saying this vehicle is subject to video surveillance, aren't we good? If that were all the tool was doing, sure, you would be good. However, this is another example of how there probably is more, or at least could be more, to the tool than meets the eye, beyond what you were looking for initially.

Understanding the full picture of what you're purchasing and making available to your employees is very important because otherwise, a lot of the steps you're taking and the policies you're rolling out may address the problem you thought you were addressing, but leave completely open other problems that you weren't aware of, because it just wasn't where your focus was. 

On that note, Joe, we'll wrap things up for this discussion. As always, if you have any thoughts for us on additional topics that we can cover or thoughts on this topic, you can email us at privacy@JacksonLewis.com.

OUTRO

Thank you for joining us on We get work®. Please tune into our next program where we will continue to tell you not only what’s legal, but what is effective. We get work® is available to stream and subscribe to on Apple Podcasts and YouTube. For more information on today’s topic, our presenters and other Jackson Lewis resources, visit jacksonlewis.com.

© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome. 

Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,000+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.