Details
As AI-powered smart glasses and related emerging tech blur the line between everyday activity and constant data collection, organizations face growing privacy, security and compliance risks. In this episode, Jackson Lewis Privacy, AI and Cybersecurity co-leaders Damon Silver and Joe Lazzarotti clarify what these technologies are, why they’re raising red flags across industries, and how employers should respond.
Transcript
Damon Silver
Principal, New York City
Welcome to the We Get Privacy podcast. I'm Damon Silver, and I'm joined by my co-host, Joe Lazzarotti. Joe and I co-lead the Privacy, AI, and Cybersecurity group at Jackson Lewis. In that role, we receive a variety of questions every day from our clients, all of which boil down to the core question of how do we handle our data safely? In other words, how do we leverage all the great things data can do for our organizations without running headfirst into a wall of legal risk? How can we manage that risk without unnecessarily hindering our business operations?
Joe Lazzarotti
Principal, Tampa
On each episode of the podcast, Damon and I are going to talk through a common question that we're getting from our clients. We're going to talk through it in the same way that we would with our clients, meaning with a focus on the practical. What are the legal risks? What options are available to manage those risks? What should we be mindful of from an execution perspective?
Today, we're going to talk about a technology that's been around, went away for a little bit, and seems to be back pretty boldly – the AI smart glasses people are wearing. They have a wide range of functionalities, so the question is: what are these smart glasses? What do they do? What problems do they create? Most recently, we've seen many reports about organizations, including the U.S. Air Force, banning these glasses, and I know some other industries have as well; we'll talk a little bit about that. Damon, as a starting point, let's frame the technology: what do these glasses do, what are their capabilities, and in what contexts are we seeing them used?
Silver
These AI smart glasses are essentially AI-powered computers that are built into what look like regular glasses like I'm wearing right now. They have the ability to capture audio from around where the wearer of the glasses is located, as well as video. They may have facial recognition capabilities so they can recognize people in the area and provide information about them. They're AI-powered, and AI can analyze all these inputs, including what's written on a computer screen, on a piece of paper, or on a sign where the person is located. It's basically allowing the wearer to have the capabilities they might have sitting at their computer, but everywhere they go, processing what they are seeing and hearing in real time.
Joe, that all sounds cool and everything; I'm guessing you're going to share some concerns with the audience.
Lazzarotti
Certainly. When I think back to when I was growing up, if we had had these devices, we probably would have gotten into a lot more trouble than we did. They do seem to create a lot of concern. I was reading a couple of articles today about how people forget they have them on and walk into places like bathrooms, filming things they didn't intend to. It's just one of those issues that happens.
When you look at some of the organizations, the U.S. Air Force, for example, is banning these in certain contexts. National security probably comes to mind, and the secrets the government obviously has are things they're trying to protect. Some banks, again, pretty obvious, right? Wanting to protect customer data, non-public personal information. There are a couple of cruise lines that have banned these glasses, colleges, and some courts.
There's a lot of concern about the recording of audio without proper consent, video that could become problematic when filmed inadvertently, and, perhaps now with AI, the creation of digital replicas based on those recordings. Again, that creates some problems. There's also the data that is stored: you could be standing in line capturing conversations from people around you, and that data all gets stored. If not on the device itself, it's usually stored on the connected phone, so you now have that data on your phone. How does this work, say, in a retail environment where an employee is walking around hearing customers? Maybe they can look down at the register and see credit card information.
There are all kinds of things you can think about that may happen not just with videos and audio, but with everything that someone sees during the day. That's now a recorded, documented event, and you lose control over it if you're trying to protect certain things in your organization at a particular location. What happens with that? How do you protect that? There's the whole ethics issue around that. Should we be recording everything that we see? There's been a lot of back and forth from privacy advocates about mass surveillance in cities like New York and others where there's just a camera every X feet. How does that affect society? I don't know that we have the answers to those questions, but those are some of the things that we're seeing.
Maybe, Damon, there are more practical instances that you have in mind.
Silver
I thought you covered it well. Some of the issues raised here are not new per se, but there are layers to them that become more complicated in this context. For example, we're often asked by clients about putting up surveillance video cameras. Typically, as long as you keep them out of private places like bathrooms and locker rooms, you're permitted to do so on your premises. We get asked about audio recording, say via dash cam or on video conferences or phone calls. Again, generally speaking, you can do it if you provide notice to all the people involved. For your own employees, that's usually through a policy. For recordings of external conversations, you can use one of those announcements we've all heard at the beginning: "This call may be recorded and monitored for training and quality assurance purposes."
The challenge when someone is wearing AI glasses, and in particular, Joe, as you noted, when the person wearing them is not even conscious that they're recording, is that there's really no way to get consent from everyone whose conversation might be captured. Wearers are going to end up in more private places and still be recording on the glasses. Some of the more traditional privacy issues become more complicated with something like this, which sits on someone's face and goes with them everywhere. It may be forgotten, and it may not be known to their employer or to other people who have a stake in what's being collected.
Lazzarotti
To your point, there's this whole idea of people just living their lives. You could be wearing these glasses during your off hours. Suppose you're a licensed practical nurse in a nursing home, and you show up for work on Monday. All of a sudden, you put on these glasses and start walking around, potentially capturing information as you normally do your job, without even realizing it. The regulatory environment in that situation is HIPAA, and the question becomes: what protected health information are you collecting now? How is that being managed? If you're in a school environment, you might have FERPA issues because of the data you're surrounded by that's now being collected. One of the things to think about is: what is the regulatory environment that you're subject to? Maybe it's not a regulatory mandate. Maybe you have clients or customers who, by contract, negotiated certain rules around what type of data you can collect.
Even if you can collect it, how is that data to be safeguarded? In most cases, having it stored on an employee's personal device that connects to their smart glasses probably isn't what they had in mind when they thought about the security of their information. Just doing what you do every day, walking around in your organization's building, facility or warehouse, could create issues that, again, Damon, to your point, you're not even aware of while going about your business.
Silver
That is a huge one, especially if you're dealing with a knowledge worker. Every day, in their emails and the documents they work in, there is likely going to be data that is sensitive to your organization, sensitive because it's protected by a statute, or sensitive because one of your contractual partners has designated it confidential information, and just the act of that employee having it uploaded to their personal device is very likely going to be a violation. You just have no idea, as the organization responsible for that data, what is then happening to it. The person wearing the glasses may not fully understand which other devices are synced with that device and what its security settings are. Is that data then used to train LLMs used by other customers? There's just a lot that can't be vetted when an employee device collects the data, versus when an organization rolls out an application in a more measured way.
Lazzarotti
I'm curious about your thoughts on this, Damon. Some clients have raised concerns about people showing up for interviews wearing these glasses: one, they're recording the interview; two, they may be using the glasses to help them perform better in the interview; and three, there's the question of whether we can ban them and, if so, whether there are any exceptions. Any thoughts on that one situation where these glasses are being used in an employment context?
Silver
Most, actually I would say all, of the clients I have discussed that use case and other employment-related use cases with have landed on banning the glasses, because there really isn't a good way, aside from banning them, to manage the risks we have discussed here. We may reach a point where companies issue smart glasses the way they issue phones, tablets and other wearables; it makes sense for a company to issue the device, manage all the data and set rules around it. As it stands now, though, I haven't yet talked to a client that is in that place. It's all coming from the job applicant, the employee or the contractor who wants to wear the glasses. Where we've landed, after talking through the various considerations, is that, outside of someone potentially claiming a disability-related reason for needing the smart glasses as an accommodation, which you obviously need to evaluate through the normal process, there really isn't much benefit to allowing it, and there is significant downside risk that is very hard to manage in any feasible way.
Lazzarotti
Over the course of this podcast, we've touched quite a bit on the different technologies that organizations develop, deploy and adopt. It's really a matter of trying to think each one through, understand the risks, and figure out how to manage them or not, depending on what policies, procedures and constraints the organization has.
With that, unless Damon has any other things to add, it might be a good place to kind of wrap this one up.
Silver
I totally agree. Good discussion. Definitely something that we and others will continue to monitor. That's a great point, Joe. There are so many new technologies coming online, even within the same space, whether it's smart glasses, chatbots or electronic monitoring software. They're not all created equal, in how they're set up, in their attention to privacy, and in the features they offer to give you better controls. While, of course, you want to look at it holistically and ask how you're going to manage all these technologies, you also need to dig in on each of them and really understand what you're dealing with and what options you might have, particularly if you think they might be useful. There probably is a way to become more comfortable with the risk profile, but you need to dig into it to do so.
Lazzarotti
Absolutely, strong governance, risk and compliance programs are critical. Thank you, Damon, and thanks everyone for listening today. If you have any questions about this program or suggestions for future programs, please reach out to us at Privacy@JacksonLewis.com. Thanks so much.
© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.
Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,100+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.