Details
CIO of Vista Clinical Laboratory Nick DeMeo joins podcast co-hosts Eric Felsberg and Joe Lazzarotti, co-leaders of Jackson Lewis’ Artificial Intelligence and Automation Group, to discuss the intersection of healthcare, cybersecurity and AI. The trio shares insights and tips on how to responsibly manage increasingly innovative and autonomous systems through human-centric governance.
Transcript
Eric Felsberg
Principal, Long Island
Hello everyone, and welcome to another episode of We Get AI for Work. My name is Eric Felsberg, and as always, I'm joined by my friend and colleague, Joe Lazzarotti.
Joe, we have a great episode planned for today. Why don't you go ahead and tell our listeners about our guest today?
Joseph Lazzarotti
Principal, Tampa
Good to see you, Eric. Today, we are joined by Nicholas DeMeo. Nicholas is the Chief Information Officer at Vista Clinical Diagnostics, where he sits at the intersection of healthcare, cybersecurity and our favorite topic, artificial intelligence. Vista Clinical is headquartered here in Clermont, Florida. It's grown from a small lab into a CAP-accredited organization serving hundreds of skilled nursing facilities, assisted living communities, and a host of providers around the Eastern United States. Nick is also the author of Cyber Defense: The Art of Forging a Sentinel, where he makes a compelling case that in healthcare, cybersecurity is, fundamentally, a patient safety issue. He challenges organizations to move beyond treating AI as just another tool and instead build what he calls a human-AI nexus, a model where human judgment, governance, and ethics are fused with machine speed and automation.
Today, we will explore how those ideas translate into real-world healthcare environments, how leaders can prepare for an AI-driven future, and what it means to defend increasingly autonomous systems while keeping governance human-centric. Nick, welcome to the podcast. Great to see you.
Nicholas DeMeo
Chief Information Officer, Vista Clinical Laboratory
Thank you so much. Really honored to be on here, and very thankful that I got the invitation.
Lazzarotti
Awesome. Well, we're grateful to have you. It's been a pleasure getting to know you during my time here in Florida and speaking with you on some panels, so really appreciate it. I thought it'd be helpful just for our listeners to get a sense of your role. I know we have a lot of clients with questions about governance. Maybe, weave that in and tell us what your day-to-day is like. Obviously, we want to talk about AI, and we're interested in that, but give us a sense of what you do for Vista.
DeMeo
Obviously, I am the Chief Information Officer, so I oversee general IT, cybersecurity, and our laboratory information system. We have a lot of small teams, so many of us, even at the executive level, wear many hats. I like to call myself a very hands-on CIO, so I don't stray away from managing devices in Intune or deploying EDR solutions. If it needs to get done, it gets done. Day to day, if there's something my team can't handle, it gets escalated to me, especially when it comes to policy enforcement. It's a lot easier for a CIO to go to somebody about not following policy than it is for an IT support specialist. There's a lot of that.
Also, I'm constantly staying innovative. AI has become such a big part of everybody's life, as well as my job; it's essentially like having a co-pilot. Things come up in the laboratory, and I'm not a doctor, but I also don't have time to get a PhD in clinical studies. So I combine the laboratory tools and frameworks out there and have AI help guide the best way to understand them. Then, I come up with ways to integrate that into our own laboratory system to give the patient the best life cycle that we can.
Felsberg
Nick, thanks for joining us today. One of the things that Joe and I spend a lot of time doing among ourselves and also with our guests is discussing the excitement around AI, but also some of the challenges that are attached to its use. We'd love to just hear a little bit from you, how you balance that excitement and the eagerness to jump into the AI world, but be mindful of some of the challenges.
DeMeo
The most important thing for me is to keep in mind what AI really is. You could say it's just a collection of Python scripts running. It's created by people, which means it's flawed. It also operates only on the information it can access. It's important to know that while it may do a really good job of giving the illusion that it's a genius, it's only as smart as the data it has access to. The challenge in healthcare is that when you ask a question, it can only answer based on what information is openly available on the internet, which is not always a great thing. There's no filter sometimes, and it's not able to dig into specifics. Take our laboratory, for example, where we use proprietary instruments, and those instruments have their own software or middleware governing how they measure a specimen and create a result. Not every vendor is the same; it's all unique to that vendor and that instrument. There's no way AI could possibly know that well enough to help you. In that sense, it's a challenge, because you're stuck doing a job very manually, going through lots of messages that are essentially almost a machine language. While they obviously contain English text, they're not structured in a way that's easily readable by a human. If you have to dig through hundreds of those and try to compare them and do an analysis with a human eye, it's not fun. It's time-consuming, and it's inaccurate. Having AI to leverage in those situations would be fantastic. The challenge is, how do you do that? How do you safely integrate AI with these unique situations? It's not just our lab; really, any healthcare business has some old, legacy, or homegrown system it has adapted to its business.
How do you create a general platform that lets you get the benefit of AI's machine speed and data analytics in those unique systems without compromising the intellectual property?
Felsberg
Interesting. Nick, just wanted to follow up on one thing you alluded to in your comment there, and that is the use of third-party vendors when using AI. What are some of the steps that you take when selecting vendors in the AI space to provide a particular service?
DeMeo
One of the most important things I look for is verification of transparency. Is this a model that they've created in-house? If so, what verification has it undergone? What certifications have they completed to validate the model, or are they just using OpenAI and have tailored it to their business? I've seen cases where they're just using an API key that connects to ChatGPT, but they're selling it as a proprietary AI company. Things like that are very dangerous. Really, the most important thing is transparency. They need to be able to tell you exactly what AI model they're using, how it was built, and how it's secured. That's step one before anything can really be evaluated. What I've found, especially in the cybersecurity space, is they say they've infused AI to help automate, when really all they're doing is connecting your own ChatGPT or Google Gemini account. I don't want that information just flying out to a commercial AI platform. People need to validate that before they sign anything, because that, for me, is a dealbreaker.
The second thing, outside of security and privacy, is the accuracy of the model itself, especially in healthcare. If they say their solution delivers results faster and more accurately, I ask what they have done to validate that in the healthcare space. What have they done to truly get control of the hallucinations, to make sure the model is stable and isn't going to change after six months? What does that validation process look like? Again, it really just boils down to my first point: transparency of the model.
Lazzarotti
Nicholas, in answering Eric's question, you talked about vendor management and procurement, and a little bit about some of the challenges and opportunities with AI. A lot of that gets woven into the whole question of governance and managing this. We get a lot of questions about that from organizations at various stages of maturity in the process. Some are at the very beginning, thinking about whether to use it. Some, like Vista, it sounds like, are further along. Just wondering, could you recommend or describe a little bit about how your organization approached governance, in a way that might be helpful for listeners who are thinking about it, and what next steps they may want to take to help them manage this new technology in their organization?
DeMeo
I start with the number one challenge when you're trying to build governance: company support and getting people involved who believe the same way you do. How do you tackle that challenge? You can't force them to believe something different, but you can educate them. The more you educate them, the more they realize, I don't really want to do this; maybe we should have a committee or something. I don't want to be responsible for something like that happening in my department and be the one who's blamed for it. Educating them on their risk is, for me, step one. A lot of people don't understand what AI is. They think it's magic, or they're very trusting of a solution because it gave them an awesome recipe over the weekend. It saved dinner plans, so it's totally trustworthy to put sensitive data in now. Education is number one. It's also a really good icebreaker into developing governance: now that you understand the risks and the liability, this is how we can fix it or prevent it from happening.
Inclusion as well. If we have a department, say our microbiology department, that is implementing a solution, it's probably a good idea to have somebody from that department involved to some extent so that they're included in the decision-making. If you don't include them, they're not going to enforce a decision they had no part in, or they may not understand why that decision was made. Step one is education; step two is to start building out roles and responsibilities and make sure the right people are included, so they're not getting blindsided by a policy that can completely disrupt what they're doing. Otherwise, the response is going to be that you didn't respect them, didn't take their thoughts into consideration, and you don't work down there. That turns into a big fight, and then compliance is out the window, because now you're arguing about something completely different. The very first step is really just to start educating and explaining why these things are necessary. Then, once you get the hooks in, you can start getting the formal framework laid out: roles and responsibilities, permissions, how we manage vendors, and the technical details.
Lazzarotti
You talked about inclusion, which makes a lot of sense. We see that also in the privacy policy and procedure space, because privacy, data, and AI really affect most parts of the organization. We also get a lot of questions about whether to use AI note takers. The reason that comes up is that everybody has access to that technology, so the question becomes, do we want everybody to have access to it? How do you manage a technology that can touch most parts and departments of the organization from a governance perspective?
DeMeo
That is really tricky. There are a lot of moving parts to what is supposed to be a simple assistant tool. What if it misinterprets what you said? What if there's an incident where we need to recall the notes, and the accuracy of the notes could lean one way or the other? That is difficult for our organization. We've tried to stay away from any type of audio recording, period, just because you can't control what happens after it's recorded. Even if you have a BAA signed or you're going through due diligence on the tool, at the end of the day, it's still an attack vector, and you can still lose control in some way because it's being managed by a third party. When you have a third-party tool, not that they're all bad, you are accepting that the security and privacy of that data now rely on the third party. A lot of the BAAs or contracts you sign with them may not require them to tell you anything for 60 days, or they may not have a time period at all. They may just say they'll tell you if there's a breach, with no explicit time period. It could be next year before you find out that you were breached and that a whole conversation got leaked, one where you were talking about patients, or discussing payments or credit card numbers, through a working session.
The most important thing is to ask, why do you need it? What are you going to use it for? Then, how do you prevent it from being used for a different type of use? That's step one. Do you really need a note-taking tool that's going to listen to your conversation or summarize your emails? Is the risk really worth it?
Lazzarotti
Maybe just as a final question, unless Eric has other thoughts: looking forward, as lawyers, clients look to us to help them understand compliance and what lies ahead. I'm assuming you saw the recent executive order by the president about wanting to curtail state regulation of AI, and the question of what its impact might be. At this point, we don't have any law as a result of it; it's just what the administration says it will do if states begin to enforce their laws or enact new ones. How do you, as an organization, look at that and react to it, if at all, at this point?
DeMeo
For us personally, because we're in two or three different states, it could affect us. Looking at the executive order, I'm leaning toward being for it, but I'm also in the middle, because when we talk about AI governance in an organization from a business point of view, we always point to centralized governance. There's one committee or group that oversees it, everybody else supports it, and then you enforce the policy. When you don't have centralization, you can run into a scenario where a state like California or New York creates its own AI law, and a business in Florida or Texas is affected by it because its customers live in those states. Now, those companies have to essentially change their business around one or two states' laws, which can really slow or actually deter innovation in the AI space.
When we're talking about this, it really goes to the AI arms race with China. Other countries could really put us behind if you have to work through 50 states' legislation before you can build a company or release a product; that can really slow things down and hinder them. You also end up with states that, because of their economic power, are essentially creating national AI laws that other states have to abide by unwillingly, simply because that's where the customer base is. There is that point of view.
The other side of it, though, is that we're in a unique situation: we are 50 states, almost 50 different countries inside the U.S. Our economic positions are completely different. If you say no state can make its own AI law or governance, how do you adjust one central law to accommodate all of those economic positions? Maybe for citizens in a state like Vermont, the number one concern is data privacy. But if an economic power like Texas or California only cares about innovation and making money, not the privacy aspect, then the citizens who voted for privacy-minded leaders in their state have pretty much lost their vote, because that concern is going to be superseded by bigger states that have more pull in Washington.
It can definitely affect us as well, because we're in different states. We have to model our company under one uniform approach that abides by both states' laws. What happens if they contradict each other? You're put in a position where it's like, I guess we just can't do business in that state; we have to choose our most profitable positioning, stay with that one, and cut the other loose. It's going to create quite a bit of confusion, simply because governments are notorious for taking something simple and making it complicated. You could use NIST, HIPAA, and frameworks like that to build out the AI laws, but there needs to be room at the state level for the granular needs of each unique economy. One state might be all about farming, while in Silicon Valley it's all tech and they don't care about farming. How do you balance those two needs under one law?
Felsberg
Nick, thanks very much. That's very interesting. Just briefly before we close out: Joe mentioned at the top of the episode that you've written a book. Congratulations on that. For our listeners, could you talk very briefly about what prompted you to take that on and what you hope readers take away from the book?
DeMeo
What prompted me was that I was teaching a cybersecurity program at the University of Central Florida. Looking at that program, at how certifications are set up, and then working alongside cybersecurity people in the industry, I realized that we're doing this completely wrong. No matter what tool we implement, threats and the success rates of attacks keep increasing by 30-40% each year. Obviously, it's not working. Our whole cyber defense is not working because we're teaching people to monitor or supervise a tool. If that tool is not deployed, or if it gets bypassed, our cyber defenders become useless. That's really what prompted me to write the book: to get out of that tool-supervisor mentality and get back to the human-centric mindset. We're the most important asset at a company, and we're also the weakest link, one that can't be fixed by a configuration. It can only be fixed by fellow humans supporting one another and understanding how we operate and how we think, and then attacking the problem that way. That's really what drove me to write the book. It'll be beneficial for anybody looking to get into cybersecurity, or anybody already in it, to reshape the way they view their job and tackle their daily tasks.
Also, my last chapter is focused on AI and the fact that we're human. Let's keep the human things with humans and let the machine do what the machine can do. Then, find the balance between the two so that you get the most efficient, best-performing defender out of that bundle. It's really important that we teach that in a responsible manner.
Felsberg
That's a great point to close on, and Nick, we appreciate your time. That was a great discussion. Of course, Joe, it's always a pleasure chatting with you and with our listeners. If you have any questions, by all means, reach out to us. If you have an idea of a topic that you would like us to cover, please let us know.
Of course, if you're interested in being a guest on our podcast, we'd be happy to speak with you about that as well. You can contact us at AI@JacksonLewis.com.
© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.
Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,100+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.