Is artificial intelligence a cybersecurity ally or menace?

MITRE's Dr. Brian Anderson talks about the pros and cons of AI in cybersecurity – and the role of generative AI like ChatGPT – in a preview of his panel session at the upcoming HIMSS Healthcare Cybersecurity Forum.
By Bill Siwicki

Dr. Brian Anderson, chief digital health physician at MITRE

Photo: MITRE

Artificial intelligence is pushing cybersecurity into unprecedented territory, offering benefits and drawbacks alike as it assists both attackers and defenders.

Cybercriminals are using AI to launch more sophisticated and novel attacks at greater scale. And cybersecurity teams are using the same technology to protect their systems and data.

Dr. Brian Anderson is chief digital health physician at MITRE, a federally funded nonprofit research organization. He will be speaking at the HIMSS 2023 Healthcare Cybersecurity Forum in a panel session titled "Artificial Intelligence: Cybersecurity's Friend or Foe?" Other members of the panel include Eric Liederman of Kaiser Permanente, Benoit Desjardins of UPENN Medical Center and Michelle Ramim of Nova Southeastern University.

We interviewed Anderson to help unpack the implications of both offensive and defensive AI and examine new risks introduced by ChatGPT and other types of generative AI.

Q. How exactly does the presence of artificial intelligence bring up cybersecurity concerns?

A. There are several ways AI raises substantive cybersecurity concerns. For example, nefarious AI tools can enable denial-of-service attacks, as well as brute-force attacks on a particular target.

AI tools also can be used in "model poisoning," an attack in which an adversary corrupts a machine learning model (by tampering with its training data or inserting malicious code, for example) so that it produces incorrect results.
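To make the idea concrete, here is a minimal sketch of one form of poisoning, flipping labels in a model's training data, using scikit-learn on synthetic data. The dataset, model choice and 30% poisoning rate are all hypothetical, chosen only to illustrate the effect; this is not a description of any real attack observed by MITRE.

```python
# Illustrative sketch of a label-flipping "poisoning" attack on a toy model.
# Synthetic data and hypothetical parameters throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean.score(X_test, y_test))

# Attacker flips 30% of the training labels before the model is (re)trained.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Running the sketch shows the poisoned model's test accuracy dropping well below the clean baseline, which is the attacker's goal: a model that silently produces wrong answers.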

Additionally, many of the freely available AI tools – such as ChatGPT – can be tricked through prompt engineering into writing malicious code. Particularly in healthcare, there are concerns around protecting sensitive health data, such as protected health information.

Sharing PHI in prompts to these publicly available tools raises data privacy concerns, and many health systems are struggling with how to prevent this kind of data sharing and leakage.
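One common mitigation, sketched below purely as a hypothetical example rather than any particular health system's control, is to screen outbound prompts for likely PHI before they reach a public tool. A production deployment would rely on a dedicated de-identification or data-loss-prevention service, not a handful of regexes.

```python
# Hypothetical sketch: screen outbound prompts for likely PHI before they
# reach a public generative AI service. Patterns are illustrative only.
import re

PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PHI categories detected in a prompt, if any."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize the chart for MRN: 00482913, DOB 04/12/1957."
hits = screen_prompt(prompt)
if hits:
    print(f"Blocked: prompt appears to contain PHI ({', '.join(hits)})")
else:
    print("Prompt allowed")
```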

Q. How can AI benefit hospitals and health systems when it comes to protection against bad actors?

A. AI has been helping cybersecurity experts identify threats for years now. Many AI tools are currently used to identify threats and malware, as well as to detect malicious code inserted into programs and models.

Using these tools – with a human cybersecurity expert always in the loop to ensure appropriate alignment and decision-making – can help health systems stay one step ahead of bad actors. AI trained in adversarial tactics offers a powerful new set of tools that can help protect health systems from optimized attacks by malevolent models.
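As a simplified illustration of this kind of AI-assisted detection, the sketch below flags anomalous sessions with an isolation forest and routes them to a human analyst, keeping the expert in the loop as Anderson describes. The features, thresholds and data are invented for the example.

```python
# Simplified sketch of AI-assisted threat detection: flag anomalous
# sessions with an isolation forest, then route them to a human analyst.
# Features and data are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: bytes transferred, login hour, failed-login count.
normal = np.column_stack([
    rng.normal(5e4, 1e4, 500),   # typical transfer sizes
    rng.normal(13, 2, 500),      # business-hours logins
    rng.poisson(0.2, 500),       # rare failed logins
])
suspicious = np.array([[5e6, 3.0, 25]])  # huge transfer, 3 a.m., many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
for session in np.vstack([normal[:2], suspicious]):
    verdict = "REVIEW" if model.predict([session])[0] == -1 else "ok"
    print(verdict, session)
```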

Generative models such as large language models (LLMs) can help protect health systems by identifying and predicting phishing attacks or flagging harmful bots.
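Here is one minimal sketch of putting an LLM into that loop to triage a suspicious email. It uses the OpenAI Python client as an example of such a service; the model name, prompt wording and email are illustrative assumptions, not recommendations, and any comparable service could stand in.

```python
# Hypothetical sketch: ask a general-purpose LLM to triage a suspicious
# email. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

email = """From: it-helpdesk@examp1e-hospital.com
Subject: URGENT: password expires today
Click here within 2 hours to keep access: http://examp1e-hospital.com/reset
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You triage emails for a hospital security team. Reply "
                    "with PHISHING or LEGITIMATE and one sentence of reasoning."},
        {"role": "user", "content": email},
    ],
)
print(response.choices[0].message.content)  # a human analyst reviews the flag
```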

Finally, insider threats – such as employees leaking PHI or other sensitive data (for example, by pasting it into ChatGPT) – are another emerging risk that health systems must develop responses to.

Q. What cybersecurity risks are introduced by ChatGPT and other types of generative AI?

A. ChatGPT, along with future iterations of GPT-4 and other LLMs, will become increasingly effective at writing novel code that could be used for nefarious purposes. These generative models also pose the privacy risks I mentioned earlier.

Social engineering is another concern. By producing detailed text or scripts, or even reproducing a familiar voice, LLMs could impersonate individuals in attempts to exploit vulnerabilities.

I have a final thought. It’s my sincere belief as a medical doctor and informaticist that, with the appropriate safeguards in place, the positive potential for AI in healthcare far exceeds the potential negative.

As with any new technology, there is a learning curve to identify and understand where vulnerabilities or risks may exist. And in a space as consequential as healthcare – where patients' wellbeing and safety are on the line – it's critical we move as quickly as possible to address those concerns.

I look forward to gathering in Boston with this HIMSS community, so committed to advancing healthcare technology innovation while protecting patient safety.

Anderson's session, "Artificial Intelligence: Cybersecurity's Friend or Foe?" is scheduled for 11 a.m. on Thursday, September 7, at the HIMSS 2023 Healthcare Cybersecurity Forum in Boston.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
