Providers, politicians, patients present perspectives on a healthcare industry integrating AI

Sophie Nguyen

August 5, 2024

Data access and transparency – how data is collected, stored and shared – are essential to innovation in the medical field. At the same time, doctors and other healthcare providers have an ethical duty, at the very foundation of healthcare, to keep patient information confidential. Lawmakers trying to regulate the role of artificial intelligence (AI) in healthcare are grappling with these competing demands as they try to strike the right balance.

At its core, medical innovation is driven by personal data. For instance, a new approach to healthcare called precision medicine aims to provide a personalized treatment plan for patients by considering each individual’s unique biometric information and lifestyle. Many of the medical devices at the heart of precision medicine are powered by AI; they collect “genetic information, physiological monitoring data, or EMR data.” With massive amounts of personal health information processed by AI, data privacy concerns are rising among patients and their families in California, especially since the state lacks a universal privacy law and instead relies on a patchwork of separate regulations.

And precision medicine is just one example of AI entering the healthcare industry: applications of AI have grown by approximately 40% each year from 2016 to 2024 across the country, according to a study from Global Market Insights. From machine learning applications to physical robots, AI has been used by health professionals to streamline personalized treatment, diagnosis and surgery.

“If you’re going to provide privacy, then you’re not going to have transparency [of how data is used in training] … Individual privacy should be protected, but the process that you’re dealing with should be transparent,” California state Sen. Roger Niello (R-Roseville), who sits on the Senate Business, Professions, and Economic Development Committee, said.

Lawmakers are debating medical ethics and AI regulation while hospitals continue to introduce new devices and AI-powered technologies to handle administrative tasks and streamline surgical processes. Nurses, for instance, are often tasked with transcribing each patient checkup; by handing that work to AI bots built on natural language processing (NLP) algorithms, nurses and doctors can have more dynamic and comprehensive interactions with their patients rather than focusing on note-taking.

“Many AI methodologies can transcribe doctor and patient interactions and then generative AI can summarize those transcriptions automatically, enabling nurses and doctors to engage more meaningfully with patients without being distracted by note-taking,” UC Davis Health Chief AI Advisor Dennis Chornenky said, adding that AI can also annotate medical coding for billing and “analyze patient data in real-time.”
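
As a rough illustration of the workflow Chornenky describes – transcribe the visit, then summarize it into a draft note for a clinician to review – here is a minimal, hypothetical sketch in Python. The function names, sample text and file name are placeholders for illustration only, not any hospital’s or vendor’s actual system.

```python
# Hypothetical sketch of an ambient-documentation pipeline: transcribe a
# recorded doctor-patient interaction, then summarize it into a draft note
# that a clinician reviews before anything enters the medical record.
# The speech-to-text and summarization steps below are stand-in placeholders.

from dataclasses import dataclass


@dataclass
class VisitNote:
    transcript: str  # verbatim text of the conversation
    summary: str     # generative-AI draft note, pending clinician review


def transcribe(audio_path: str) -> str:
    """Stand-in for an NLP speech-to-text model; returns sample text here."""
    return "Doctor: How is the cough? Patient: Better since last week."


def summarize(transcript: str) -> str:
    """Stand-in for a generative-AI summarization model."""
    return "Patient reports cough improving over the past week."


def document_visit(audio_path: str) -> VisitNote:
    transcript = transcribe(audio_path)  # step 1: transcription
    summary = summarize(transcript)      # step 2: draft summary
    return VisitNote(transcript=transcript, summary=summary)


if __name__ == "__main__":
    note = document_visit("visit_recording.wav")
    print(note.summary)
```

In a real deployment, both steps would also need the access controls and disclosure practices discussed in this article so that patient privacy obligations still hold.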

Aside from the debate over whether the data should be collected at all, experts also noted problems with how information about data collection is disclosed to patients. Health tracker apps collect data such as heart rate variability, glucose levels, body temperature and skin temperature from consumers to offer recommendations about sleep and exercise schedules or predictions about immediate health emergencies. Patients using these devices might not be equipped to navigate digital settings or comprehend application disclaimers.

Last September, California Gov. Gavin Newsom signed an executive order addressing the growth of AI in the state. The order directed state agencies and departments to produce risk-analysis reports and conduct ongoing evaluations of AI use in the public sector. In addition, the state pledged to work with vendors to provide patients with disclosures on whether their treatment plans use generative AI.

“These [disclosure statements] should use plain language, avoid jargon, and provide explanations for any necessary technical terms. I expect in the future we will see more visual aids, infographics, and flowcharts to help patients make sense of complex information disclosures. Standardizing disclosures across the industry can also help, ensuring consistency and clarity,” Chornenky said, stating that regular reviews of such disclosures could provide insight into patient understanding. 

Another concern with digital devices is their susceptibility to hacking, a risk that is particularly acute when it comes to patient data. The Food and Drug Administration (FDA) has cataloged over 800 medical devices powered by AI that can be used in hospitals and healthcare settings across the United States. But these digital devices are at risk of data breaches, which could expose “confidential preference data, surgeon-specific performance information, or patients’ personal health information,” according to a 2022 study. 

Cathy Kennedy, President of the California Nurses Association (CNA) and Vice President of National Nurses United (NNU), said patient consent is key before any new AI technology is introduced: “By and large, this technology is untested, unregulated, and unproven. Our patients should not be guinea pigs.”

Despite these concerns about the technology’s potential effects, solutions do exist. For example, training programs for a diverse group of employees – including health professionals, communication specialists, attorneys, privacy officers and security officers – can help give the next generation of the workforce the skills needed to navigate AI regulation in healthcare.

“The training we should be providing is not the technology itself … [it’s] utilization,” Amy Tong, California’s Secretary of Government Operations, who currently leads efforts to regulate Generative Artificial Intelligence (GenAI) in state government, said. “Private industry, quite frankly, has a more competitive market for [generative AI], but what we think we can be best at is training the analyst.”

Regardless of the amount and type of data collected by AI devices, data collection must always adhere to the federal Health Insurance Portability and Accountability Act (HIPAA) and other privacy laws, California Assemblymember Josh Lowenthal (D-Long Beach) said, emphasizing the importance of patient autonomy over personal data.

“Our data belongs to us. And that should be an inalienable human right. And for whatever reason, we allow that to pass by without much more robust protections,” Lowenthal said.

About the author

Sophie Nguyen is a 2024 JCal reporter from Placer County.

JCal is a free program that immerses California high school students into the state’s news ecosystem. It is a collaboration between the Asian American Journalists Association and CalMatters.