5 Ethical Considerations for Implementing Machine Learning in Healthcare

Like it or not, researchers have predicted that machine learning (ML) might soon replace physicians in fields that involve close scrutiny of images, such as anatomical pathology and radiology. Because these algorithms hold great promise for healthcare, hands-on experience with ML tools will be essential for the next generation of clinicians examining big data. Private companies, meanwhile, have already been rushing to incorporate ML into medical decision-making.

However, the ethical challenges of ML in healthcare have not always received proportionate attention in scientific studies; only recently has that started to change for the better. If healthcare organizations are to realize the benefits of ML, the ethical challenges inherent in using such tools must be taken into consideration. Some challenges are straightforward; others, though their risks are less apparent, raise broader concerns, such as how algorithms may become the repository of the collective medical mind. In this blog, we discuss what ML is and the ethical complexities that come with it.

What is Machine Learning?


ML is a branch of computer science and artificial intelligence (AI) that uses algorithms and data to imitate the way humans learn, gradually improving in accuracy.

The Mechanism Behind Machine Learning

UC Berkeley breaks an ML algorithm down into three components (a brief code sketch follows this list):

  • Decision Process: At its core, an ML algorithm makes a classification or prediction. Based on input data, whether labeled or unlabeled, it estimates a pattern in the data.
  • Error Function: An error function evaluates the model’s prediction; if known examples are available, it can assess the model’s accuracy through comparison.
  • Model Optimization Process: The algorithm reviews its misses and updates how it reaches decisions, reducing the discrepancy between the model’s estimates and the known examples.
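
To make these three components concrete, here is a minimal sketch of gradient descent on a hypothetical linear model in Python. The data, learning rate, and step count are made up for illustration; this is only one way to instantiate the breakdown above, not part of UC Berkeley's definition.

```python
# Minimal sketch: the three components of an ML algorithm, illustrated with
# gradient descent on a hypothetical linear model. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # illustrative input features
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

weights = np.zeros(3)
learning_rate = 0.1

for step in range(200):
    # Decision process: produce a prediction from the input data.
    predictions = X @ weights

    # Error function: compare predictions with known examples (mean squared error).
    error = predictions - y
    loss = np.mean(error ** 2)

    # Model optimization: update the weights to reduce the discrepancy
    # between the model's estimates and the known examples.
    gradient = 2 * X.T @ error / len(y)
    weights -= learning_rate * gradient

print("final loss:", round(loss, 4))
```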

Electronic Health Records – the Basis for Machine Learning in Healthcare


The availability of electronic medical records (EMRs) and electronic health records (EHRs) is the basis for integrating ML algorithms into medical applications. Companies in the United States hold massive EHR collections. Cerner, for example, has roughly a 25% share of the EHR market and plans to work with Amazon to analyze its collections using ML. Flatiron and Foundation Medicine are other examples of companies whose patient data collections are large and clean enough for immediate use in ML applications. Analyzing EHRs with ML requires, at a minimum, centralized data collections or central databases.
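
As a rough illustration of why centralized, reasonably clean EHR data matters, the sketch below aggregates a hypothetical, de-identified EHR extract into per-patient features for an ML model. The file name, column names, and diagnosis-code prefix are assumptions for illustration, not any vendor's actual schema.

```python
# Illustrative sketch only: shaping a hypothetical, de-identified EHR extract
# into a per-patient feature table. The file and columns are placeholders.
import pandas as pd

records = pd.read_csv("ehr_extract.csv")   # hypothetical central data collection

# One row per patient: aggregate encounter-level rows into simple features.
features = (
    records
    .groupby("patient_id")
    .agg(
        num_encounters=("encounter_id", "nunique"),
        mean_systolic_bp=("systolic_bp", "mean"),
        has_diabetes_dx=("diagnosis_code", lambda codes: codes.str.startswith("E11").any()),
    )
    .reset_index()
)

print(features.head())
```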

Ethical Considerations for Machine Learning Applications in Healthcare


Despite its robust utility, implementing ML in healthcare raises several ethical concerns. Surveys on the use of patient data for ML applications in medicine and healthcare point to notable regulatory and ethical challenges around accountability, fairness, privacy, transparency, and the impact on jobs.

Accountability

Accountability refers to a set of norms for examining an entity’s conduct. It entails an interaction between entities in which one party’s actions require justification. ML algorithms create a new circumstance: actors in the ML pipeline cannot predict a machine’s future actions and thus cannot easily be held liable or morally responsible for them. And ML algorithms and AI do make mistakes. Large language models periodically hallucinate, producing irrelevant or factually incorrect answers when they lack sufficient information. The viral generative AI ChatGPT, for instance, still exhibits hallucinations and social biases. Tech giants and AI software coding consultants are working to address such issues, but the risks remain high for medical and patient care settings.

There is also no sufficient enforcement mechanism to ensure that healthcare organizations follow ethical ML practices, because little legislation currently regulates them. For now, the negative repercussions an unethical ML system can have on the bottom line are the only real incentive for companies to behave ethically. Research findings suggest that this blend of distributed responsibility and a lack of proactivity about possible repercussions may be ineffective at averting societal harm.

However, blaming the algorithm should not be a satisfactory excuse when algorithmic systems make mistakes or cause harm, including in ML-driven healthcare processes. Ethical frameworks have therefore emerged as researchers and ethicists collaborate to govern the development of ML models. These nevertheless serve only as guidance; as ML applications multiply, governments will need to find more suitable solutions to fill these legal gaps.

Fairness

Just as ethics vary across societies, perspectives, and values, fairness is contextual and has no uniform definition. One core notion, however, is that people should receive equal treatment unless there is a relevant, justified reason to treat them differently.

When examining ethical dilemmas, fairness serves as a compass for weighing different principles and navigating complex issues. Ensuring impartial treatment means evaluating both processes and outcomes, focusing on the equitable allocation of gains and costs while avoiding arbitrary decisions and unfair bias.

Instances of discrimination and bias across multiple ML systems have sparked ethical questions about the use of ML in healthcare. If training data is itself the product of biased human processes, how can stakeholders safeguard against discrimination and bias? Amazon, for instance, unintentionally discriminated against job applicants for technical roles by gender when it attempted to simplify and automate its hiring process. And bias is not limited to hiring.

Bias has several causes, ranging from problems with algorithmic design to human perception and decision-making. The data or the measurement itself can be a cause, such as when an instrument works less well for some groups. Another is geographic bias against people from regions with less access to healthcare or medical facilities. The most prevalent cause is perhaps the replication of existing biases when algorithms make decisions based on historical data. It is worth noting that, because bias can be favorable or unfavorable, the crux is to mitigate the biases that produce inequitable outcomes. As organizations become more mindful of these risks, more discussions around ethics and values arise when using ML in healthcare.
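
As one illustration of how a team might probe for inequitable outcomes, the sketch below compares the rate of positive model predictions across two hypothetical patient groups (a simple demographic parity gap). The predictions and group labels are fabricated for illustration; real fairness audits use richer metrics and clinical context.

```python
# Hypothetical sketch of one simple fairness check: comparing positive
# prediction rates across patient groups. All values below are made up.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = flagged for follow-up
groups      = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

# Positive prediction rate per group, then the gap between groups.
rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())

print("positive rate by group:", rates)
print("demographic parity gap:", round(gap, 2))
```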

Privacy

Discussions about privacy tend to center on data security, data privacy, and data protection. In healthcare, data privacy challenges arise mainly from EHRs, and the data used to train models is a key concern. Systems may collect data in ways that violate privacy, such as scraping personal information or gathering it without consent. Large language models may also leak personal data. On top of that, inference-style or reverse-engineering attacks can de-anonymize training data, compromising even legitimately collected data. These concerns have pushed policymakers to make greater strides recently.
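
One basic, partial safeguard is to pseudonymize direct identifiers before records are pooled for training. The sketch below hashes a patient identifier with a secret salt; the salt value, identifier, and record fields are placeholders, and hashing alone does not defeat the inference-style attacks described above, so it is only one layer of protection.

```python
# Illustrative sketch: replacing a direct identifier with a salted hash
# before records are pooled for model training. This is pseudonymization,
# not full anonymization; re-identification risks can remain.
import hashlib

SALT = "replace-with-a-secret-salt"   # assumption: stored securely, e.g. in a secrets vault

def pseudonymize(patient_id: str) -> str:
    """Return a stable, non-reversible token for a patient identifier."""
    return hashlib.sha256((SALT + patient_id).encode("utf-8")).hexdigest()

record = {"patient_id": "MRN-001234", "age": 62, "diagnosis_code": "E11.9"}  # placeholder record
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)
```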

Transparency

In a broad sense, transparency means giving stakeholders access to the information they need to make informed decisions about the application of ML in healthcare. As a holistic concept, it covers both the ML models themselves and the pipeline they follow from inception to implementation. The Ethics Guidelines for Trustworthy AI suggests three parts that constitute transparency:

  • Traceability – ML implementers should comprehensively document their definitions, assumptions, design choices, and goals.
  • Communication – ML implementers should be open about their use of ML technology alongside its limitations.
  • Intelligibility – Stakeholders of ML systems should be able to monitor the functioning of those systems to the extent needed to attain their goals.

Implementers of ML systems therefore hold an ethical responsibility to record and report model performance metrics properly.
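
As a minimal illustration of what such recording could look like, the sketch below writes performance metrics together with their evaluation context to a JSON file, in the spirit of the traceability and communication elements above. The model name, metric values, and field names are placeholders rather than any standard reporting format.

```python
# Minimal sketch: recording model performance metrics with context so they
# can be traced and communicated later. All names and numbers are placeholders.
import json
from datetime import date

model_report = {
    "model_name": "readmission-risk-classifier",   # hypothetical model
    "version": "0.3.1",
    "evaluation_date": str(date.today()),
    "evaluation_data": "held-out test set of de-identified discharges",
    "metrics": {"auroc": 0.81, "sensitivity": 0.74, "specificity": 0.79},
    "known_limitations": [
        "not validated on pediatric patients",
        "performance not yet reported by demographic subgroup",
    ],
}

# Persist the report alongside the model artifacts for later review.
with open("model_report.json", "w") as f:
    json.dump(model_report, f, indent=2)
```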

Impact on Jobs

Although much of the public perception of ML and AI centers on job losses, this concern deserves reappraisal. With every new, disruptive technology, demand for particular positions shifts. For example, GM and other automobile manufacturers are gearing up for electric vehicle production in pursuit of green initiatives, yet production volume and the need for appropriately skilled talent are not changing much.

Likewise, ML will shift the demand for jobs to other areas. There will be a need for people who can manage ML systems or tackle more intricate problems. The bigger challenge of using ML in healthcare is instead helping people transition to these new in-demand roles.

Need Help With Implementing Machine Learning in Healthcare Applications?

CTA graphic

Choose KMS Healthcare as your technology partner. Our team of experienced developers can revolutionize your research, testing, integration, and delivery. Contact KMS to learn more about our position as a top healthcare machine learning company and solutions provider.

