A periodic feature by Cornerstone Research in which our affiliated experts, senior advisors, and professionals talk about their research and findings.
We interview Professor Ben Handel of the University of California, Berkeley, to gain his insights into the benefits of algorithms in healthcare, concerns about bias, and their potential uses.
You have been studying the use of algorithms in the healthcare field. What is an algorithm, and how are you seeing algorithmic tools used in healthcare?
An algorithm is a set of instructions that can be used to solve problems, perform tasks, or make decisions when given certain inputs. In the medical setting, such inputs include a patient’s health information, claims data, and clinical protocols, among other details. Algorithms are currently used across numerous areas in the healthcare industry, including to solve problems or make decisions about disease diagnosis, treatment, administrative tasks, and health insurance plan design. Similarly, algorithmic tools and methods have also been essential to drug discovery in recent years. They are used in all stages of drug discovery, including finding new uses of drugs, predicting drug-protein interactions, and analyzing digital data in clinical trials.
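As a concrete, deliberately oversimplified illustration of "a set of instructions that make decisions when given certain inputs," a clinical decision algorithm can be as simple as a rule mapping patient data to a recommendation. The function name and thresholds below are invented for illustration and are not medical guidance:

```python
# A minimal, hypothetical decision algorithm: a rule that maps patient
# inputs (age, family history) to a screening recommendation.
# Thresholds are invented for illustration, not medical guidance.
def recommend_screening(age: int, has_family_history: bool) -> str:
    if age >= 50 or has_family_history:
        return "recommend screening"
    return "routine follow-up"

print(recommend_screening(55, False))  # -> recommend screening
print(recommend_screening(35, False))  # -> routine follow-up
```

Real diagnostic and treatment algorithms replace such hand-written rules with models estimated from large volumes of patient data, but the basic structure — inputs in, decision out — is the same.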
As diagnostic tools, algorithms can support physicians who make diagnoses based on visual information—such as pathologists and radiologists—by reading medical images quickly and accurately. With regard to treatment, algorithms can assist during surgeries or help develop and modify treatment plans.
Algorithms can also be used for scheduling medical care and processing insurance claims, thus easing the administrative burden of providing healthcare.
Additionally, health insurers can use algorithmic tools to enhance and personalize services for their customers, such as by developing customized insurance plans for patients with chronic illnesses. Insurers can also use algorithms to adjudicate insurance claims and set premiums.
What are the benefits of using algorithms in healthcare settings?
There are many potential benefits of using advanced algorithms in healthcare. Algorithms can improve accuracy in the delivery of medicine. For instance, algorithms can enhance the quality of care by detecting or mitigating human errors resulting from subjective decision-making in diagnosis and treatment. Recent academic work has attempted to quantify this potential benefit.
Algorithmic technologies can also improve the efficiency of healthcare delivery and the cost-effectiveness of care. This outcome can be achieved, for example, through processing large quantities of health data from electronic medical records, which reduces the demand for human labor required to perform these tasks. On a current research project, I use granular data from emergency departments to study how using artificial intelligence (AI) to automate the processing of data can reduce administrative costs and improve care processes and outcomes.
Algorithms can also help with consumer choice in healthcare markets. I have research showing that recommendations provided by algorithms can improve individuals’ choices of health insurance plans, especially when insurance brokers also have access to such algorithms.
Algorithms that use AI also have the potential to improve public health outcomes because they allow for the analysis of complex personal health data in a way that enables better decisions at the population level. One of my published articles discusses how information technology tools applied to data from wearable devices can improve health behaviors.
Are there any concerns about the use of algorithms in healthcare?
Industry observers have raised concerns that using algorithms in healthcare can have unintended consequences. For example, some believe that the use of algorithms can undermine the doctor-patient relationship, reduce transparency, and result in misdiagnosis or inappropriate treatment because of errors within the decision-making algorithms that are hard to detect.
Concerns have also been raised regarding the privacy of confidential health data. Sharing data with vendors that provide AI software increases the risk of data breaches if appropriate safeguards are not in place.
A more recent concern is that algorithms can exacerbate existing racial or socioeconomic biases, worsening health inequities. Government entities are starting to pay attention to potential algorithmic bias. For example, the California Attorney General, Rob Bonta, recently announced that his office would open an inquiry into whether healthcare providers are using software that results in disparate impacts based on race and ethnicity. The Attorney General sent letters to hospital CEOs across California requesting information about how healthcare facilities and other providers are identifying and addressing racial and ethnic disparities in commercial decision-making tools.
Overall, algorithms have immense potential to improve healthcare. But, with algorithmic capabilities advancing at a lightning pace, it is also important to monitor the use of algorithms to ensure they do not cause harm in certain situations, whether in aggregate or for certain subgroups.
What are examples of algorithmic bias in healthcare?
Algorithmic tools can pick up human biases by relying on historical data and outcomes. Investigations into the effects of algorithms used in healthcare settings have found that they can perpetuate and amplify biases already existing within medicine, even when factors such as race or gender are not explicitly written into algorithms.
Additionally, algorithms can create discriminatory outcomes along racial lines if they rely on past use of the healthcare system as a proxy for future healthcare needs. For example, certain algorithms used for evaluating kidney health and allocating treatment have been found to disadvantage Black patients.
What kind of economic analysis can be done to evaluate algorithmic bias?
As an initial matter, one would need to be able to tell whether bias exists in a given situation, which empirical economic analyses can help evaluate. For example, one paper published in Science in 2019 studied a certain prediction algorithm used to identify patients who will benefit most from high-risk care management programs. Such programs are generally established with the purpose of caring for patients with complex healthcare needs. Based on a population of primary care patients enrolled in risk-based contracts at a large academic medical center between 2013 and 2015, the study found that at a given risk profile, Black patients were likely to be offered less care than White patients, consistent with the presence of bias. In other words, Black patients needed to be much sicker to be offered the same level of care as White patients.
The authors showed this by conducting a regression analysis, a common statistical technique that economists use to identify associations between factors and to predict outcomes based on past observations. The analysis showed that the outcome was tied to the algorithm’s use of past healthcare utilization to predict future healthcare needs. Because Black patients historically received less medical care, based on prior insurance claims data for the study population, the algorithm deemed them to need less care going forward at a given risk profile. Similar economic analyses could be performed in a wide range of situations to evaluate whether bias is likely to exist.
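The logic of that kind of analysis can be sketched with simulated data. The sketch below does not use the study's actual data or model; it builds a synthetic dataset in which a "bias" term is deliberately planted, then shows how an ordinary least squares regression of care offered on the risk score and a group indicator recovers a negative group coefficient — the pattern consistent with one group being offered less care at a given risk level:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic, illustrative data: an algorithmic risk score and a
# hypothetical group indicator (1 = disadvantaged group).
risk = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)

# Simulated outcome: care offered rises with risk, but the disadvantaged
# group is offered less care at the same risk score (planted "bias" of -0.8).
care = 2.0 + 5.0 * risk - 0.8 * group + rng.normal(0, 0.5, n)

# OLS regression of care offered on risk score and group membership.
X = np.column_stack([np.ones(n), risk, group])
beta, *_ = np.linalg.lstsq(X, care, rcond=None)
intercept, b_risk, b_group = beta

# A significantly negative group coefficient, holding risk fixed, is the
# pattern consistent with bias; here it recovers the planted -0.8.
print(f"group coefficient: {b_group:.2f}")
```

In an actual matter the outcome, risk measure, and controls would come from the record, and statistical significance would be assessed with standard errors; the sketch shows only the core identification idea.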
In addition to detecting algorithmic bias, economic analyses such as regression models are well suited to measuring the impact of such bias. This is particularly relevant to class actions, where economic analyses could be used to assess whether a common method exists to evaluate the impact of any algorithmic bias or whether individualized inquiry would be required. For instance, a regression model could be used to analyze the average impact of bias (if any) on certain outcomes. But if characteristics that modify this effect vary across putative class members, the impact may be heterogeneous and would be revealed only by analyzing the circumstances of individual members of the proposed class.
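One standard way to probe for heterogeneous impact, under the same simulated-data caveat as before, is to add an interaction term between group membership and another characteristic. If the interaction coefficient is nonzero, the impact varies across members rather than being a single common effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

risk = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)

# Simulated outcome in which the group effect varies with risk: the care
# gap widens for sicker patients (a planted heterogeneous impact of -1.0
# per unit of risk, on top of a -0.3 common effect).
care = (2.0 + 5.0 * risk - 0.3 * group - 1.0 * group * risk
        + rng.normal(0, 0.5, n))

# Adding a group-by-risk interaction lets the regression distinguish a
# uniform (common) impact from one that differs across class members.
X = np.column_stack([np.ones(n), risk, group, group * risk])
beta, *_ = np.linalg.lstsq(X, care, rcond=None)
_, _, b_group, b_interact = beta

# A nonzero interaction coefficient signals heterogeneous impact;
# here it recovers the planted -1.0.
print(f"interaction coefficient: {b_interact:.2f}")
```

Whether such heterogeneity defeats a common method is ultimately a case-specific question, but this is the kind of specification an economist would estimate to inform it.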
Another potential economic question in such cases relates to the net impact of an algorithmic tool. For example, an algorithm that promotes preventive care but prompts certain groups to use preventive care less than other groups might still be better for each subgroup than no algorithm. This would be the case if the algorithm still prompts every population subgroup to receive more preventive care than they would otherwise receive.
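The net-impact point can be made with simple hypothetical numbers: an algorithm may widen the gap between groups while still leaving every group better off than it would be without the algorithm. The visit rates below are invented purely for illustration:

```python
# Hypothetical preventive-care visit rates per 1,000 patients.
baseline  = {"group_a": 300, "group_b": 260}   # without the algorithm
with_algo = {"group_a": 420, "group_b": 340}   # with the algorithm

# The algorithm widens the gap between the two groups...
gap_before = baseline["group_a"] - baseline["group_b"]     # 40 per 1,000
gap_after = with_algo["group_a"] - with_algo["group_b"]    # 80 per 1,000

# ...yet every subgroup still receives more preventive care than it would
# otherwise, so the net impact can be positive for all groups.
every_group_better_off = all(with_algo[g] > baseline[g] for g in baseline)

print(gap_before, gap_after, every_group_better_off)  # -> 40 80 True
```

Disparity in gains and net harm are therefore distinct questions, and an economic analysis would typically need to address both.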
The views expressed herein do not necessarily represent the views of Cornerstone Research.