We interview Nobel Laureate David Card, professor at the University of California, Berkeley, to gain his perspectives on analyzing alleged discrimination in pay and promotions, evaluating complex university admissions criteria, and using artificial intelligence (AI) tools to quantify unstructured, nuanced data.
-
You were awarded the Nobel Memorial Prize in Economic Sciences in 2021. Tell us a bit about the research for which you were awarded that prize and your research studying discrimination.
I was awarded the Nobel Prize for my empirical research in labor economics. Part of that work focused on education research, part of it on minimum wages, and part on immigration issues. More broadly, the award recognized the idea of using detailed empirical data to evaluate alternative explanations for phenomena in the labor market.
This empirical approach extends directly to the work I’ve conducted over the course of my career on gender discrimination and racial discrimination, and on how discrimination can play out in academia and the labor market.
One example of this work looks at how firm-level wage differences affect the pay gap between men and women. This research finds that firm-specific wage premiums are an important source of wage inequality. Women tend to work at firms that, on average, pay slightly lower wage premiums than men. Taking this firm-level difference into account can explain 25 percent to 35 percent of the gross pay differential between men and women who are working full-time.
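The decomposition described above can be illustrated with a minimal back-of-the-envelope calculation. All the numbers below are hypothetical; only the 25-to-35-percent share comes from the interview.

```python
# Hypothetical illustration of the firm-premium decomposition.
# Numbers are invented for the example; only the 25-35% share is from the text.
gross_gap = 0.20           # hypothetical gross log-wage gap between men and women
firm_premium_men = 0.12    # hypothetical average firm wage premium at men's employers
firm_premium_women = 0.06  # hypothetical average firm wage premium at women's employers

# The part of the gap attributable to where men and women work
firm_component = firm_premium_men - firm_premium_women
share_explained = firm_component / gross_gap

print(f"Share of gap due to firm sorting: {share_explained:.0%}")  # 30%, inside the 25-35% range
```

The point of the sketch is that a sizable slice of the gross differential can reflect sorting across firms rather than unequal pay within a firm.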
This finding raises a major question that researchers on the frontier right now are trying to answer: Does this evidence suggest that women are purposefully being excluded from jobs at the highest-paying firms and organizations? Here, we’re comparing scenarios like a top-tier investment bank versus a regular banking company, or a highly selective research university versus a public university, or a leading technology company versus a typical computer company.
I’ve also conducted research on gender differences in the career advancement of academics, including studies focusing on prestigious accolades, such as appointments to fellowships of economic societies, the National Academy of Sciences, or the American Academy of Arts and Sciences. This work involves assembling the best possible data on candidates’ achievements, such as publication records and citations, to evaluate the extent to which those factors explain differences in the outcomes.
This type of analysis is similar to the work I did in Students for Fair Admissions Inc. v. President and Fellows of Harvard College et al., for which we had detailed information on applicants and sought to explain differences in admission rates.
-
As a testifying expert in high-stakes litigation, you are called upon to apply your economic expertise to real-world disputes. How does an economist approach the question of whether a party is engaging in discrimination against a specific group? What kinds of data and evidence are you looking for?
When we observe a difference in outcomes between groups—whether it’s in pay, promotions, or admission to a university—the fundamental economic question is whether that difference is due to discriminatory behavior or simply reflects the fact that candidates have different qualifications or characteristics.
The standard approach in labor economics, a methodology that has been refined since the 1970s, is to rigorously account for these differences. We seek to collect the best possible data on, for example, the relevant productivity characteristics of workers, the academic qualifications of students, or the career productivity of candidates being considered for a promotion.
The objective is to determine if, once all these objective factors are taken into consideration, there is any unexplained difference in the outcomes between, say, men and women, or—to return to the Harvard case example—between white and nonwhite candidates. This modeling issue is central to all contexts, including promotions, pay rates, and university admissions.
For the economist, this involves a dual effort: obtaining reliable, detailed, and precise information about candidate differences and developing a robust model that accurately reflects how those factors influence the outcome at issue. Ultimately, the finding will depend on the strength of the data we have and the extent to which nondiscriminatory characteristics account for the observed difference in outcomes.
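The standard approach described above can be sketched as a regression of the outcome on a group indicator plus controls: the coefficient on the indicator, after conditioning on the nondiscriminatory characteristics, is the "unexplained" difference. This is a minimal simulation with made-up data, not any analysis from the cases discussed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: one productivity characteristic and a group indicator.
experience = rng.normal(10, 3, n)
female = rng.integers(0, 2, n)

# By construction, wages here depend only on experience -- no discrimination.
log_wage = 1.0 + 0.05 * experience + rng.normal(0, 0.1, n)

# OLS of log wage on a constant, the group indicator, and the control:
# the coefficient on `female` is the unexplained gap.
X = np.column_stack([np.ones(n), female, experience])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

print(f"Unexplained gap estimate: {beta[1]:.4f}")  # near zero by construction
```

In real applications the hard part is the data: if relevant qualifications are omitted from the controls, they end up in the "unexplained" coefficient, which is why the interview emphasizes collecting the best possible measures of candidates' characteristics.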
-
In college admissions cases that rely on complex statistical models to analyze admissions outcomes, what is the biggest challenge in using data to model a process that is so multifaceted?
Modeling a multifaceted admissions process, especially when a school employs a “whole person evaluation,” requires a truly complete dataset that reflects its full complexity.
While some admissions systems may rely on a formulaic plan, others involve a highly nuanced process. In these situations, the data available to the admissions officers, which can include subjective elements and subtle context, may not be accessible to the outside analyst. It often requires significant effort to pull this comprehensive information out of the system.
Admissions decisions are rarely based on a single dimension. Instead, admissions officers look at a combination of factors, such as test scores, high school performance, extracurricular activities, leadership skills, and awards—factors that translate into a potential ability to contribute to campus life. Officers also consider contextual factors such as differences in family backgrounds or whether an applicant has had to overcome adversity.
In a legal setting, if a university is trying to defend the legitimacy of its process, the most effective approach would be to analyze the ultimate success of the students who are admitted. For instance, if a university can show that students admitted under specific nonquantitative criteria successfully complete a degree in a desired field, or go on to have impactful careers or leadership roles after graduation, that evidence would provide meaningful support for the use of those criteria.
-
In recent years, there have been cases alleging race, gender, and age discrimination via algorithms used in various contexts—applicant screening during the recruiting or hiring process, for example. Is the economic approach to answering such questions the same?
The fundamental economic approach is similar, as the core task remains to compare outcomes while rigorously accounting for all nondiscriminatory factors. However, the presence of an algorithm introduces a new layer of concern about preexisting bias that may be embedded in the system’s input data.
One way to illustrate this is through the criminal justice system. Consider an algorithmic sentencing or bail decision program. This program might use past offenses, including information on whether a person had been stopped or brought in by police, even if they were not arrested or charged. If there was previous discrimination or bias in the policing system, that bias is now passively incorporated into the algorithm’s inputs, which then affects its output. The algorithm itself becomes a vehicle for propagating old biases.
A comparable problem arises in other contexts, such as hiring or university admissions. For instance, if a hiring algorithm heavily weighs metrics that depend on a candidate’s prior access to specific high-cost training or certifications, and access to those resources is highly unequal across groups, then applying that seemingly objective rule will result in disparate outcomes. The algorithm looks objective, but the underlying access to the things it measures is systematically different for various groups.
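The mechanism described above can be shown with a toy simulation: two groups with identical underlying ability but unequal access to a costly certification, and a screening rule that keys on the certification. All parameters here are invented for illustration.

```python
import random

random.seed(1)

def applicant(group):
    """Draw one applicant: identical ability distribution for both groups,
    but group-dependent access to a costly certification (rates are hypothetical)."""
    ability = random.gauss(0, 1)
    access = 0.8 if group == "A" else 0.2
    certified = random.random() < access
    return ability, certified

pool = [(g, *applicant(g)) for g in ["A", "B"] for _ in range(10_000)]

# "Objective" screening rule: require the certification.
passed = {g: sum(1 for grp, _, cert in pool if grp == g and cert) for g in ["A", "B"]}
print(passed)  # group A passes roughly four times as often, despite identical ability
```

The rule itself never looks at group membership, yet it produces sharply disparate pass rates because the input it relies on encodes unequal prior access—the same way a policing record can carry earlier bias into a sentencing algorithm.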
-
Looking ahead, where do you see the most significant opportunities for economists to apply modern empirical methods, new AI tools, and advanced data analysis to resolve complex labor and employment disputes?
Significant opportunities lie in applying large language models (LLMs) and other AI tools to analyze the vast amounts of nuanced, textual information that has previously been difficult to quantify.
In areas like hiring or admissions, decision-makers often rely on detailed textual information that is not easily translated into a number, such as letters of recommendation, essays, or performance reviews. State-of-the-art LLMs make it possible to assess the content of these materials in a realistic time frame.
This approach has already proven valuable in research. In my own work in the medical field, for example, I’ve found that using LLMs to review physicians’ notes can provide information about a patient that would not be obvious from other quantitative metrics.
Ultimately, new AI tools allow us to quantify unstructured data, fundamentally improving the precision of complex models that rely on soft information. Processing this nuanced, qualitative material is a major breakthrough. Successfully advancing this work will continue to hinge on applying rigorous empirical methods to evolving technologies, potentially unlocking even deeper insights into what drives labor, employment, and higher education admissions outcomes.