In summary

AB 13 would set criteria to minimize unfair, biased or discriminatory decisions by artificial intelligence systems used by government.

By Ed Chau

Assemblymember Ed Chau, a Democrat from Monterey Park, represents Assembly District 49, Assemblymember.Chau@assembly.ca.gov. Chau introduced Assembly Bill 13 and is chair of the Assembly Committee on Privacy and Consumer Protection.

Debra Gore-Mann, Special to CalMatters

Debra Gore-Mann is president and CEO of The Greenlining Institute, debra.goremann@greenlining.org. The Greenlining Institute is sponsoring Assembly Bill 13.

You can’t see algorithms, but they can impact huge parts of your life, from seemingly minor things like what video YouTube will queue up next to life-and-death issues such as whether you can get a COVID-19 vaccination. It’s time we all had a better idea of how algorithms impact us, particularly when the government is using them.

An algorithm is simply a set of rules and instructions used by a computer program to perform a task or solve a problem. While algorithms themselves are coldly mathematical, they are created by humans who, like all of us, can have blind spots, biases or preconceptions. And that can lead to algorithms that make bad decisions or even perpetuate racial and gender bias. 

These algorithms feed into an artificial intelligence framework where machine learning makes decisions and predictions from data about people – decisions previously made by people. According to PwC research, artificial intelligence could contribute $15.7 trillion to the global economy by 2030.

The Greenlining Institute recently released an analysis of the problem, titled Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination, which included some startling findings. The report reviews a number of incidents that have made it into the media in which algorithms perpetuated discrimination based on race, gender or income – and those reports represent just the tip of the iceberg, because most algorithms operate in the background, unseen and unknown by those whose lives they impact.

Some of the most disturbing reports have involved government programs, including an Arkansas Medicaid algorithm that wrongly cut off medical and nursing home benefits to hundreds of people. Another, used in Detroit, perpetuated old, discriminatory patterns of redlining by channeling community development funding away from the very neighborhoods that needed it most – literally a case of algorithmic redlining.

In February, the New York Times reported serious issues with an algorithm the federal government uses to manage COVID-19 vaccine allocations: “The Tiberius algorithm calculates state vaccine allotments based on data from the American Community Survey, a household poll from the United States Census Bureau that may undercount certain populations – like undocumented immigrants or tribal communities – at risk for the virus.”

Equally concerning, the New York Times quoted researchers and health officials who are frustrated at how little they know about how the Tiberius algorithm decides how many vaccine doses to send where, describing it as “a black box.”

When government makes decisions that affect our daily lives, our communities and potentially even our very survival, those decisions should not be made in a black box.

That’s why we’ve worked to develop legislation that will begin to bring transparency to the use of algorithms by government agencies in our state. If passed, Assembly Bill 13 would set forth criteria for the procurement of high-risk automated decision systems by government entities in order to minimize the risk of adverse and discriminatory impacts resulting from their design and application. The bill is scheduled to be heard in the Assembly Committee on Privacy and Consumer Protection on April 8.

Specifically, the bill would require a prospective contractor to submit an Automated Decision System Impact Assessment evaluating the privacy and security risks to personal information, as well as the risk of inaccurate, unfair, biased or discriminatory decisions impacting individuals. It would also require the contracting entity to produce an accountability report after awarding the contract, including a detailed mitigation plan for identifying and minimizing the potential for any disparate impacts, and would make both the assessment and the report available to the public.

At their best, algorithms can do a tremendous amount of good. They have the potential to make decision-making faster, fairer and more data-driven. But we’ve ignored their dark side for too long.

It’s time to shine some light in that black box. Let’s make sure that when government uses algorithms to make decisions on everything from health care to public benefits, we know what’s happening and that the process is accurate and fair.
