The California Privacy Protection Agency is currently drafting regulations, mandated by voters in 2020, that, if done effectively, will increase transparency around algorithms and help consumers opt out of automated decision-making. But the law is under attack from tech giants and special interest groups.
Getting denied a home loan, never hearing back from that job you applied for – these things happen all the time, but more and more it’s likely an algorithm making the decision. And increasingly it’s making unfair ones, instantaneously and out of view.
Throughout this decade, 85% of algorithms will produce flawed results due to bias, according to the American Civil Liberties Union. In 2019, an algorithm denied mortgages to high-earning Black applicants with less debt more often than to high-earning white applicants with more debt, The Markup reported. Even when living in the same areas, people of color were more likely to be denied loans than white people with similar financial profiles.
It’s usually low-income individuals, people of color, women, religious groups or people with disabilities who suffer the most from automated decisions.
Algorithms are everywhere, and we know little about them. Even Fannie Mae and Freddie Mac’s regulator, the Federal Housing Finance Agency, only broadly understands the lenders’ algorithmic logic. Until recently, companies didn’t have to disclose how these automated decisions were being made.
Under Proposition 24, which voters approved in 2020, the California Privacy Protection Agency, or CPPA, is required to issue regulations requiring businesses to disclose the logic behind algorithms and giving consumers the ability to opt out of automated decisions. Those regulations are in the early stages of being drafted.
If the regulations are effectively drafted, California residents will be able to stop algorithms that profile them and affect their finances, access to education or employment. For example, a job application algorithm used by an employer to automatically assess and rank applicants according to their names, addresses, gender and disabilities is profiling, and Californians should be able to opt out of that automated decision-making.
Or, say a financial lending company uses a certain age bracket as a reason not to analyze a credit application further. Since a person’s age is irrelevant when applying for credit and a denial significantly affects applicants, consumers should be able to stop such criteria under automated decision-making regulations.
Under the new law, Californians will also be able to pull back the curtain on algorithms and learn about their logic. They have the right to know the personal data processed, the automated decision’s consequences for the subject, and any assigned categories, labels or rankings. Consumers deserve not just meaningful information, but meaningful explanation. If an employer is creating an algorithm to make predictions about potential employees’ behavior and reliability on the job, applicants deserve to know about it, as well as be able to stop their personal data from being used to automatically categorize and predict things about them.
An Amsterdam court reached the same conclusion under the General Data Protection Regulation, the data privacy law on which the California Consumer Privacy Act is modeled.
But the law is under attack by large corporations seeking to delay it and spread misinformation. The California Chamber of Commerce, which counts Amazon, Google and Meta as members, filed a lawsuit to delay the law’s implementation. On another front, CTIA and TechNet, trade groups pushing the interests of these very same tech companies that hold a monopoly over our personal data, said the CCPA doesn’t force businesses to honor automated decision-making opt-out requests at all.
The U.S. Chamber of Commerce, one of the most powerful lobbying groups in the country, wrote in recent comments to the privacy agency that the agency is not statutorily authorized to create opt-out rules for automated decision-making.
However, California voters who passed Prop. 24 were clear about what the law requires. Interpreting it otherwise is antithetical to the voters’ intent, which was to give people more control over their personal data, not less.