Wall Street is betting big on AI. California consumers must be protected
Guest Commentary written by
Justin Kloczko
Justin Kloczko is a tech and privacy advocate for Consumer Watchdog.
Artificial intelligence may soon replace financial advisors, and that’s just the start. While one algorithm predicts stock prices, another decides what to disclose to regulators. ChatGPT-style language models could soon hallucinate your mortgage application or life savings, and you won’t even know it.
I spent several months combing through U.S. Patent and Trademark Office filings to see how the world’s most powerful investment banks are incorporating artificial intelligence, and it’s a scary glimpse into the future.
JPMorgan has applied for a trademark called “Index GPT” for an AI that would give financial advice, and another that would match companies with investors. Goldman Sachs is seeking to patent an AI that combines all the data a trader would need to predict stock prices, and another that predicts a hedging portfolio. ING Group is already using AI to screen for potential defaulters.
There is even AI to translate “Fedspeak” so banks can tell if statements by regulators are “dovish” or “hawkish.”
Banks are pouring billions of dollars into AI research, patents and financing without adequate safeguards. A lot of attention is devoted to tech companies developing artificial intelligence, but Wall Street banks are just as interested: Financial services spends more on AI than any other industry, even tech.
We still know little about generative AI. Even the engineers and coders who build these models don’t fully understand how they work. Unlike with previous models, OpenAI decided not to disclose the training data for GPT-4. ChatGPT has scraped more than 300 billion words from the internet, and there are concerns that it sucked up personal information along the way. OpenAI itself says its language model carries a “high risk of economic harm” due to a “tendency to hallucinate,” and should come with “a disclaimer.”
Despite these concerns, JPMorgan said that data and AI will be “critical” to the company’s future success. It currently has more than 300 use cases for AI in production.
But the lack of transparency surrounding AI, and its potential for bias, means it could push risky investments and loans, or hallucinate bad advice on managing debt, without a consumer even knowing AI was involved.
Without sound regulation, the next financial crisis could be caused by AI, igniting in the mortgage or equity markets because a handful of banks rely on the same algorithms. Don’t just take it from me. Much smarter people, like Gary Gensler, chair of the U.S. Securities and Exchange Commission, have predicted that within 10 years AI will be responsible for some sort of financial crisis.
The major concerns are algorithmic complexity, a lack of transparency and biased or false information. For example, Goldman’s AI could be used to create and price out a new type of derivative. These complicated financial instruments enabled Wall Street to gamble with billions of dollars on the housing market. As a result, 6 million Americans lost their homes during the Great Recession.
How else can AI impact our lives financially? AI can also start to “drift,” meaning it can deviate from its intended use and perpetuate bias. An AI could start thinking a certain race or address equals bad credit and deny loans based on that. This has already happened.
Thankfully, California is trying to rein in AI. Right now the California Privacy Protection Agency is drafting automated decision-making rules under the California Consumer Privacy Act, which would safeguard against AI’s risks and biases regarding personal data. For example, a business would have to tell consumers that it uses automated decision-making, provide details about the algorithm’s logic, and give them a chance to opt out of the decision, especially if it involves their finances.
And for the first time, there are rules dedicated to generative AI and training data. Businesses using language models will have to disclose if those models use personal information to train AI, and consumers will be able to opt out.
America is falling behind on AI regulation. It cannot be slow to act, as it was with social media, when both sound regulation and innovation are possible.