Although the market recovered almost as quickly as it lost those billions on Monday morning, it’s unclear what effect the fake images – and the mostly fake accounts that spread them – had on the market’s overall results for the day. It’s also unclear how much money disappeared in the form of transaction fees charged to funds, including many pension funds, whose investors pay a fee each time the fund buys or sells shares.
Most of the moves in the stock market were probably not generated by human beings hitting the panic button for fear of a possible catastrophe; most stocks are not traded by human beings at all. Massive moves, like the one that sent the S&P plummeting and then rebounding on Monday, are handled by a different kind of AI – automated trading systems that constantly scan information from every direction.
But this situation was not completely free of human beings. Someone ordered these images from Midjourney or a similar AI-based image generator. Someone put them on social media. Someone probably triggered the market drop.
But none of these human beings was essential to the event. With half a day of coding or less, it would be entirely possible to build a crisis bot that sifts through the news, orders up images of a plausible disaster, uploads them to social media, boosts them with thousands or tens of thousands of retweets and links from seemingly authoritative accounts, and presents them in a way tailored to trigger a response from the bots that trade the stock market, the bond market, the commodities market, or pretty much any other corner of the economy.
It could run regularly, at random, or on targeted occasions. It could be far more convincing than these two images – and far harder to refute. Whether what happened on Monday was a trial balloon, an act of cyberwar, or someone just messing around, we should take the results very, very seriously.
Two easily debunked fake images made $500 billion briefly disappear. Next time, the images may be more plausible, the cast more authoritative, and the effect more lasting.
There is also no reason to think that future damage from AI will be limited to the economy. Despite dire warnings in 2016 and 2020, those elections remained largely free of “deepfake” videos and audio recordings made with altered voices. That will not be the case in 2024. You can bet on it.
Everything that used to require at least minimal know-how and a few hours of effort is now much, much easier. It’s so easy, in fact, that ordinary scammers can spoof not just a phone number but the voice of a friend or relative when they call to explain why they desperately need a cash injection.
The next time someone produces a tape like the 2012 recording of Mitt Romney spilling his guts to millionaire donors, or the 2016 Access Hollywood video of Donald Trump, how will you know whether it’s real? Candidates will simply declare any unflattering revelation false. If someone sent Fox News a video today purporting to show Joe Biden cutting a deal with China to give up Taiwan in exchange for $1 billion, do you think they wouldn’t air it? Imagine the fictions that could be created and attributed to Hunter Biden’s laptop.
Given enough time, experts can determine whether an image, video or audio recording is fake – but not before it has spread widely. Every rebuttal can be answered with more counterfeits. And all the debunking in the world will neither sway people with an ideological interest in believing these fakes nor stop the fakes from spreading.
What happened on Monday happened so quickly that it was easy to miss, and even easier to ignore. We can’t afford to do either.
When AI company executives appeared before Congress last week, they practically begged for regulation.
Right now, human beings both write and understand the code behind the large-model, limited-use AIs that dominate the news cycle. Even so, it’s impossible for humans to understand the decisions these systems make, decisions that emerge from the interaction of the millions, if not billions, of documents they’ve been fed. Very soon, our understanding will not even extend to the code itself, as the code will be written and modified by other AI systems.
The threat from these systems is not a distant concern. This is not a sci-fi scenario involving Skynet or the robot uprising. It’s a here-and-now problem in which these systems are already powerful enough to eliminate millions of jobs, change the direction of the economy, and influence the outcome of an election. Like a hammer, they are tools. Like a hammer, they can also be weapons.
Until we put regulations on these systems, we’re all part of the experiment, whether we like it or not. And if we don’t put that regulation in place almost immediately, chances are it will be too late.