Misinformation in the Age of Artificial Intelligence and What It Means for the Markets


In mid-October, bitcoin’s price briefly spiked by 5% after a false Cointelegraph post on X stated, “SEC approves iShares bitcoin spot ETF.” The tweet remained live for 30 minutes before being updated to add the word “reportedly.” It was later removed, and Cointelegraph apologized for what it said was “a tweet that led to the dissemination of inaccurate information.”

According to a Lookonchain post on X, some crypto traders suffered losses after buying bitcoin on the strength of the tweet. As volatility returned to the market, 40,723 traders were liquidated within 24 hours for a total of $182.4 million in cryptocurrency, according to CoinGlass data; roughly $100 million of that was tied to bitcoin.

In mid-November, a filing suggesting that BlackRock was preparing to launch a spot XRP exchange-traded fund (ETF) sent XRP’s price soaring by more than 10% within minutes. BlackRock quickly confirmed that it had no such plans, indicating that the paperwork was fake.

The Cointelegraph post was a reporter’s unintended mistake. The fake filing, on the other hand, appeared to be a deliberate attempt to manipulate the market. It could have been created with a generative artificial intelligence (AI) application; we do not know for certain, but we cannot dismiss the possibility.

False news about critical economic or political events can move markets even more profoundly, stoking fears that may ignite a broad selloff.

In May, an ominous image of black smoke billowing from what appeared to be a government building near the Pentagon set off investor fears and sent stocks tumbling. The picture first appeared on Facebook and quickly spread to Twitter via accounts with large followings, including ZeroHedge and the Kremlin-controlled RT.

Within minutes, internet sleuths began to debunk the image as a fake, most likely cobbled together with AI. Soon after, ZeroHedge and RT deleted it from their accounts, Facebook blocked access to the original post, and markets swiftly recovered. The incident underscores how quickly even an unsophisticated spoof can spread mis/disinformation and make an impact.

It illustrates one of the big fears behind the government’s zeal to regulate AI: that the technology could be used to stoke panic and sow disinformation with potentially disastrous consequences.

Mis- and disinformation are nothing new. Unscrupulous behavior has been around since the dawn of civilization, but AI, and generative AI in particular, makes it easier for bad actors to produce disinformation of higher quality. Combined with the ubiquitous use of social media, this makes the dissemination of disinformation quicker and its impact more profound. How can we prevent mis/disinformation generated by AI, or at the very least mitigate its impact?

Misinformation versus Disinformation

Misinformation is false or inaccurate information – getting the facts wrong. The Cointelegraph post is an example of misinformation: it represented the facts inaccurately. Even though the error was not intentional, it still provoked a reaction from traders.

Disinformation is false information which is deliberately intended to mislead – intentionally misstating the facts. The fake image of black smoke billowing from what appeared to be a government building near the Pentagon is an example of disinformation, and so is the fake BlackRock filing for the XRP ETF.

AI can also generate false information unintentionally – a phenomenon called “hallucination.” When AI is given a task, it is supposed to generate a response based on real-world data, but in some cases it fabricates facts or sources instead – it is “hallucinating.”

“Hallucination” can be thought of as misinformation. It is something within our control, since it depends on how AI models are developed and trained, and developers should be held accountable for hallucinations that cause harm. But when unscrupulous people intentionally generate false information with AI, that is disinformation. That behavior is harder to control, but its impact can be prevented, or at least mitigated.

Disinformation and its Ramifications on the Financial Sector and Financial Markets

Some experts estimate that the financial industry accounts for as much as 20-25% of the global economy. Given the sizable cyber risk and the regulatory requirements characteristic of the industry, financial institutions have long been at the forefront of cybersecurity best practices. They also face some of the savviest and most creative cyber threat actors. Here are some of the issues they face:

Reputational Risks: An institution’s reputation is highly vulnerable when it falls victim to mis/disinformation or a deepfake attack. The sensitive customer data held by financial institutions is highly valuable, and with low barriers to entry, the incentives for cyber criminals run high. Generative AI can produce convincing text using large language models (LLMs), fake images including ID cards, fabricated customer or transaction records, and can automate fraud by creating multiple fake identities.

Vulnerability in Content-Based Authentication Protocols: As banks increasingly rely on voice ID, fraud becomes easier for cyber criminals who can generate audio deepfakes with voice-cloning models. As of this writing, there are over 70 open-source voice cloning software programs on GitHub, amplifying the availability and ease of access to such technology. If customers become aware of how easily front-line security measures can be bypassed, their trust in the bank may erode, which could even lead to bank runs.

Risk to Credit Ratings: Credit ratings can be damaged by the spread of false narratives. Fraud claims and fraudulent activity can become even more prevalent given the ease of use of LLM applications, and customers who are misled can in turn affect the standing of the bank.

Market Manipulation: Deepfakes can also harm banks and financial institutions through stock manipulation. Audio or video deepfakes can be used to falsify an endorsement of a product or a company, and, as with the fake Pentagon image discussed above, malicious actors can generate fake imagery of critical negative events to ignite a market selloff. The speed at which information spreads is crucial to its impact: with bots paired with a fake audio or video recording, rumors can appear credible very quickly and easily.

Scaled and Accelerated Cybercrime: Thanks to LLM applications such as ChatGPT or Google Bard, attackers can produce technical code without necessarily learning new skills. Such models also make business email compromise easier, as criminals can readily mimic someone’s writing style.

Prevention of Mis/Disinformation

Content provenance, transparency, and authenticity: Financial institutions can strengthen authentication protocols by using secure content provenance technologies for any content-driven authentication process (e.g., uploading a photo of a driver’s license). The Coalition for Content Provenance and Authenticity (C2PA), founded by Microsoft, Adobe, Truepic, BBC, Arm, and other industry members, has published an open standard for securing content provenance using cryptography.

Several of the C2PA’s member organizations provide solutions to help capture, secure, and preserve content provenance data (where, when, and how a piece of content was created). Users can take a picture or video within a secure platform so that metadata is transparent and tamper-evident. The content created will have proof of authenticity.
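To make the idea concrete, here is a minimal sketch in Python of how provenance data might be cryptographically bound to a piece of content at capture time. It illustrates the general principle rather than the C2PA specification itself; the function name and metadata fields are hypothetical, and it assumes the third-party cryptography package is installed.

```python
# A minimal sketch of tamper-evident content provenance, in the spirit of
# C2PA. This is NOT the C2PA specification, only an illustration.
# Assumes: pip install cryptography
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_provenance(content: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Bind capture metadata (where, when, how) to the content by signing both."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,  # hypothetical fields: device, capture time, etc.
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": key.sign(payload).hex()}


# Usage: a secure capture platform would sign at creation time.
key = Ed25519PrivateKey.generate()
claim = sign_provenance(
    b"<raw image bytes>",
    {"device": "phone-123", "captured_at": "2023-11-15T09:00:00Z"},
    key,
)
```

Because the signature covers both the content hash and the metadata, neither can be altered after capture without invalidating the claim.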

Deepfake Watermark Identification: Another solution: when AI images are generated, the generating algorithm issues them a watermark that immediately signifies the creation as non-authentic and AI-generated. Better yet, this should be a cryptographic stamp verified and authenticated with blockchain technology, a solution I discussed in an article published in April.

The stamp would identify the image or video as AI-generated content, disclosing the fact that it is synthetic. The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing in trusted software. Truepic and Adobe, both members of C2PA, provide such solutions.
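Verification is the mirror image of signing: a checker recomputes the content hash from the file it actually received and validates the signature, so any edit to the pixels or the sealed metadata breaks the seal. Continuing the hypothetical sketch above (an illustration, not Truepic’s or Adobe’s actual implementation):

```python
# Continuing the sketch above: verification recomputes the digest from the
# file actually received, so any tampering invalidates the signature.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_provenance(content: bytes, claim: dict, public_key: Ed25519PublicKey) -> bool:
    record = dict(claim["record"])
    # Recompute the digest; a tampered file will not match the signed record.
    record["content_sha256"] = hashlib.sha256(content).hexdigest()
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(claim["signature"]), payload)
        return True  # content and metadata match what was originally signed
    except InvalidSignature:
        return False  # seal broken: the credentials should not be displayed


# Using `claim` and `key` from the signing sketch above:
assert verify_provenance(b"<raw image bytes>", claim, key.public_key())
assert not verify_provenance(b"<edited image bytes>", claim, key.public_key())
```

Even a single changed pixel produces a different SHA-256 digest, which is why a tampered file fails verification and trusted software declines to display its credentials.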

Open standards for content authenticity like C2PA can make this kind of collaboration easier, but they require wider adoption by more companies and organizations. Adopting new technology, even an open standard, can take time, especially when decision-makers have limited technical vocabulary and knowledge.

This calls for responsible innovation.

Responsible Innovation (not just for AI)

Responsible Innovation means making new technologies work for society without causing more problems than they solve. The concept applies to all technologies and innovations, across all organizations, industries, and regions. If innovators innovate responsibly – making sure an application is safe and secure, protects users’ privacy, and provides transparency and accountability – the result is a trustworthy service or product.

Technology advances exponentially. More technologies will emerge, and existing ones will evolve and advance. They all come with their benefits and challenges, and require the implementation of responsible innovation. Regulation and legislation, though, will not be able to keep up with the pace of innovation.

But innovators should not halt innovation and wait for regulators. They should simply adopt a Responsible Innovation mindset, understanding that being responsible will earn the trust of their stakeholders and regulators. By innovating responsibly, we can establish guardrails and prevent, or at least mitigate, the ramifications of mis/disinformation on financial markets and the global economy at large.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.




