Shareholders at Alphabet and Meta, following the success of a similar resolution at Microsoft last month, have filed shareholder proposals recommending that the companies issue annual reports on the risks of misinformation and disinformation produced and amplified by their deployment of generative artificial intelligence (gAI). All three companies have made multibillion-dollar investments in gAI.
The proposals ask the companies not only to assess the material risks that gAI-powered mis- and disinformation pose to their business operations and to public welfare, but also to outline the steps they will take to mitigate those potential harms and to evaluate the effectiveness of their mitigation efforts.
View the full Alphabet and Meta shareholder proposals
“Artificial Intelligence offers great promise — and we should be excited about that — but there’s also enormous concern about how it can be abused,” said Michael Connor, Executive Director of Open MIC. “In a world where democratic institutions are already threatened by online mis- and disinformation, Alphabet and Meta need to assure billions of users and their shareholders that their management and boards are up to the task of responsibly managing the technology.”
Microsoft shareholders were the first to consider the resolution, which appeared as Proposal 13 on the company’s proxy statement. Submitted by Arjuna Capital (lead filer for all three resolutions), Azzad Asset Management, Ekō, and Open MIC, it was presented by Nirvana co-founder and bassist Krist Novoselic at the company’s annual general meeting (AGM) on December 7 and garnered 21% of the shareholder vote. Responsible Investor, a leading industry trade publication, called it an “impressive” result for a first-time proposal and a sign that AI-generated misinformation is a key issue for shareholders.
The resolutions give voice to widespread shareholder concerns that gAI foundation models, such as GPT-3 and GPT-4, on which applications like ChatGPT are built, will accelerate the creation and dissemination of false information, with potentially dire consequences for this year’s more than 50 elections worldwide, including the U.S. presidential election.
In January, Eurasia Group ranked generative AI as the third-highest political risk confronting the world in 2023, warning that the new technologies “will erode trust, empower demagogues and authoritarians, and disrupt businesses and markets.” Just this fall, the spread of AI-generated content in Slovakia and Argentina contributed to the manipulation of public opinion, undermined trust in institutions, and may have swayed elections.
Such examples validate predictions by some of the world’s leading AI thinkers, including Sam Altman, the briefly embattled CEO of OpenAI, who earlier this year said he was “worried that these models could be used for large-scale disinformation.” An information environment made murky by gAI-powered mis- and disinformation also risks undermining the integrity of public health, financial markets, and other systems on which stable, equitable societies depend. Even the perception that mis- and disinformation pervade the media environment fuels distrust in all information.