FOR IMMEDIATE RELEASE
THURSDAY, MAY 23, 2024 | Over the next two weeks, shareholders at Meta and Alphabet will vote on first-time resolutions calling for more disclosure on the costs and risks of generative AI to the companies and to society, and on how the companies are mitigating those risks. To date, neither company reports on how well it adheres to the promises it makes with regard to AI, continuing a pattern established during the social media era.
The vote on Proposal 6 at Meta will take place on Wednesday, May 29, 2024, at 10 a.m. Pacific time/1 p.m. EDT.
The vote on Proposal 12 at Alphabet will take place on Friday, June 7, 2024, at 9 a.m. Pacific time/12 p.m. EDT.
Specifically, shareholders want the companies to issue an annual report that includes analysis of the risks to each company of pursuing generative AI, including financial, reputational, legal and regulatory risks. Both companies have acknowledged that the technology can contribute to the creation and spread of misinformation and disinformation. Now, investors want the companies to go a step further and report on how they are mitigating those risks and whether their efforts are working.
Co-filed by Arjuna Capital, Open MIC, and Ekō, the resolutions are notable for asking that the companies establish metrics and measure their efforts to safely deploy AI, in order to demonstrate that they are living up to the commitments they make. “As long-term shareholders, we want Meta and Alphabet to succeed over the long run,” said Natasha Lamb, Chief Investment Officer at Arjuna Capital. “Which means our companies must do what they can today to mitigate the generative-AI risks of tomorrow.”
Similar resolutions have earned growing support in recent months at Microsoft (21% of the vote) and Apple (37.5% of the vote). Supporters of the Microsoft proposal, which Responsible Investor called “impressive” for a first-time resolution, included Norway’s trillion-dollar sovereign fund, the office of the New York City Comptroller, and California public pension giant CalSTRS. The Apple proposal, which the company tried to exclude from its proxy ballot, focused on the risks AI presents to workers and was filed by the AFL-CIO.
Meta and Alphabet both oppose the resolutions, claiming that their current policies and self-regulation are adequate. In its opposing statement, Meta cited the adoption of responsible AI principles, board oversight, investments in combating misinformation and disinformation, the development of watermarking tools, and its provision of “visibility into the impact of our products.” None of the resources it lists, however, quantifies or qualifies the impact of the tens of billions of dollars invested in AI, the return on that investment, or the losses the company could incur if it is held liable for harms related to the deployment of these unpredictable tools. Observed and potential harms include copyright infringement and the undermining of elections with disinformation, along with the resulting reputational damage to the company. Arjuna Capital filed an exempt solicitation letter enumerating many of these risks.
“Meta and Alphabet, while rushing ahead with their AI programs, have unfortunately offered little transparency or accountability for the impacts of this complex and transformative new technology,” says Christina O’Connell, Ekō’s Senior Manager of Shareholder Engagement. “While both companies make policy claims of responsible development and deployment, we already see the missteps and failures of these very policies – and just this week, we have reported on Meta’s failure to live up to their election content promises when AI-generated election ads inciting violence against non-Hindu voters were approved for publication. Shareholders have a right to clear information on the risk and mitigations associated with this AI race.”
In its opposing statement, Alphabet, embarrassed earlier this year when its Gemini tool produced historically inaccurate images and other hallucinations, similarly cites the many frameworks, policies, and tools it has implemented “to hold ourselves accountable.”
In line with its position that the safe use of generative AI cannot rely on self-regulation alone, Open MIC responded to Alphabet’s statement with a letter to shareholders, urging a vote for the resolution:
Without consistent and regular accounting of how effective Alphabet’s AI frameworks, policies, and tools are, backed by established metrics, examples, and analysis, neither shareholders nor the public can determine the amount of material risk the Company has assumed as it invests tens of billions of dollars in developing the technology and the data centers needed to support it.
Influential proxy advisor ISS has recommended that shareholders vote for the resolution at Meta, asserting that “additional disclosure on how the company manages misinformation and disinformation risks related to generative AI would be beneficial for shareholders.” In addition, ISS cites an EU investigation into Meta “over concerns that the company has not done enough to protect upcoming EU elections or combat foreign disinformation on its platforms.” It also notes that the European Commission, echoing Proposal 6, is asking the company to “provide more information on mitigation measures for risks related to generative AI” and “is looking to assess the impact of generative AI on issues such as electoral processes and dissemination of illegal content.”
In its guidance, ISS also mentions Meta’s multi-class share structure “with disparate voting rights that is not subject to a reasonable time-based sunset.” Both Meta and Alphabet maintain dual-class shares that severely undermine common shareholders’ ability to exert influence, owing to the diluted value of their votes compared to those of insiders. Such share structures make it impossible for shareholders to win the vote on their proposals without support from insiders.
Even with dual-class shares stacking the deck against them, shareholders can send a strong signal to management by voting for the resolutions. For instance, while the resolution at Microsoft did not pass, the company did issue an annual Responsible AI Transparency Report that addressed, in part, the requests made in the resolution, including establishing some AI metrics.
After the votes take place, the companies must file preliminary tallies with the SEC within four business days of the meeting. Open MIC will publish the results on its campaign page (https://www.openmic.org/generative-artificial-intelligence-misinformation-and-disinformation).
For more information, please contact:
Jessica Dheere
Advocacy Director, Open MIC
(202) 330-3637
jdheere@openmic.org
Michael Connor
Executive Director, Open MIC
(917) 846-7608
mconnor@openmic.org