Above, hear from proposal co-filers Natasha Lamb of Arjuna Capital and Michael Connor of Open MIC on tech companies' role in preventing widespread societal harm from unchecked AI development.
Recent breakthroughs in generative artificial intelligence (gAI) have brought AI-based products into the mainstream, but they’ve also brought the potential to generate and spread disinformation at a previously inconceivable rate. Especially in 2024, a major election year in which over half the world’s population has a stake in more than 60 elections, the potential negative effects of disinformation on a global scale are staggering. At the same time, the companies that develop and deploy gAI products are exposing themselves to enormous risk.
Open MIC and our partners have organized a shareholder engagement campaign urging companies to take a closer look at how they mitigate these risks, in order to promote both public welfare and the long-term success of the companies themselves. The campaign involves shareholder proposals at Microsoft, Alphabet, and Meta recommending that the companies issue annual reports on the risks of misinformation and disinformation produced and amplified by their deployment of gAI. The proposals ask the companies not only to assess the material risks stemming from gAI products, but also to outline the steps they will take to mitigate potential harms from gAI-powered mis- and disinformation and to evaluate their effectiveness in doing so.
Microsoft shareholders were the first to consider the resolution, and over 21% voted in favor. Responsible Investor, a leading industry trade publication, called it an "impressive" result for a first-time proposal, showing that AI-generated misinformation is a key issue for shareholders. In spring 2024, we continued to build on that positive result at the meetings of Meta (May) and Alphabet (June) through an advocacy campaign targeting shareholders, asset managers and proxy advisors. Owing to multi-class share structures at both companies that concentrate voting power with company insiders, neither of the proposals officially passed. Nevertheless, strong support among independent shareholders (53.6% at Meta and 45.7% at Alphabet) suggests that these issues are a high priority for investors.
In July 2024, Open MIC, Arjuna Capital, and Ekō refiled the shareholder proposal at Microsoft, again urging the company to measure and report on the risks of misinformation and disinformation linked to their gAI products. Encouraged by strong votes at other AI companies, as well as Microsoft’s inaugural “Responsible AI Transparency Report,” Open MIC and partners seek to build momentum for more disclosure and remediation of gAI risks.
Campaign Updates
October 30, 2024: Exempt Solicitation Letter filed for Microsoft by Arjuna Capital
October 24, 2024: Microsoft releases 2024 proxy statement and sets date of annual meeting for December 10 at 8:30am PT
July 16, 2024: Shareholder resolution re-filed at Microsoft
June 7, 2024: 45.7% of Alphabet independent shareholders vote in favor of Proposal 12
May 29, 2024: 53.6% of Meta independent shareholders vote in favor of Proposal 6
May 23, 2024: ISS (Institutional Shareholder Services) recommends a vote FOR Proposal 12 at Alphabet
May 13, 2024: ISS (Institutional Shareholder Services) recommends a vote FOR Proposal 6 at Meta
May 7, 2024: Exempt Solicitation Letter filed for Alphabet, Inc. by Open MIC
April 29, 2024: Alphabet, Inc. releases 2024 proxy statement and sets date of annual general meeting for June 7 at 9am PT.
April 24, 2024: Exempt Solicitation Letter filed for Meta Platforms, Inc. by Arjuna Capital
April 19, 2024: Meta Platforms, Inc. releases 2024 proxy statement and sets date of annual general meeting for May 29 at 10am PT.
Proposal Co-filers
Full Shareholder Resolutions
Key Messages for Sustainable Investors
1. Unconstrained generative AI is a risky investment.
Developing and deploying generative AI without risk assessments, human rights impact assessments, or other policy guardrails in place exposes companies to financial, legal, and reputational risk.
2. Generative AI-powered false content is polluting our global information environment.
Not only do gAI chatbots inexplicably and erratically fabricate information, but they also make it easy for malicious actors to create and spread deceptive yet believable content faster, and with more precise targeting, than ever before.
3. Company commitments to self-regulation are not enough to prevent harms; by aligning policies with best practices, companies can avoid regulatory uncertainty.
Company commitments to rectify the impacts of their technologies and respect and promote human rights represent a baseline standard of good corporate practice. But integrating risk and human rights impact assessments into AI development from the start would do more to create a race to the top.
4. Investors are uniquely positioned to help companies understand how the market will value meaningful commitments to mitigate the risks of generative AI.
Asset managers and all other investors should use their power to push companies to align their AI development and deployment policies and practices with proposed regulatory guardrails, in order to secure the integrity of our information ecosystems.