As part of our ongoing campaign to address AI-generated misinformation and disinformation, Open MIC co-filed a shareholder resolution at Alphabet requesting an annual report on the risks of misinformation and disinformation facilitated by generative AI (gAI). The proposal is up for a vote as Proposal 12 in Alphabet’s 2024 Proxy Statement.
In response to Alphabet’s recommendation to vote “against” the proposal, Open MIC filed an exempt solicitation letter with the SEC, again making the case for increased transparency and active risk mitigation to prevent widespread harms due to misinformation and disinformation resulting from generative AI products.
In its opposing statement to the proposal, Alphabet declares that it published its AI principles in 2018 “to hold ourselves accountable for how we research and develop AI, including Generative AI….” The company asserts that the steps it takes to regulate itself are adequate to prevent harm to the company and to society. While we commend Alphabet for establishing principles to guide its development of AI, these commitments alone do not confirm responsible, ethical, or human rights–respecting management of these technologies, which are powerful enough to be deployed, with very little effort, to deceive people with human-like thought and speech.
Proponents of the proposal acknowledge that Alphabet has instituted frameworks, policies, and other tools to establish “guardrails,” but we cannot assess how well-placed and sturdy those guardrails are without understanding the metrics in place and how those metrics are monitored and evaluated on an ongoing basis.
The speed with which Alphabet is traveling in its investment and deployment of generative AI assumes travel along a straightaway. Meanwhile, daily news reports about the shortcomings and resulting impacts of this technology…illustrate that the path forward is more like a mountainous dirt road with hairpin turns. No guardrail is effective at every speed.
As with most shareholder resolutions, the Board does not believe it is in the “best interests of the company and our stockholders” to publish such a report. Which is to say, it is seemingly not in the Board’s interest to document that, at best, it is uncertain about the financial, legal, and reputational risks to the Company of integrating generative AI into every aspect of our information ecosystem, not to mention the risks posed to trust in institutions and democracy as a whole.
For this and many other reasons, we assert that generative AI is a risky investment. The development and deployment of generative AI without risk assessments, human rights impact assessments, or other policy guardrails in place puts Alphabet at risk financially, legally, and reputationally. The company is investing tens of billions of dollars in artificial intelligence, yet we know very little about how it is measuring its return on that investment. And when gAI fails, Alphabet stands to lose significant market value, as it did in the wake of Gemini’s failure earlier this year.
Generative AI can also reinforce existing socioeconomic disparities, running counter to the global trend toward corporate diversity. Disinformation, for instance, further disadvantages people who are already vulnerable or marginalized as a result of their lack of access to the resources, knowledge, and institutional positions that are essential for decision-making power.
Despite acknowledging these challenges and publicly calling for regulation to address them, Alphabet and its peers often privately stave off regulation through lobbying. Last year, the company spent more than $14 million on lobbying against antitrust regulation and on educating lawmakers about AI, among other pursuits.
Proponents of the proposal are not opposed to AI, or generative AI. We are for it, when it does not cause harm or confusion. When it is in line with the need to solve problems for which its power is not just convenient but required. When its use benefits the most people without causing collateral damage, particularly those who are already marginalized and vulnerable because of the obstacles they face in accessing power they need to protect their families, their health, their environments, their livelihoods, and their human and civil rights. We are for AI and generative AI when it is accountable to the people it purports to serve.
With Proposal 12, and in the current absence of general regulation, proponents simply ask Alphabet to minimize the uncertainties about the potential harms and waste of generative AI by measuring its performance against the standards the company has set for itself. We then ask the company to share what it learns once a year, so that shareholders can make informed decisions about their investments with a clear picture of the impact of Alphabet’s generative AI tools on society, including information ecosystems.
By holding itself publicly and measurably accountable to its commitments on generative AI, Alphabet would help rebuild waning trust in not just technology but also the company, setting a standard for other companies to follow at this critical moment.
We encourage you to vote FOR this resolution and take a first step toward telling Alphabet that AI creates value when it centers and serves people.
Alphabet shareholders will vote on Proposal 12 before and during the company’s 2024 annual meeting, which takes place June 7 at 9:00 AM Pacific Time.