Open MIC Refiles Shareholder Proposal at Microsoft Calling for Annual Report on Mitigation of AI Risks

Encouraged by strong votes at AI companies, as well as Microsoft’s inaugural Responsible AI Transparency Report, Open MIC and partners seek to build momentum for more disclosure and remediation of AI risks. 

In July, Open MIC, Arjuna Capital, and Ekō refiled a shareholder proposal urging Microsoft to measure and report on the risks of misinformation and disinformation linked to its AI products. They did so despite Microsoft’s release in May of its inaugural Responsible AI Transparency Report. While the report increased the company’s transparency about how it develops and deploys generative AI, it did not adequately satisfy key demands outlined in the proposal for the company to assess “the risks to the Company’s operations and finances as well as risks to public welfare presented by the company’s role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence, and what steps, if any, the company plans to remediate those harms, and the effectiveness of such efforts.”

In addition to Microsoft, the investors have also filed similar shareholder proposals at Meta and Alphabet calling for such reporting.

Amid AI’s explosive growth and continued hype, it is vital for shareholders and human rights advocates to have the opportunity to engage with companies leading the AI race, encouraging them to adopt responsible policies that push toward safe, human-centric AI development. Too many of these companies continue development at breakneck speeds in the absence of strong regulation, relying only on internal principles and protocols to avoid harms to the firms’ operations and finances, as well as to human rights.   

A strong vote signals the need for increased transparency 

In 2023 and 2024, the same investors filed shareholder proposals at Microsoft, Meta and Alphabet calling for reporting on risks linked to generative AI. Microsoft shareholders were the first to vote on the proposal, which appeared on the ballot as Proposal 13 at the company’s 2023 annual meeting. Over 21% voted in favor of Proposal 13 – an “impressive” result for a first-time proposal, according to Responsible Investor. Votes at Alphabet and Meta were even stronger, garnering 45.7% and 53.6% support, respectively, from independent shareholders.

For context, shareholder proposals rarely achieve a majority vote, and even when they do, they are non-binding. Instead, votes serve as a strong signal of where the shareholder base stands on a particular issue. Vote counts as low as 10% often spur company leadership to address the issue, either through continued engagement with shareholders or through concrete actions.

Following the 21% “for” vote in December, Microsoft published its inaugural Responsible AI Transparency Report. While Microsoft does not present the report as a response to the shareholder proposal, it addresses a number of areas that reflect the issues raised in Proposal 13.

The report ultimately falls short, though, of providing the measurable data shareholders would need to evaluate risks to the company related to its pursuit of generative AI. It does offer a deeper explanation of the company’s approach to AI development than Microsoft had previously published, and it clarifies, for a public audience, how the company’s internal structures operate in relation to its self-authored AI development values.

The report lacks detail, however, on the risks of generative AI and on metrics that the company and the public could use to monitor and measure the harms stemming from it, including fabricated information and other inaccuracies that can affect the operations of financial markets, electoral systems, and other public systems that form the foundations of functioning societies. Open MIC and its partners sought a report that outlines what the risks are, along with methods and metrics for quantifying them, released publicly and periodically in structured formats. A central concern for the filers is that the public and independent third parties be able to evaluate the effects year over year.

How the report adds value

Microsoft’s report does offer a number of case studies that illustrate the company’s very real struggle with AI risks, much of which revolves around mitigating the generation and spread of harmful content. One particularly effective case study centers on an external assessment of Microsoft Designer, an AI image-generation app. The assessment, performed by NewsGuard, initially found that the app generated harmful content in 12% of test scenarios; after Microsoft adjusted the safety settings, that figure fell to 3.6%. Measurable metrics, third-party participation, and a clear response by Microsoft to an undesirable result make this a valuable case study for investors. If the report followed this paradigm more often, it would be a far more useful tool for evaluating AI risks.

The report most directly addresses concerns raised in Proposal 13 in the section on “managing information integrity risks,” wherein the authors admit that it is becoming increasingly difficult to trace AI-generated content. They also discuss, in slightly more detail, the risks to elections and global democracy stemming from AI-based content. But the report provides only a top-level overview of Microsoft’s efforts to mitigate disinformation facilitated by its AI products. 

Perfect isn’t possible, but we do need metrics

Microsoft acknowledges that no approach to evaluating risks and harms is perfect, but the company stops short of indicating how severe the risks of generating and spreading harmful content may be, or how frequently such incidents occur, for the company and for the public.

On “jailbreaks,” or incidents where users are able to circumvent safeguards to create potentially harmful content, Microsoft outlines its process of mapping, measuring and managing risks during the development of AI products. Assuming the company regularly carries out these processes, it must have some data on the frequency of jailbreaks and the resulting risks, which it has chosen not to include. In fact, by outlining its methods for finding and addressing jailbreaks, the company may be increasing its risk of misuse, while the measurable data it withholds would be far more useful to investors and to the public.

Other sections of the report follow a similar pattern: presenting a potential issue; providing a cursory overview of the company’s attempts to address it; and omitting data, definitions and other specifics that would help the reader better understand the scope of the problem. 

Certainly, this document raises more questions than it answers. A section on “sensitive use” explains that Microsoft employees flagged over 300 potentially damaging applications of the company’s products in 2023 alone, yet it fails to mention how many were validated by leadership, or even the nature of the hazards reported. Even the section that outlines the company’s “30 responsible AI tools” fails to define those tools: it says there are 30 of them, but provides no details on what they do, how they work or who uses them.

In evaluating public-facing corporate materials of this type, it is important to keep in mind their utility for the intended audience. If the report is intended to offer customers more information on Microsoft’s approach to responsible AI development, it achieves that goal – but only inasmuch as it serves as a positive narrative for the company’s AI program. If, on the other hand, the report is intended to provide the public and investors with actionable information about the risks of generative AI, both to society and to the company itself, it falls short of being useful. While concerns over misinformation and disinformation linked to AI products receive a surface-level nod, the report requested in Proposal 13 is nowhere to be found.

As it introduces the topic of information integrity, Microsoft simultaneously warns and brags that AI-generated content “can be indistinguishable from real-world capture of scenes by cameras and other human-created media.” In fact, even unrealistic AI-generated misinformation and disinformation create immense risks for society, as well as for the companies that market AI tools, which face significant legal, financial and reputational exposure. If Microsoft intends this report to be more than a reputational marketing vehicle, it must expand the scope to include pertinent information about the risks of generative AI. It must not only outline what it does, but quantify the effectiveness of its efforts. And it must share its findings with the public, so that together we can avoid what could become irreparable harms.

Click here to view the full shareholder proposal (PDF)