Recent breakthroughs in generative artificial intelligence (gAI) have brought AI-based products into the mainstream, but they have also created the potential to generate and spread disinformation at a previously inconceivable rate. The stakes are especially high in 2024, a major election year in which more than half the world's population has a stake in over 60 elections; the potential negative effects of disinformation on a global scale are staggering. At the same time, the companies that develop and deploy gAI products are exposing themselves to enormous risk.

Open MIC and our partners have organized a shareholder engagement campaign to urge companies to take a closer look at the ways that they mitigate these risks, in order to promote both public welfare and long-term success of the companies themselves. The campaign involves shareholder proposals at Microsoft, Alphabet and Meta, recommending that the companies issue annual reports on the risks of misinformation and disinformation produced and amplified by their deployment of gAI. The proposals ask the companies not only to assess the material risks stemming from gAI products, but also to outline steps they will take to mitigate potential harms from gAI-powered mis- and disinformation and to evaluate their effectiveness in doing so.

At Meta, the proposal will be Proposal 6 on the ballot, with the company’s AGM taking place May 29 at 10am PT. At Alphabet, the proposal will be Proposal 12 on the ballot, with the company’s AGM taking place June 7 at 9am PT.

Recent Updates

April 24, 2024: Exempt Solicitation Letter filed at Meta Platforms, Inc. by Arjuna Capital

April 19, 2024: Meta Platforms, Inc. releases 2024 proxy statement and sets date of annual general meeting for May 29 at 10am PT.

Proposal Co-filers


Full Shareholder Resolutions


Key Messages for Sustainable Investors

1. Unconstrained generative AI is a risky investment.

The development and deployment of generative AI without risk assessments, human rights impact assessments, or other policy guardrails in place exposes companies to financial, legal, and reputational risk.

2. Generative AI-powered false content is polluting our global information environment.

Not only do gAI chatbots inexplicably and erratically fabricate information, but they also make it easy for malicious actors to create and spread deceptive yet believable content faster, and with more precise targeting, than ever before.

3. Company commitments to self-regulation are not enough to prevent harms; by aligning policies with best practices, companies can avoid regulatory uncertainty.

Company commitments to rectify the impacts of their technologies and respect and promote human rights represent a baseline standard of good corporate practice. But integrating risk and human rights impact assessments into AI development from the start would do more to create a race to the top.

4. Investors are uniquely positioned to help companies understand how the market will value meaningful commitments to mitigate the risks of generative AI.

Asset managers and all investors should use their power to push companies to align their AI development and deployment policies and practices with proposed regulatory guardrails in order to secure the integrity of our information ecosystems.

2024 Shareholder Resolutions on Generative AI & Disinformation: A Build-the-Vote Messaging Guide for Sustainable Investors