Generative AI & Disinformation: Exempt Solicitation Filed at Meta

As part of our ongoing campaign to address AI-generated misinformation and disinformation, Open MIC co-filed a shareholder resolution at Meta requesting an annual report on the risks of misinformation and disinformation facilitated by generative AI (gAI). Predictably, Meta’s leadership is recommending a vote “against” the proposal, arguing that its current policies are sufficient to mitigate risks stemming from gAI technology.

According to Meta’s opposition statement, “We have made significant investments in our safety and security efforts to combat misinformation and disinformation, including content policies and enforcement, AI tools, and partnerships.” Essentially, Meta sidesteps the issue by saying that it employs safeguards when developing and deploying AI tools, and therefore has no risks to report to shareholders.

Open MIC and our co-filers disagree. Lead filer Arjuna Capital filed an exempt solicitation letter with the SEC in response to Meta’s opposition statement, further clarifying the purpose of the proposed annual report and the need for additional transparency and oversight related to gAI misinformation.

“In its opposition statement, Meta describes its responsible AI approach and governance to obfuscate the need to fulfill this Proposal’s request,” Arjuna Capital writes in the letter. “Yet, the requested report is asking the Company to go beyond describing responsible AI principles. We are asking for a comprehensive assessment of the risks associated with gAI so that the Company can effectively mitigate these risks, and an evaluation of how effectively the Company tackles the risks identified.”

Arjuna outlines the principal arguments for supporting the proposal as follows:

  1. Meta has an appalling track record of mismanaging its platforms, which has ultimately contributed to societal harm.

  2. Meta's gAI tools have already created false and misleading information.

  3. Meta is failing to quickly identify and moderate gAI mis- and disinformation distributed across its platforms, and its senior leaders underestimate the risks associated with these content moderation failures, even in an important election year.

  4. Misinformation and disinformation disseminated through gAI create risks for Meta and investors alike.

  5. This Proposal goes beyond Meta’s current reporting by requesting an accountability mechanism to ensure the Company is effectively identifying and mitigating mis- and disinformation risks.

Meta shareholders will vote on Proposal 6 before and during the company’s 2024 annual meeting, which takes place May 29 at 10:00 am PT.

Learn More About Our Campaign: