2024 Shareholder Resolutions on Generative AI & Disinformation: A Build-the-Vote Messaging Guide for Sustainable Investors

Introduction

The rise of generative AI (gAI) over the last 18 months has raised numerous questions about how to regulate these powerful new technologies so that they do not compromise people’s ability to exercise their human rights. Generative AI poses a particular threat to the right to freedom of expression, including the right to access information, because it makes it so easy to create and spread deceptive yet believable content. False content threatens people’s ability to make informed decisions, a prerequisite for healthy democracies. More than two billion people will participate in more than 60 elections around the world this year, making access to reliable information about elections and the policy choices they embody essential to sustaining and strengthening democracy.

While governments have a duty to regulate these technologies, the companies investing tens or even hundreds of billions of dollars in building gAI models and applications have a responsibility to identify the risks their products pose to society and to their own businesses, and to take action to mitigate those risks. To date, however, too many companies are charging headlong into the AI race with little more than “responsible” or “ethical” development principles to guide them.

Starting last year, Open MIC and our partners at Arjuna Capital and Ekō have sought to complement ongoing AI policy development at the government level by filing shareholder resolutions at three of the largest AI companies: Microsoft, Alphabet, and Meta.

Our resolutions focus on the damage gAI is doing to information integrity worldwide—damage that has been acknowledged by the companies themselves and by multiple prominent AI experts—and ask each company to publish an annual report that identifies the risks this technology poses to both the company and society, describes how the company will mitigate those risks, and demonstrates the effectiveness of those efforts. Shareholders will vote on the resolutions at Meta on May 29 and at Alphabet on June 7. The Microsoft resolution went to a vote in December 2023 and is eligible for refiling this year.

In the lead-up to the votes at Meta and Alphabet, Open MIC will work with our investor and civil society partners to build the vote for these resolutions. To support this process, we have developed this messaging guide as a tool to help sustainable investors understand and explain the risks and potential harms of generative AI-based misinformation and disinformation in their conversations with tech companies, clients, and fellow shareholders. By urging company leadership to recognize and be transparent about the risks associated with gAI, investors can support long-term, sustainable AI policies and practices that mitigate financial, legal, and reputational risks and prevent widespread societal harms. We encourage you to adopt and adapt these messages as needed. We also welcome your feedback: jdheere@openmic.org.

Key Messages

1.    Unconstrained gAI is a risky investment.

The development and deployment of gAI without risk assessments, human rights impact assessments, or other policy guardrails in place puts companies at financial, legal, and reputational risk.

Financial risks: Companies have invested hundreds of billions of dollars in artificial intelligence, but we know very little about how they are measuring their return on that investment.[1] Meanwhile, when gAI fails, companies stand to sustain huge losses in market value. Alphabet recently lost $90 billion in value [2] in the wake of Gemini’s failure.

“Regardless of your view, if Google is seen as an unreliable source for AI to a portion of the population, that isn’t good for business.”
            - Ben Reitzes and Nick Monroe, analysts at Melius Research

Legal risks: As of March 2024, more than a dozen copyright lawsuits have been filed in the U.S. against AI companies.[3] Major media outlets have sued OpenAI and Microsoft, alleging copyright infringement. Billions of dollars in damages could be at stake in Europe, where media companies are suing Alphabet over AI-based advertising practices. From a privacy perspective, AI’s demonstrated ability to replicate people’s voices and likenesses has already led to one class action lawsuit and is likely to invite more.

Reputational risks: Generative AI is not only accelerating the tech backlash that originated with social media’s content moderation challenges (also driven in part by AI) but has also exposed significant failures of governance, as illustrated by last fall’s debacle at OpenAI. In addition, Big Tech’s opacity and missteps have led to “a marked decrease in the confidence Americans profess for technology and, specifically, tech companies—greater and more widespread than for any other type of institution.”[4] And a recent report by public relations giant Edelman declares “innovation at risk,” documenting a 15-point drop in U.S. trust in AI [5] over a five-year period.

“Globally, trust has declined in AI companies over the past five years from 61 percent to 53 percent. In the U.S., there has been a 15-point drop from 50 percent to 35 percent.”
            - 2024 Edelman Trust Barometer

In addition, gAI can reinforce existing socioeconomic disparities, running counter to the global trend toward corporate diversity.[6] Disinformation, for instance, further disadvantages people [7] who are already vulnerable or marginalized as a result of their lack of access to the resources, knowledge, and institutional positions that are essential for decision-making power. And recent research shows that AI-driven “dialect prejudice”[8] can have harmful consequences: when researchers asked language models to make hypothetical decisions about people based only on how they speak, the models’ judgments of those people’s character, employability, and criminality reflected that prejudice.

2.    Generative AI-powered false content is polluting our global information environment, sowing distrust in institutions and compromising people’s ability to make informed high-stakes decisions, such as whom to vote for in elections.[9],[10]

Not only do gAI chatbots inexplicably and erratically fabricate information, but they also make it easy for malicious actors to create and spread deceptive yet believable content faster, and with more precise targeting, than ever before. When people can’t tell the difference between fact and fiction, it erodes the integrity of democratic elections, financial markets, public health policy, climate action, and a range of other societal systems. Making matters worse, the mere perception that false information is circulating can undermine accurate information; conversely, bad actors can reap a “liar’s dividend” by casting doubt on the veracity of accurate content they don’t like, further eroding our sense of shared reality.

Fabricated content: Generative AI chatbots invent information so frequently that there is a term for it: a “hallucination.” Such fabrications, like failing to tell the correct time or churning out responses in Spanglish,[11] may be amusing as anecdotes, but in aggregate they contribute to a growing distrust of public information. Some hallucinations can lead to real damage, as outlined in a recent study of legal mistakes made by large language models (LLMs),[12] the technology underlying most gAI applications.

Deceptive yet believable content: Generative AI makes it much easier to make a lie look like the truth and much harder for people to detect when they’re being deceived. When people can’t trust the information they see, they are far more likely to make decisions that aren’t in their best interest, or to opt out of decision-making altogether. This is one reason the World Economic Forum has declared AI-powered mis- and disinformation “the top immediate risk to the global economy,”[13] particularly for its role in exacerbating societal and political divides.

In 2024, more than half the world has a stake in more than 60 elections in nations representing 42 percent of global GDP.[14] Already, texts, images, videos, and audio messages faked with gAI have interfered with, and possibly influenced the outcomes of, elections in Slovakia,[15] Argentina,[16] Taiwan,[17] Indonesia,[18] and the U.S. state of New Hampshire.[19]

“Poll workers, voting rights organizations and individual voters of colour face vicious and dangerous disinformation-inspired harassment online and offline.”
            - Samuel Woolley, Centre for International Governance Innovation

And elections aren’t the only systems at stake. Late last year, in the wake of warnings by the Financial Stability Oversight Council [20] and the Bank of England,[21] Senators Mark Warner and John Kennedy co-sponsored a bill [22] proposing “steep penalties for using deepfakes or other artificial intelligence tools to illegally manipulate markets or to engage in securities fraud.”

While some of AI’s most promising applications are in the health sector (its ability to improve early detection of breast cancer is one success story), when gAI fabricates health information or is used to create health misinformation, it can literally cost lives.

AI-generated climate misinformation [23] is also a concern. In a recent letter [24] to Senate majority leader Chuck Schumer, two dozen organizations, including Open MIC, called attention to the ways in which gAI may contribute, directly [25] and indirectly, not only to climate change, owing to the enormous amounts of water and electricity needed to operate these systems, but also to climate denialism. They cite instances in which researchers were able to coax chatbots into producing articles that deny climate change using factual inaccuracies, and they note that AI companies, many of which are also ad-tech companies, profit from the distribution of such disinformation.

It’s not just the content, it’s also the targeting: Generative AI is also making it easier to reach people with tailored messages, supercharging the already problematic targeted advertising practices [26] that helped disseminate false content in previous elections. Despite OpenAI’s recent policy change banning the use of ChatGPT for political campaigning, a campaign consultant in Indonesia reported using OpenAI’s tools and Microsoft’s Azure cloud service to build an app (Pemilu.AI) that he used “to craft hyperlocal campaign strategies and speeches.”[27] Other experts predict that AI “will allow digital campaigning to evolve from a broadcast medium to an interactive one,” with responsive chatbots deployed [28] to engage voters in what would be “like a town hall taking place in every voter’s living room, simultaneously.”

3.    Company commitments to self-regulation are not enough to prevent harms; by aligning policies with best practices, companies can avoid regulatory uncertainty.

Company commitments to rectify the impacts of their technologies and respect and promote human rights represent a baseline standard of good corporate practice. But integrating risk and human rights impact assessments into AI development from the start would do more to create a race to the top.

Self-regulation isn’t adequate to prevent harms: Companies’ publication of AI principles and sign-ons to voluntary commitments, such as those elicited by the White House [29] or the recent AI Elections Accord,[30] mean little without accountability mechanisms to verify firms’ good-faith efforts to adhere to those commitments. Too often, companies’ proposed harm-mitigation strategies focus on downstream remedies applied only after the harm has been done. Truly responsible companies would institute policies that require risk and impact assessments further upstream, during early development phases.

Alignment with coming government regulation is a more sustainable approach: Companies’ failure to take responsible action on AI is exposing them to greater regulatory risk as a legal patchwork emerges across the U.S. and around the world. Companies should seek to align their policies with forward-thinking legislation such as the EU’s AI Act [31] to help foster a consistent and predictable regulatory climate.

Current commitments and disclosures do not constitute substantial implementation: Tech company executives acknowledge that AI poses “profound risks to society”[32] and admit they cannot predict how and when those harms may occur, as we have seen with Google’s Gemini [33] and Meta’s Llama.[34] Yet they have long refused to adopt the practices that would have the most impact in building trust: conducting risk and human rights impact assessments before deploying new technologies, disclosing information about how their algorithmic systems operate, and publishing data so independent researchers can study platform effects, including how well companies enforce their own policies. The AI Elections Accord and other self-regulatory commitments are just high-level promises, with no attached plans, timelines, or deadlines.

“Collectively, they have unleashed upon the world a set of tools and technologies that threaten, in their own words, to ‘jeopardize’ our democratic systems — and done so to enormous profits. At this point, the democracies of the world who may pay the biggest price need more than promises. We need accountability.”
            - Lawrence Norden and David Evan Harris, Brennan Center for Justice [35]

Our resolutions at Alphabet and Meta ask for a concrete commitment to publish an annual report on the specific risks to the company and to public welfare presented by the company’s development of gAI tools and their role in promoting disinformation. We also ask each company to consider how it will mitigate any risks identified and how it will measure its effectiveness in doing so.

If a company does not understand the full range of risks posed and costs incurred by the unconstrained development and deployment of AI, it follows that shareholders cannot fully understand those risks either. Current commitments and disclosure practices therefore cannot constitute substantial implementation. Investors should also note actions companies take that undermine their commitments, such as the dismantling of trust and safety teams [36] and lobbying against regulation [37] that would establish AI guardrails.

4.    Investors are uniquely positioned to help companies understand how the market will value meaningful commitments to mitigate the risks of gAI.

Asset managers and all investors should use their power to push companies to align their AI development and deployment policies and practices with proposed regulatory guardrails in order to secure the integrity of our information ecosystems. One easy way to exercise this power is to vote for our resolutions on this year’s proxy ballots at Alphabet and Meta, which call on the companies to integrate risk and human rights impact assessments into their development of gAI systems and tools, take action to mitigate identified risks, and demonstrate the effectiveness of their efforts, all before deploying these technologies to the public.

Investor power and the performance of AI-related proposals to date: This year, shareholder proposals at tech companies [38] ask them to increase transparency around AI, including by assessing AI-related risks, particularly to children and elections; increasing investment in content moderation; reporting on the human rights impacts of AI-driven advertising practices; establishing principles for ethical AI development; and appointing directors with substantial AI expertise. It’s clear that AI will be a topic for shareholder engagement for years to come. Two AI-related proposals have already come to a vote, at Microsoft and Apple.

Microsoft: The proposal [39] earned 21 percent of the vote in December 2023. Responsible Investor, a leading industry trade publication, called it an “impressive” result [40] for a first-time proposal, showing that AI-generated misinformation is a key issue for the company’s shareholders. Supporters included Norway’s trillion-dollar sovereign fund, the office of the New York City Comptroller, and California public pension giant CalSTRS.

Apple: The AFL-CIO filed a shareholder proposal focusing on the risks AI presents to workers. The resolution requested that the company prepare a transparency report on its use of AI in its business operations and disclose any ethical guidelines it has adopted regarding the use of AI technology. The first-year proposal received 37.5 percent of the vote.[41] Apple had tried to exclude the proposal from the proxy ballot.

5.    AI accountability is just good business, and the time to demand it is now.

AI companies own and control the vast majority of our global information ecosystem. Smart investors will support regulation and promote practices that point toward long-term sustainability in this volatile time. This includes calling for reliable and independent mechanisms to hold companies accountable for the AI technologies they build, particularly when they have the potential to compromise fundamental democratic processes like elections. Responsible, human rights–based AI policies and disclosures will benefit the whole of society, including shareholders and the companies they own.

REFERENCES AND FURTHER READING

References

[1] Investors punish Microsoft, Alphabet as AI returns fall short of lofty expectations
https://www.reuters.com/technology/investors-punish-microsoft-alphabet-ai-returns-fall-short-lofty-expectations-2024-01-31/

[2] Google’s Gemini Headaches Spur $90 Billion Selloff
https://www.forbes.com/sites/dereksaul/2024/02/26/googles-gemini-headaches-spur-90-billion-selloff/?sh=21cc71d572e4

[3] Master List of lawsuits v. AI, ChatGPT, OpenAI, Microsoft, Meta, Midjourney & other AI cos.
https://chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoft-meta-midjourney-other-ai-cos/

[4] How Americans’ confidence in technology firms has dropped
https://www.brookings.edu/articles/how-americans-confidence-in-technology-firms-has-dropped-evidence-from-the-second-wave-of-the-american-institutional-confidence-poll/

[5] Technology Industry Watch Out, Innovation at Risk
https://www.edelman.com/insights/technology-industry-watch-out-innovation-risk

[6] Global Corporate Governance Trends for 2024
https://corpgov.law.harvard.edu/2024/03/06/global-corporate-governance-trends-for-2024/#more-163181

[7] In Many Democracies, Disinformation Targets the Most Vulnerable
https://www.cigionline.org/articles/in-many-democracies-disinformation-targets-the-most-vulnerable/

[8] Dialect prejudice predicts AI decisions about people’s character, employability, and criminality
https://arxiv.org/abs/2403.00742

[9] Joint letter to platform companies on deepfakes and manipulated media
https://form.sflc.in/joint-letter-to-platform-companies-on-deepfakes-and-manipulated-media/

[10] Joint letter to ECI on deepfakes and manipulated media
https://form.sflc.in/joint-letter-to-eci-on-deepfakes-and-manipulated-media/

[11] Why Was ChatGPT Suddenly Speaking Spanglish?
https://www.inc.com/jennifer-conrad/why-was-chatgpt-suddenly-speaking-spanglish.html

[12] Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive
https://law.stanford.edu/2024/01/11/hallucinating-law-legal-mistakes-with-large-language-models-are-pervasive/

[13] Global Risks Report 2024
https://www.weforum.org/publications/global-risks-report-2024/

[14] The impact of generative AI in a global election year
https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year/

[15] A fake recording of a candidate saying he’d rigged the election went viral. Experts say it’s only the beginning
https://www.cnn.com/2024/02/01/politics/election-deepfake-threats-invs/index.html

[16] Bullrich lashes out at ‘dirty campaigning’ ahead of election
https://www.batimes.com.ar/news/argentina/bullrich-melconian-dirty-campaign-argentina-election.phtml

[17] How China Exploited Taiwan’s Election—and What It Could Do Next
https://foreignpolicy.com/2024/01/23/taiwan-election-china-disinformation-influence-interference/

[18] When AI brings ‘ugly things’ to democracy
https://www.codastory.com/newsletters/elections-indonesia-ai-abuse/

[19] How Investigators Solved the Biden Deepfake Robocall Mystery
https://www.bloomberg.com/news/newsletters/2024-02-07/how-investigators-solved-the-biden-deepfake-robocall-mystery

[20] AI is a danger to the financial system, regulators warn for the first time
https://www.cnn.com/2023/12/14/economy/ai-danger-financial-system/index.html

[21] Bank of England warns AI could pose financial stability risks
https://www.theguardian.com/business/2023/dec/06/bank-of-england-launches-ai-review-amid-uk-financial-stability-risk-fears

[22] US senators propose tough fines for AI-driven securities fraud or market manipulation
https://www.cnn.com/2023/12/19/tech/senators-fines-ai-securities-fraud

[23] AI Reveals Hotspots of Climate Denial
https://www.scientificamerican.com/article/ai-reveals-hotspots-of-climate-denial/

[24] Joint Letter to Sen. Chuck Schumer on Climate and AI
https://epic.org/wp-content/uploads/2023/09/Final-Letter-to-Sen.-Schumer-on-Climate-AI-1.pdf

[25] Generative AI’s environmental costs are soaring — and mostly secret
https://www.nature.com/articles/d41586-024-00478-x

[26] It’s the Business Model: How Big Tech’s Profit Machine is Distorting the Public Sphere and Threatening Democracy
https://rankingdigitalrights.org/its-the-business-model/

[27] Generative AI may change elections this year. Indonesia shows how
https://www.reuters.com/technology/generative-ai-faces-major-test-indonesia-holds-largest-election-since-boom-2024-02-08/

[28] Who’s accountable for AI usage in digital campaign ads? Right now, no one.
https://ash.harvard.edu/who%E2%80%99s-accountable-ai-usage-digital-campaign-ads-right-now-no-one

[29] List of Voluntary AI Commitments
https://www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf

[30] A Tech Accord to Combat Deceptive Use of AI in 2024 Elections
https://securityconference.org/en/aielectionsaccord/

[31] EU AI Act: first regulation on artificial intelligence
https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[32] Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’
https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html

[33] Gemini image generation got it wrong. We’ll do better.
https://blog.google/products/gemini/gemini-image-generation-issue/

[34] AI models frequently ‘hallucinate’ on legal queries, study finds
https://thehill.com/policy/technology/4403776-ai-models-frequently-hallucinate-on-legal-queries-study-finds/

[35] New Tech Accord to Fight AI Threats to 2024 Lacks Accountability for Companies
https://www.brennancenter.org/our-work/analysis-opinion/new-tech-accord-fight-ai-threats-2024-lacks-accountability-companies

[36] Big Tech Ditched Trust and Safety. Now Startups are Selling it Back as a Service
https://www.wired.com/story/trust-and-safety-startups-big-tech/

[37] Big Tech lobbying on AI regulation as industry races to harness ChatGPT popularity
https://www.opensecrets.org/news/2023/05/big-tech-lobbying-on-ai-regulation-as-industry-races-to-harness-chatgpt-popularity/

[38] 2024 Tech Proposals
https://investorsforhumanrights.org/news/2024-tech-proposals

[39] Shareholders Demand Transparency from Microsoft on Material Risks of AI-Powered Disinformation
https://www.openmic.org/news/shareholders-demand-transparency-from-microsoft-on-material-risks-of-ai-powered-disinformation

[40] Resolution round-up: Big investors support pioneering AI proposal at Microsoft
https://www.responsible-investor.com/resolution-round-up-big-investors-support-pioneering-ai-proposal-at-microsoft/

[41] Apple AI Proposal Galvanises Investors
https://www.esginvestor.net/investors-encouraged-by-support-for-apple-ai-proposal/



Further Reading from Open MIC and Partners

2024 Campaign: Report on Generative Artificial Intelligence Misinformation and Disinformation Risks
https://www.openmic.org/generative-artificial-intelligence-misinformation-and-disinformation

Assessing Risks of AI Misinformation and Disinformation (Proxy Preview, March 2024)
https://www.proxypreview.org/all-contributor-articles/assessing-risks-of-ai-misinformation-and-disinformation

Ahead of This Year’s Elections, Shareholders Demand Transparency from Big Tech on Risks of AI-Powered Disinformation (Open MIC, January 2024)
https://www.openmic.org/news/ahead-of-this-years-elections-shareholders-demand-transparency-from-big-tech-on-risks-of-ai-powered-disinformation

Will Democracy Die in AI’s Black Box? Not If These Shareholders Can Help It (Tech Policy Press, December 2023)
https://www.techpolicy.press/will-democracy-die-in-ais-black-box-not-if-these-shareholders-can-help-it/

Microsoft Investors Weigh in on Generative Artificial Intelligence’s Mis- and Disinformation Problem (Arjuna Capital, December 2023)
https://arjuna-capital.com/press-releases-archive/2023/12/7/press-release-microsoft-investors-weigh-in-on-generative-artificial-intelligences-mis-and-disinformation-problem

Microsoft Investors Demand Answers on Misinformation and Disinformation Impacts of Artificial Intelligence (Open MIC, July 2023)
https://www.openmic.org/news/microsoft-investors-demand-answers-on-misinformation-and-disinformation-impacts-of-artificial-intelligence