Shareholders Demand Transparency from Microsoft on Material Risks of AI-Powered Disinformation

At Microsoft’s annual general meeting on Thursday, shareholders will vote on a resolution recommending that the company issue an annual report on the material risks that generative AI-powered mis- and disinformation pose to its business operations. The proposed report would include Microsoft’s plans to mitigate those risks, as well as the effectiveness of its efforts. The resolution, which appears as Proposal 13 in the company’s proxy statement, was submitted by Arjuna Capital (lead filer), Azzad Asset Management, Eko, and Open MIC.

Proposal 13 gives voice to widespread concerns that generative artificial intelligence (gAI) tools such as ChatGPT will further accelerate the creation and dissemination of false information online, with potentially dire consequences for the more than 50 elections taking place worldwide next year, including the U.S. presidential election, and for the future of democracy.

Generative AI poses a threat to elections via three main vectors. First, it makes creating false content much easier and faster, enabling misinformation and disinformation to proliferate widely. Second, it allows online marketers, nearly half of whom rely on AI for their campaigns, to target audiences more easily, facilitating the dissemination of false information. Third, even if specific election outcomes aren’t compromised by gAI-powered misinformation, the mere perception that misinformation pervades our media environment fuels distrust in all information.

In January, Eurasia Group ranked generative AI as the third-highest political risk confronting the world in 2023, warning that the new technologies “will erode trust, empower demagogues and authoritarians, and disrupt businesses and markets.” Just this fall, the distribution of AI-generated content in Slovakia and Argentina contributed to the manipulation of public opinion, undermined trust in institutions, and possibly swayed elections.

Such examples validate predictions by some of the world’s leading AI thinkers, including the briefly embattled Sam Altman, CEO of OpenAI, who earlier this year said he was “worried that these models could be used for large-scale disinformation.” 

Microsoft has invested more than $13 billion for a 49 percent stake in OpenAI, which developed ChatGPT. Shareholders acknowledge the company’s efforts in establishing responsible AI principles, making voluntary commitments to vet its gAI tools for safety, reporting on its adherence to codes of practice on disinformation in the EU and Australia, and creating new tools and processes for use during elections. Yet they remain concerned by company actions that contradict these commitments and prioritize dominating the AI marketplace above all else.

Shareholders question the company’s decision to integrate ChatGPT into its products with full knowledge that the application regularly returns results that, as Microsoft admits, “sound convincing but are incomplete, inaccurate, or inappropriate.” According to Natasha Lamb, Managing Partner at Arjuna Capital, “It’s not a question of whether ChatGPT generates misinformation; it does. The question is to what extent the technology can and will be manipulated to spread disinformation globally, causing political and societal harm. The next question is how Microsoft plans to address such an immense business risk.”

Even the firing of Altman by OpenAI’s own board, reportedly owing in part to safety concerns, did not give Microsoft pause. Instead, the company immediately offered Altman his own AI research lab, seemingly without any consideration of whether his ouster was, in fact, justified by other board members’ worries that he was pushing the company to move too fast.

Implications of AI-Linked Mis- and Disinformation for Microsoft

Shareholders, concerned about these contradictions, seek concrete evidence that Microsoft is adequately assessing the material risks and potential impacts of disinformation created and spread with its gAI tools. Without action, these risks threaten the company’s operations, finances, and reputation, as well as public welfare.

“Our member shareholders cannot properly evaluate their investments without a transparent look at the risks posed by an uncertain regulatory environment, unknown social impacts, and unclear plans for harm mitigation,” noted Christina O’Connell, Senior Manager of Investments and Shareholder Engagement at Eko.

With questions about defamation liability for AI-generated content still unresolved, shareholders are also concerned that Microsoft may be exposing itself to significant litigation and regulatory risk. Many legal experts believe the liability shield technology companies enjoy under Section 230 of the Communications Decency Act may not apply to content generated by ChatGPT. Senator Ron Wyden, who co-authored the law, says Section 230 “has nothing to do with protecting companies from the consequences of their own actions and products.” Experts are also debating how the principles of defamation law apply to AI-generated falsehoods, a question that opens the company up to substantial litigation risk. And ChatGPT is already running afoul of regulators, with investigations underway by European and Canadian data protection authorities.

Proponents of Proposal 13 are asking fellow stockholders to vote for a resolution that urges Microsoft to report to investors annually on these risks, the steps it will take to mitigate them, and the effectiveness of those efforts.

“Microsoft is well aware of how frequently ChatGPT spouts blatant falsehoods, and how easily it could be deployed by malicious actors to undermine trust in elections,” said Michael Connor, Executive Director of Open MIC, a co-filer of the proposal. “It’s not enough for the company to claim it has mitigated the threats to democracy and risks to investors—Microsoft needs to show its work.”

What’s next?

The vote will take place on December 7, 2023. Official results of the vote will be released within five days of the meeting. If the resolution does not achieve a majority of the vote, Open MIC and co-filers may have the option to refile the proposal in 2024.