Microsoft’s 2023 HRIA Misses the Mark on Expectations for the Tech Sector under the UNGPs

One of the tools companies can use to gauge the impact of their products and services on people and society is a Human Rights Impact Assessment (HRIA), an independent assessment of how a company’s activities might hinder its stakeholders’ enjoyment of human rights. The primary objective of an HRIA is to identify actual or potential adverse human rights impacts related to a company’s activities, the extent to which the company is involved with them, and the steps the company can take to remedy past and ongoing impacts and mitigate future ones.

HRIAs vary in scope based on the business segments, regions, and rights-holders they seek to examine. For example, a mining company may conduct an HRIA to understand how one of its operations affects the rights of surrounding communities, or a consumer products company may conduct one to assess whether the factories in its supply chain respect the rights of their workers.

Nearly two years ago, in response to shareholder pressure, Microsoft Corporation agreed to conduct an HRIA to “identify, understand, assess, and address actual or potential adverse human rights impacts” of the company’s products, services, and business relationships with regard to law enforcement, immigration enforcement, and other government contracts.

The results of that HRIA, published in June 2023, are extremely disappointing. An Open MIC analysis finds that the assessors hired by Microsoft ignored the latest guidance regarding human rights expectations in the tech sector, including how tech companies should understand their potential to “cause,” “contribute to,” and “be directly linked to” adverse human rights impacts arising from the end-use of their products.

While Open MIC supports the assessment’s recommendations for improvements to Microsoft’s policies and practices, we believe the assessment fails to deliver on its promise of giving shareholders and other interested stakeholders a meaningful account of Microsoft’s responsibility for the human rights impacts at issue. We call on Microsoft to implement these recommendations and to remain transparent and accountable to its stakeholders throughout the process.

Background 

In June 2021, the Religious of the Sacred Heart of Mary filed a shareholder resolution with the support of the Investor Advocates for Social Justice (IASJ) calling on Microsoft to conduct an independent, third-party assessment to “identify, understand, assess, and address actual or potential adverse human rights impacts” of the company’s products, services, and business relationships with regard to law enforcement, immigration enforcement, and other government contracts. Open MIC collaborated closely with IASJ to develop the resolution, emphasizing the need for an impartial assessment. 

The resolution raised concerns that Microsoft’s surveillance products enable discriminatory policing and incarceration of Black, Indigenous, and People of Color (“BIPOC”) communities, citing Microsoft’s contracts with US Immigration and Customs Enforcement (ICE), its sale of one of the world’s largest real-time digital surveillance systems to the New York City Police Department (NYPD), and controversial contracts with the Department of Defense. It also addressed Microsoft’s development and maintenance of the NYPD’s Domain Awareness System (DAS), which surveils New Yorkers without warrants through cameras, automatic license plate readers, and radiological sensors. ICE uses the DAS database to target immigrants for deportation, and Microsoft partners with ICE through its Azure Government cloud computing arm.

In October 2021, Microsoft committed to conducting the independent assessment and publishing the report. The shareholders then withdrew their resolution.

Scope & Method of the Assessment

Microsoft retained the law firm Foley Hoag (“the Assessors”) to conduct an HRIA and co-design the structure of the final report. The Assessors focused their assessment on Microsoft’s licensing of cloud services and artificial intelligence (AI) technologies to US federal and state law enforcement agencies and US immigration authorities. They sought to determine (1) to what extent, if any, Microsoft is responsible for adverse human rights impacts stemming from the misuse of its products by these agencies, particularly with respect to BIPOC communities, and (2) what, if anything, Microsoft should do to mitigate or remediate those impacts [1].

The Assessors reviewed Microsoft’s policies and internal documents as well as publicly available information about the company. They also interviewed members of the socially responsible investment and human rights communities, individuals within government agencies, and Microsoft personnel “to understand Microsoft’s role and develop reasonable and effective recommendations” [2]. Open MIC was interviewed as part of this process.

The United Nations Guiding Principles on Business and Human Rights (UNGPs), a set of voluntary guidelines for States and companies to prevent, address, and remedy human rights abuses committed in business operations, served as the methodological basis for the assessment. Following the UNGPs’ expectations for due diligence, the Assessors sought to (1) identify the actual or potential adverse human rights impacts with which Microsoft might be involved through its enterprise cloud services and AI technologies, (2) assess whether Microsoft is “causing,” “contributing to,” or “directly linked to” those impacts, and (3) recommend appropriate mitigation strategies in the event of potential harm and remediation strategies in the event of actual harm [3].

Though Microsoft initially committed to publishing the final report by late 2022, the report was not made available until June 2023. The comments in this post reflect Open MIC’s reaction to the report’s analysis and findings.

Applying the UNGPs in the Tech Sector Context

The Evolving Understanding of the UNGPs & Tech

The UNGPs are a high-level framework, designed over ten years ago, to help companies and States understand their responsibilities for respecting and protecting human rights in a business context. Under the UNGPs, a company can be “involved” with an adverse human rights impact in three ways: (1) it can “cause” an impact solely through its own activities; (2) its activities can “contribute” to an impact caused by one or more other actors; and (3) it can be “directly linked to” an impact if it has a business relationship with an actor that is causing or contributing to the impact [4].

As the Assessors note, determining the mode of involvement matters because it carries consequences: under the UNGPs, companies that cause or contribute to adverse human rights impacts are expected to take part in remediating them, whereas companies that are merely directly linked to an impact are expected only to use whatever leverage they have in their business relationships to mitigate further impacts. In the report, the Assessors write that “How to make a determination regarding cause, contribution, or direct linkage is… often unclear based solely on the text of the UNGPs” and that because the UNGPs are a set of principles and not law, there is “considerable latitude in addressing both the question of causation and the remediation or mitigation strategies that might be available” [5].

While Open MIC agrees that the UNGPs themselves do not offer readily applicable criteria for determining whether a particular company’s actions amount to causation, contribution, or linkage, one relevant source does offer helpful parameters. The Office of the United Nations High Commissioner for Human Rights, “an authoritative voice of the interpretation of international human rights standards, including the UNGPs” [6], has been engaged in an ongoing project of “providing authoritative guidance and resources for implementing the [UNGPs] in the technology space” [7]. This UN “B-Tech Project” was established in 2019 following consultations with civil society, business, States, and other experts [8].

The B-Tech Project has published several foundational papers elaborating on the expectations of the UNGPs for the tech sector, a number of which address how tech companies should understand their potential to “cause,” “contribute to,” and “be directly linked to” adverse human rights impacts related to the end-use of their products, i.e., the principal question of Microsoft’s assessment. It is therefore notable that none of the B-Tech Project papers or resources were cited in the final report. Open MIC believes the assessment would have been more relevant and robust had it included the B-Tech Project’s more up-to-date, sector-specific guidance.

Where the Assessment Failed to Reflect B-Tech Project Guidance

The Assessors conclude that Microsoft could “at most” be directly linked to the adverse impacts that emanate from its Azure services, and that “without more involvement from Microsoft in the development of the products or services” it could not be causing or contributing to any adverse impacts as those terms are understood under the UNGPs [9]. However, in one of its foundational papers, the B-Tech Project plainly states that “In the context of end-use of its products and services, a technology company can cause, contribute or be directly linked to an adverse human rights harm” [10].

According to the B-Tech Project, “Contribution implies that a technology company’s actions and decisions–including in the course of product design, promotion, deployment, selling/licensing and oversight of use–facilitated or incentivized the user in such a way as to make the adverse impact more likely” [11]. And while the B-Tech papers note that it is not possible to provide an exhaustive ex ante “check list” of situations that would fall into one category or another, they identify non-exhaustive factors that can be used to determine whether a technology company is contributing to an adverse impact:

  1. the extent to which a company’s actions or decisions facilitated or enabled conditions that make it possible for use of a product, service, or solution to cause harm;

  2. the extent to which a company’s actions or decisions during design, promotion, and marketing incentivized or motivated users to use a product or service in ways that cause harm; and 

  3. whether the company knew or should have known that there are human rights risks associated with a particular technology, customer, or user but failed to take any action to address them [12].

This guidance from the B-Tech Project calls into question the Assessors’ conclusions that Microsoft could at most be directly linked to the alleged adverse impacts emanating from its Azure services and that Microsoft neither causes nor contributes to the harms associated with the creation and deployment of law enforcement digital systems like DAS and Aware. It would have been more illuminating had the Assessors applied the B-Tech Project’s more relevant and nuanced understanding of contribution in the context of technology end-use and reasoned through the factors listed above. Had they done so, the Assessors might well have arrived at different determinations of Microsoft’s level of responsibility and, in turn, its obligation to participate in remedying the adverse impacts at issue in this assessment.

With respect to Microsoft’s level of involvement with harms associated with the use of third-party apps available in the Azure Marketplace, the Assessors surprisingly determined that it was not “necessary to assign a formal level of responsibility when Microsoft is simply providing a platform on which developers create third party apps that may be used in beneficial or abusive ways” [13]. Yet according to B-Tech Project guidance, situations of linkage may exist even where the end-use occurs beyond a company’s first-tier customer and user relationships [14].

The B-Tech Project further advises that individual companies need to “scrutinise [their] particular blend of revenue models, value propositions and value chain relationship[s] to address any ‘baked in’ business practices that create significant and recurring risks to people” [15]. Again, the analysis would have been more insightful and informative to stakeholders had it addressed these considerations, and it might ultimately have come to different conclusions about Microsoft’s responsibility for harms related to third-party apps.

Without reference to this more relevant and authoritative guidance on interpreting the UNGPs in the tech sector, the assessment fails to deliver on its promise of giving shareholders and other interested stakeholders a meaningful account of Microsoft’s responsibility for adverse human rights impacts emanating from its law enforcement, immigration enforcement, and other government contracts.

A Mischaracterization of the Nature of Digital Technologies and Digital Rights Impacts

A particularly troubling feature of the assessment is its repeated comparison of the sophisticated, proprietary products and services Microsoft offers to more rudimentary, fungible products and services in other sectors. The Assessors claim that “Providing a platform in the stream of commerce is akin to providing parts to build a bridge or a car, or paper to write a book” and that “cloud platforms and AI technologies are analytically similar to building materials used for any end purpose” [16].

If these analogies held, the B-Tech Project would not need to exist. In fact, there is something qualitatively unique about the business models, value chains, and potential human rights impacts in the tech sector, which is why the UNGPs need to be interpreted differently in the context of tech.

New digital technologies such as cloud computing, AI, facial recognition, and the Internet of Things are “contributing to large-scale infringements on privacy, exacerbating ethnic conflict and dissemination of hate speech, undermining democratic processes, enhancing state surveillance, putting children at risk, facilitating live-streaming of abhorrent acts [and] online violence against women and LGBTI persons and others, and ‘algorithmic discrimination’” [17]. 

As the B-Tech Project notes, these incidents are not so much outliers as built into the logic of how the business of technology has been constructed and evolved [18]. As a result, any meaningful assessment of a tech company’s human rights impacts ought to take account of the unique dynamics and incentives of the sector and the risks they pose.

Moving Forward

While Open MIC supports the recommendations the Assessors included in the final report, the overall analysis leaves out many important considerations that we believe would have led to a more nuanced and accurate assessment of Microsoft’s responsibility for the impacts raised in the original shareholder resolution. We encourage Microsoft to use authoritative guidance more relevant to the tech sector when assessing its human rights impacts in the future.

We agree with the Assessors’ finding that while Microsoft has robust human rights policies and practices, civil society organizations like ours feel that the company’s lack of transparency hinders its ability to speak credibly on human rights issues [19]. We encourage Microsoft not only to implement the Assessors’ recommendations but also to make public its timeline for and progress in doing so, so that we and other stakeholders can have greater confidence in its commitment to human rights.


[1] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 1.

[2] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 1.

[3] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 13.

[4] See UN Guiding Principle 13.

[5] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 13.

[6] UN B-Tech, “Applying the UN Guiding Principles on Business and Human Rights to digital technologies: Overview and Scope” (November 2019), 4.

[7] United Nations Human Rights Office of the High Commissioner, “B-Tech Project.”

[8] United Nations Human Rights Office of the High Commissioner, “B-Tech Project.”

[9] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 33.

[10] UN B-Tech, “Taking Action to Address Human Rights Risks Related to End-Use” (September 2020), 4.

[11] UN B-Tech, “Access to remedy and the technology sector: basic concepts and principles” (January 2021), 10.

[12] UN B-Tech, “Taking Action to Address Human Rights Risks Related to End-Use” (September 2020), 6.

[13] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 38.

[14] UN B-Tech, “Access to remedy and the technology sector: basic concepts and principles” (January 2021), 10.

[15] UN B-Tech, “Addressing Business Model Related Human Rights Risks” (July 2020), 10.

[16] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 44, 34.

[17] UN B-Tech, “Applying the UN Guiding Principles on Business and Human Rights to digital technologies: Overview and Scope” (November 2019), 2.

[18] UN B-Tech, “Addressing Business Model Related Human Rights Risks” (July 2020), 5.

[19] Foley Hoag, “A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies” (June 2023), 2.