Open MIC report details business risks for companies and calls on Facebook, Google and others to better explain efforts to address these issues
A new report released today details how leading tech companies are grappling with an onslaught of disinformation and online hate speech, and makes recommendations on how they can improve their policies and practices. The release follows shortly after Facebook admitted that “malicious actors” created “fake personas” to spread misinformation during last year’s U.S. election, and major online advertisers such as AT&T and Johnson & Johnson suspended their advertising on Google’s platforms because of ads appearing near hate speech material.
The report, ‘Fake News,’ Hate Speech & Free Expression: Corporate Responsibility in an Age of Alternative Facts, was published by Open MIC, a nonprofit that works to promote shareholder engagement in media and technology companies. Open MIC finds that these internet companies need to demonstrate “substantive and long-term” leadership and implement new corporate policies and practices to combat these issues.
“In a relatively short span of time these companies have eclipsed traditional, old school media as principal sources of news and information for most of the public and have morphed from technology platforms to brokers of content and truth on a global scale,” the Open MIC report says. “These companies use algorithms that have an extraordinary impact on how billions of people consume news and information daily.”
Fabricated content and hate speech also present online platforms with a number of new business challenges, according to the report. These include a loss of advertising revenue, potential legal liability, calls for increased regulation and concerns about threats to users’ freedom of expression.
“These issues are much more than a marketing problem for Facebook, Google and other companies,” said Michael Connor, executive director of Open MIC. “Disinformation and hate speech on the internet represent a threat to public trust in information, a threat to democracy – and real long-term business challenges for these companies.”
Shareholder resolutions asking Facebook, Inc. and Alphabet Inc., parent of Google, to provide investors with reports on these issues are scheduled to be voted on at the companies’ upcoming annual meetings in June. The boards of both companies oppose the proposals.
The Open MIC report presents some of the latest thinking regarding the challenges of deceptive content and online hate speech; analysis of the legal, reputational and financial risks to companies; and recommendations for developing greater corporate accountability and transparency on these issues.
It includes data and statements from researchers and technologists, including Sir Tim Berners-Lee, the computer science visionary widely credited with the invention of the World Wide Web.
“We must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while avoiding the creation of any central bodies to decide what is ‘true’ or not,” Berners-Lee said. “We need more algorithmic transparency to understand how important decisions that affect our lives are being made, and perhaps a set of common principles to be followed.”
Rebecca MacKinnon, director of the Ranking Digital Rights project, a nonprofit initiative that evaluates the world’s most powerful internet, mobile and telecommunications companies’ practices affecting user rights, says tech companies should adopt an “impact assessment model” for evaluating information policy solutions for the private and public sectors.
Open MIC's report compiles a series of recommendations suggesting that companies should develop stronger transparency and reporting practices, implement impact assessments on policies affecting content, and establish clear board-level governance on these issues.
Specifically, the report outlines the following recommendations:
- To avoid government regulation or corporate censorship of information, tech companies should carry out transparent and accountable impact assessments of their information policies, and provide an avenue of remedy for those affected by corporate actions.
- Tech companies should appoint ombudspersons to assess the impact of their content algorithms on the public interest.
- Tech companies should report at least annually on the impact their policies and practices are having on fake news, disinformation campaigns and hate speech. Reports should include definitions of these terms, metrics, the role of algorithms, the extent to which staff or third parties evaluate claims of fabricated content, and strategies and policies to manage the issues appropriately without negatively impacting free speech.
For more details visit http://fakenews.openmic.org/.