
Information Disorder Analysis Centre

Facebook Fact-Checking Policy Framework and Nigerian Politicians: A Critical Reflection


Summary

Against the backdrop of elections around the world in 2020 and the challenges posed by political dis- and misinformation, stakeholders have raised concerns about the implications of this period for the political process. Multiple approaches have been adopted to address these challenges in the digital public space, which is seen as a catalyst for the spread of false information and political propaganda. Facebook is one of the tech companies that provide platforms for the dissemination of political messages, and it has sustained collaboration with fact-checking organisations to flag and label content on its platforms as a way of bringing sanity to the digital public sphere. Following concerns raised by stakeholders about the negative use of Facebook by politicians on the one hand, and the need, on the other, to promote the principle of a “free marketplace of ideas,” Facebook has been changing its policy to satisfy these two opposing demands. This study therefore undertook an assessment of Facebook’s policy on fact-checking politicians and reflected on whether this policy influences the editorial content of fact-checking organisations in Nigeria (Africa Check, Dubawa and AFP Hub), all of which are signatories to the International Fact-Checking Network (IFCN).

The study reviewed Facebook’s policies on fact-checking politicians in relation to the literature on the subject, with reference to two of the five IFCN principles related to the policy. Given that it is the responsibility of the media to hold the government accountable, as enshrined in the constitution, the study sought to understand the influence of Facebook’s policy on fact-checking politicians on the editorial content of fact-checking organisations in Nigeria. One of the questions that arose from the assessment of the policy is whether fact-checking the claims of politicians and political ads amounts to “censoring or stifling political discourse.” While reflecting on the above concerns of stakeholders, the study concluded that Facebook finds itself in an ethical dilemma. It also submitted that, in view of the available evidence, the Facebook policy on fact-checking politicians has no noticeable influence on the editorial content of the fact-checking organisations, despite Facebook’s funding of some fact-checking organisations in Nigeria. The two IFCN principles that served as yardsticks for this conclusion are “Commitment to Non-partisanship and Fairness” and “Commitment to Transparency of Funding and Organisation.”

The study also observed and recommended as follows:

  • The changes in Facebook’s policy on fact-checking political claims and ads on its platform were an attempt to balance entrenching free speech with bringing sanity to a digital public space often weaponised for political gain.
  • The exceptions given to politicians and political ads in respect of fact-checking could be termed discriminatory, as a similar opportunity is not available to other users of Facebook. Without prejudice to the ethical decision adopted by Facebook, it would be instructive to have a general policy binding on all users of Facebook, irrespective of their political, religious, economic, and social status.
  • There is also a need for international discourse on the challenges occasioned by Facebook’s ethical dilemma and information disorder in the world. The United Nations and the IFCN have a role to play in providing leadership for this engagement. The call for a UN charter on political disinformation and other variants of information pollution is not out of place. Such a charter, if enacted, is expected to put to rest the debate over the desirability of a legal framework versus voluntary intervention to address the challenge of dis- and misinformation, especially in the digital public space.
  • Facebook and other tech companies need to invest more in media literacy, especially information and digital literacy, to empower social media users to develop defence mechanisms against all forms of disinformation on their platforms.
  • In addition, the IFCN should sustain its efforts at ensuring that fact-checking organisations across the world are guided by its code of five principles in order to sustain integrity in the fact-checking landscape.
  • There is a need to further investigate the severity of political dis- and misinformation in the political process and to measure the extent to which the interventions of tech companies have reduced the negative effects of information disorder on the political system.
  • Further studies should also undertake content analysis of political ads on Facebook and other social media platforms to understand the severity of political disinformation on these platforms.

Introduction

Freedom of expression is a founding principle for Facebook in its mission to give people opportunities to express themselves. As Nick Clegg put it: “This objective has been at the heart of everything we do. We are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about. That’s why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.

It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real world violence and harm” (Clegg 2019).

The relationship between political and information power is as old as humanity. There have always been arguments that there can never be political power without influencing public opinion. There are also assertions that information power can have greater impact than political power. Or better still, there cannot be political power without tools to control the minds or influence the masses. The media have been identified as potent tools for achieving political power.

Former American President, Thomas Jefferson, considered the power of the media as more influential than political power:

… Since the basis of democracy was the opinion of the people, the very first objective was to keep it right. If it were left for me to decide whether we should have a government without the press or the press without the government, I should not hesitate a moment to choose the latter (Cited in Akinfeleye, 2005, p. 4).

In the same vein, Jim Morrison acknowledged the latent power of the media when he asserted: “Whoever controls the media, controls the mind.” In reference to this, Abubakar (2015) further emphasised: “Little wonder why most influential people and institutions in the world cannot do without the media. In fact, it’s often believed that nobody or ideology can either fight the media or do without the media.” It is with this realisation that political leaders give great preference to information management in their quest for political power. Communication researchers have conducted extensive research on political propaganda. In fact, one of the factors that influenced the growth of mass media research, according to Esiri & Onwubere (n.d.), was the extensive use of propaganda during the First World War.

This development made political players adopt propaganda as a tool for achieving their goals, and it has become part of the political process worldwide. Advancements in technology have further fortified political information flows with different tactics and strategies for influencing public opinion. However, the avalanche of dis- and misinformation, especially in the digital public sphere, has remained a source of concern to stakeholders in the information ecosystem. The more innovative technologies are developed, the more these digital tools are adopted and deployed to weaponise the information system to achieve specific agendas.

Political communication received a further boost with the advent of the internet, which provides political players with access to a wide range of audiences. It became a veritable instrument for disseminating political information to a heterogeneous and dispersed audience. For example, the internet provides the platform through which politicians reach the electorate; it also offers an opportunity for the government to communicate with the masses. That is why Badmus, Okaiyeto & Mustapha (2020) asserted that the internet engendered pro-democracy potential through platform accessibility, openness and freedom.

The use of social media platforms during the electoral process is now embraced in many countries of the world. Since 2016, when the adoption of these platforms during the US election drew the attention of politicians and researchers, the platforms have gained considerable acceptance around the world. Researchers have interrogated the extent of their usage during the political process and have also asked whether engagement on the platforms influences election results. All this is set against the background of the platforms’ widespread adoption.

In Nigeria, the government has repeatedly complained about the danger information disorder poses to the country’s socio-political and economic stability, and has mooted the idea of regulating social media by taking steps to criminalise the spread of dis- and misinformation in the digital public sphere through the enactment of laws. This includes a proposal in a bill before the National Assembly prescribing the death sentence for anyone convicted of spreading disinformation on social media. The bill was short-lived, following criticisms and allegations that the political class are themselves heavy users of social media platforms for the spread of disinformation, and that any reform should therefore start with a government which equates “hate speech” with “fake news.” This is also against the backdrop that politicians used political disinformation extensively during elections. The preliminary report of CDD on the 2020 Edo State Governorship election in Nigeria provides evidence of the use of political disinformation by political parties and their candidates during the electoral process (CDD, 2020). The avalanche of dis- and misinformation during the #EndSARS protest has further emboldened advocates of social media regulation. The Minister of Information and Culture, Alhaji Lai Muhammed, recently reiterated the need for the National Assembly to revive the proposed social media regulations (Baiyewu, 2020).

Tech companies, especially Google, Facebook, WhatsApp, and Twitter, are seen as wielding political power and influence owing to the enormous reach of their platforms to desired political audiences. From the alleged Russian influence in the 2016 United States presidential election to the controversy surrounding Cambridge Analytica, tech companies have featured prominently in arguments about their contribution to election manipulation.

There have been continuous accusations that the social media platforms weaponise political dis- and misinformation by giving purveyors of disinformation a platform to spread their messages. This has led to advocacy for reforms, including legal and voluntary measures, to address the challenge. As pressure mounted on tech companies to be socially responsible, some of them devised strategies to discourage and reduce the spread of dis- and misinformation on their platforms. For example, WhatsApp configured its messaging platform to indicate “forwarded” messages and limited forwarding to five chats. Twitter developed algorithms to flag manipulated pictures and videos. Governments around the world have also adopted different approaches to regulating social media, including banning platforms, filtering content, and criminalising the spread of dis- and misinformation on the internet, among others.

One of the approaches adopted by Facebook is collaboration with fact-checking organisations around the world to check the spread of dis- and misinformation, including claims made on its platform by Politically Exposed Persons. Facebook entered into partnership with fact-checking organisations to prevent the use of its platform for the spread of false information. However, in 2019 the tech company amended its fact-checking principles to exclude statements or claims by politicians from what can be fact-checked on its platform. In September 2020, Facebook also updated its Terms of Service to allow it to remove or restrict access to users’ content.

In Nigeria, Facebook has partnered with Africa Check, Dubawa, and AFP to flag misinformation on its platform. What are the implications of the new Facebook policy framework for the editorial content of these fact-checking organisations in Nigeria?

Overview of Fact-Checking on Facebook

In its bid to fight the spread of misinformation, Facebook works with independent, third-party fact-checking organizations, certified through the non-partisan International Fact-Checking Network (IFCN), to review and assess content on its platforms. The focus of this fact-checking program is to identify and address viral misinformation, particularly clear hoaxes that have no basis in fact. Fact-checking partners prioritize provably false claims, especially those that are timely or trending and important to the average person (Facebook, 2020a).

How the program works

The Facebook programme includes several key steps (Facebook, 2020a):

  • Identifying false news: We identify potential misinformation using signals, like feedback from people on Facebook, and surface the content to fact-checkers. Fact-checkers may also identify content to review on their own.
  • Reviewing content: Fact-checkers will review content, check its facts, and rate its accuracy. This happens independently of Facebook, and may include calling sources, consulting public data, authenticating videos and images, and more.
  • Clearly labeling misinformation and informing users about it: We apply a label to content that’s been reviewed by fact-checking partners, so people can read additional context. We also notify people before they try to share this content, and people who have shared it in the past.
  • Ensuring that fewer people see misinformation: Once a fact-checker rates a piece of content as False, Altered or Partly False, it will appear lower in News Feed, be filtered out of Explore on Instagram, and be featured less prominently in Feed and Stories. This significantly reduces the number of people who see it. We also reject ads with content that has been rated by fact-checkers.
  • Taking action against repeat offenders: Pages and websites that repeatedly share misinformation rated False or Altered will have some restrictions, including having their distribution reduced. They may also have their ability to monetize and advertise removed, and their ability to register as a news Page removed for a given time period.
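
The five steps above describe, in effect, a content-moderation pipeline: content is surfaced, independently rated, labelled, demoted, and its source penalised on repeat offences. The sketch below is a purely illustrative model of that workflow, not Facebook's actual implementation; the rating labels are taken from the policy text, but the `Post` structure, function name, and fields are hypothetical assumptions.

```python
# Illustrative sketch of the fact-checking workflow described above.
# The rating labels come from the policy text; the data structure and
# function names are hypothetical assumptions, not Facebook internals.

from dataclasses import dataclass
from typing import Optional

# Ratings that trigger labelling, demotion, and ad rejection per the policy.
DEMOTED_RATINGS = {"False", "Altered", "Partly False"}

@dataclass
class Post:
    text: str
    rating: Optional[str] = None   # set by an independent fact-checker
    label: Optional[str] = None    # warning label shown with the post
    demoted: bool = False          # appears lower in News Feed if True
    ad_eligible: bool = True       # rated content is rejected from ads

def apply_fact_check(post: Post, rating: str) -> Post:
    """Record a fact-checker's rating, then label the content,
    demote its distribution, and reject it from advertising."""
    post.rating = rating
    if rating in DEMOTED_RATINGS:
        post.label = f"Rated {rating} by independent fact-checkers"
        post.demoted = True
        post.ad_eligible = False
    return post

post = apply_fact_check(Post("viral hoax claim"), "False")
print(post.label)    # Rated False by independent fact-checkers
```

A repeat-offender mechanism, as in the final step above, would additionally track ratings per Page over time and reduce that Page's overall distribution and monetisation once some threshold is crossed.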

Why Facebook Collaborates with Fact-checking organisations

As a response to allegations that Facebook is encouraging purveyors of disinformation to spread their messages through its platform, the tech company announced partnership with fact-checking organisations in the world to flag dis and misinformation on its platforms as a way of addressing challenges of information disorder. Facebook said its “global fact-checking program is a key piece of our strategy to reduce the spread of misinformation on Facebook, and we now have more than 50 partners fact-checking in over 40 languages around the world” (Goldshlager 2020). 

The collaboration includes fact-checking claims made by public officials, including politicians. Through the partnership, Facebook users and fact-checkers are equipped with tools to identify potentially false or manipulated messages on the platform – text, pictures, audio or video. The collaboration enables fact-checkers to flag disinformation, and Facebook either removes the content or limits the extent of its spread. The details are contained in its post titled “Facebook’s Approach to Misinformation: Partnering with Third-Party Fact-Checkers” (Facebook, 2020b).

How does Facebook select its fact-checking partners?

Facebook’s independent third-party fact-checkers are all certified by the International Fact-Checking Network (IFCN). The IFCN, a subsidiary of the journalism research organization Poynter Institute, is dedicated to bringing together fact-checkers worldwide (Facebook, 2020b).

Facebook identified countries in which it collaborated with third-party fact-checking organisations. According to the map as contained on the Facebook website, Nigeria is one of the countries identified, with Africa Check, AFP Hub and Dubawa as fact-checking organisations mapped to review and rate the accuracy of contents on Facebook. The tech company explained that the fact-checking programme or approach to integrity on Facebook, “is one part of the three-part approach we take to addressing problematic content across the Facebook family of apps.” (Facebook, 2020c).

As a way of addressing problematic content on its platform, Facebook announced on 11th August, 2020 “two new ratings to provide our fact-checking partners with more latitude to better reflect their research, and to help ensure that people who come across these posts have more precise information to judge what to read, trust, and share.” According to Goldshlager & Berman (2020), the first rating, called “Altered,” is designed specifically for videos and images that have been manipulated in ways that could mislead people. The second rating, called “Missing Context,” is designed for content that may mislead without additional context. Facebook further explained that it will apply the strongest warning labels to content rated either “False” or “Altered,” as this makes up the worst kinds of misinformation. It added that it will reduce the distribution of content rated “Partly False” because it includes some factual inaccuracies.
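
The rating labels discussed above, and the strength of action attached to each, can be summarised as a simple mapping. This is a simplified reading of the policy as described by Goldshlager & Berman (2020), not Facebook's code; the function name and the wording of the actions are assumptions made for illustration.

```python
# Hypothetical mapping from the rating labels described above to the
# platform actions the policy attaches to them. The labels are from
# Goldshlager & Berman (2020); the action wording is a simplification.

RATING_ACTIONS = {
    "False":           "strongest warning label; distribution reduced",
    "Altered":         "strongest warning label; distribution reduced",
    "Partly False":    "lighter warning label; distribution reduced",
    "Missing Context": "context label; readers pointed to fact-checks",
}

def action_for(rating: str) -> str:
    """Look up the action for a rating; unreviewed content is untouched."""
    return RATING_ACTIONS.get(rating, "no action (not reviewed)")

print(action_for("Altered"))   # strongest warning label; distribution reduced
```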

Prelude to the Policy of fact-checking politicians

Studies have found that technological advancements such as the internet and social media have, overtly or covertly, empowered the purveyors of dis/misinformation. Ogbuoshi, Oyeleke & Folorunsho (2019, p. 67) subscribed to this view when they submitted that “fake news is currently trending as a result of the liberty and luxuries of technological inventions.” Technological tools and platforms such as social media, according to Wardle (2019, p. 6), have become potent instruments in the hands of purveyors of disinformation.

Ibraheem & Garba (2019), Ogbuoshi, Oyeleke & Folorunsho (2019) and ACSPN (2018) interrogated issues related to political processes, with emphasis on the extent to which hate campaigns fuelled by dis/misinformation are used during the electoral process in Nigeria.

For instance, Ogbuoshi, Oyeleke & Folorunsho (2019, p. 47) explained the extent to which this phenomenon has eaten deep into the fabric of Nigeria and its implications for the corporate existence of the country:

Today, the Nigerian polity is so heated up that, from all political divides, there has been a resort to hate campaigns and speeches. There are no arguments as to how politicians have resorted to divisive comments, insinuations and innuendoes. Not only have these hate speeches pitched the North against the South, but individual hatred has attained an all-time height in Nigeria.

Newman (2019) predicted that the “spread of false, misleading and extreme content will continue to undermine democracies around the world with polarising elections in India, Indonesia and Europe likely flashpoints.” While announcing new ratings for fact-checking partners on 11th August, 2020, Facebook observed that the challenges of COVID-19 and elections around the world made it “realize how important it is for people to understand what they’re seeing when they’re using our services and then judge its worth” (Goldshlager & Berman 2020).

In response to this, tech companies such as WhatsApp, Facebook and Twitter are collaborating with Fact-Checking organisations to spell out policies to combat dis/misinformation on their platforms (Ochelle 2018, News Agency of Nigeria 2019, Stencel 2019).

Facebook has denied political interference in the politics of nations across the world, following the controversies generated by its relationship with Cambridge Analytica and the alleged Russian influence in the US elections. Nick Clegg, the vice president of Facebook, said in 2019 that the company had “made mistakes in 2016,” adding that Russia had attempted to use “Facebook to interfere with the US presidential election.” He noted, however, that the company had taken steps to prevent it happening again (BBC, 2019).

As a way of further enhancing the integrity of its platform, Facebook developed tools to assist fact-checkers and other stakeholders to aid verification of contents on its platform. For example, it developed “CrowdTangle”, a public insights tool that makes it easy to follow, analyze, and report on what’s happening across social media. Facebook acquired CrowdTangle in November, 2016, made the tool free, and expanded access from 300 media partners to more than 10,000. Starting in 2019, CrowdTangle added research, academic and fact-checking partners, and built tools to help these partners study how public content spreads across Facebook and Instagram (Miles, 2020). 

Similarly, Facebook has taken other measures to address the challenges posed by dis- and misinformation on its platforms. These include cracking down on fake accounts – the main source of fake news and malicious content; bringing in independent fact-checkers to verify content; recruiting an army of reviewers – now about 30,000; and investing heavily in artificial intelligence systems to take down harmful content. Nick Clegg, the vice president of Facebook’s global affairs and communications, said these efforts are yielding results, as a Stanford report found that interactions with fake news on Facebook were down by two-thirds since 2016. He added that the company launched an initiative called the Deepfake Detection Challenge, working in partnership with AI companies to find ways to detect new forms of manipulated content (video) and take necessary action:

Crucially, we have also tightened our rules on political ads….People who want to run these ads now need to submit ID and information about their organization. We label the ads and let you know who’s paid for them. And we put these ads in a library for seven years so that anyone can see them (Clegg 2019).

Meanwhile, as fact-checkers spring into action debunking claims made on Facebook throughout the world, especially during political processes such as elections, politicians and other public officials are put in the spotlight. The aftermath is that politicians fight back by accusing the platform and fact-checkers of partisanship. Some countries have also threatened to ban social media platforms after claims by public officials were fact-checked. Tech companies came under pressure from politicians as fact-checking activities suffocated and reduced the influence of political disinformation. For instance, President Donald Trump proposed regulation of social media platforms in the US following the flagging of some of his social media posts. The US Justice Department on 23 September, 2020 “unveiled a legislative proposal that seeks to reform a legal immunity for internet companies. This follows President Trump’s bid earlier this year to crack down on technology giants providing internet and social media platforms” (Odih, 2020).

In addition, critics have petitioned the International Fact-Checking Network (IFCN) with allegations of bias against fact-checking organisations. For instance, the IFCN in a statement on 11th September, 2019 announced that “one of its verified signatories is being accused of violating IFCN’s Code of Principles on its first commitment to non-partisanship in one of the fact-checks it has published.” In Nigeria, a fact-checker with Africa Check reported that politically exposed persons in the country accused him of tagging politicians and public officials as liars by fact-checking their claims (Folarin, 2020).

Social Media Reform, Control & the Zig-Zag Perception of “Fake News”

As a result, some countries have responded to fact-checking efforts by initiating policies to frustrate the activities of fact-checkers. Folarin (2020) identified political interference as one of the challenges confronting media organisations on the frontline of combating information disorder in Nigeria. Apart from the allegations submitted to the IFCN, political office holders have also called out media organisations that debunk their claims.

In Nigeria, the government has been criticised by civil society organisations for proposing laws, tagged the “Anti Social Media Bill,” to regulate social media in the country. Observers have also expressed concern about the conflation of “hate speech” and “fake news” in the government’s approach to combating information disorder. David Ajikobi, the Editor of Africa Check, believes that putting “fake news” in the same bracket as “hate speech” will be counterproductive. Mr Dapo Olorunyomi, the founder of Premium Times, identified the mislabelling and misperception of the variants of information disorder as a challenge to addressing the problem. Olorunyomi, who is an editorial member of the IFCN, termed this phenomenon a “zig-zag” approach to addressing the challenge of information disorder. He observed that the power of algorithms has a lot of influence on information management. He also expressed worry that the government errs by framing misinformation as a media industry issue that requires journalists to advance regulations and policies to address it. He added that issues around the “colonisation of the algorithm” could be resolved through “accountability of the algorithm” (Olorunyomi, 2020).

Daily Tracker (2020) reported that the Federal Government of Nigeria proposed to spend 336 billion Naira on a special enlightenment campaign on government policies and programmes, on testimonial series to gauge the impact of government policies on the citizenry, and on tackling hate speech and fake news, amid criticism from rights groups that the plan is a ruse to stifle free speech.

In July 2018, the Federal Government launched a national campaign against “fake news,” with the Minister of Information and Culture, Alhaji Lai Mohammed, saying: “The biggest challenge facing Nigeria today is fake news and misinformation. Based on that, we dedicated an entire National Council on Information meeting to that issue” (Baiyewu, 2020). More recently, following the challenges of dis- and misinformation on social media platforms during the #EndSARS protest in October 2020, with allegations that the mainstream and online media are not immune from the spread of manipulated content, the National Broadcasting Commission imposed sanctions running into millions of Naira on some traditional media outlets for alleged infractions of the NBC code (Premium Times, 2020).

The above statement of the problem summarises political interference in the fact-checking ecosystem, an interference whose extension is already evident in the conduct of tech companies.

Place of Accountability Journalism & Fact-Checking in the Face of Facebook Policy 

“The code of principles of the International Fact-Checking Network (IFCN) at Poynter is a series of commitment organizations abide by to promote excellence in fact-checking. We believe nonpartisan and transparent fact-checking can be a powerful instrument of accountability journalism” (IFCN, 2020a).

According to the IFCN’s code of principles, the role of fact-checking organisations around the world is not only to verify the claims of public officials and institutions but also to hold them accountable to the masses. If this is the case, will the policy of Facebook frustrate this effort in Nigeria’s fact-checking ecosystem? Section 22 of the 1999 Constitution of the Federal Republic of Nigeria (as amended) gives the media the constitutional responsibility to hold the government accountable to the people. Olorunyomi (2020) referred to “accountability media” as including the duty of the media to hold the government accountable at all times. He submitted that at the heart of the accountability matrix is the investigative principle, adding that the three critical questions to consider while envisioning accountability journalism are: how is content produced? how is it distributed? and how is the media financed?

Assuming it is the responsibility of the media to hold the government accountable as enshrined in the constitution, what influence will the Facebook policy on fact-checking politicians have on the editorial content of fact-checking organisations in Nigeria?

Reflection on IFCN Principles with Facebook Policy of Fact-Checking Politicians

Poynter set up the International Fact-Checking Network (IFCN) to bring fact-checking organisations across the world together under one platform, as a way of promoting professionalism and achieving the objectives for which these organisations were established.

Fact-checking organisations around the world are encouraged to become signatories to the IFCN code of principles and are expected to be committed to those principles. The IFCN expects the organisations to “regularly publish non-partisan reports on the accuracy of statements by public figures and prominent institutions and other widely circulated claims related to public interest issues.” The five principles the signatory organisations are expected to commit to are: a commitment to Non-partisanship and Fairness; a commitment to Standards and Transparency of Sources; a commitment to Transparency of Funding and Organisation; a commitment to Standards and Transparency of Methodology; and a commitment to an Open and Honest Corrections Policy (IFCN, 2020b).

Political Factors and the Need for Sanity on Platforms Influenced the IFCN Code

The IFCN website documents the circumstances that led to the development of guidelines for the operations of fact-checking organisations in the world. First, the code was a response to an attempt by “political campaigns and partisan groups” to determine the yardstick for what should “distinguish reliable fact-checking.” The second factor was the resolve of the technology companies to bring sanity to the digital public space, a response to the allegation that the companies encourage and provide the platform for the spread of dis- and misinformation (IFCN, 2020). This is noted under the heading “The code and the platforms” on the IFCN website and is presented below (IFCN, 2020c):

In the months before the US presidential election in 2016, fact-checkers from around the world began to discuss the need to adopt a code of principles. It was during the third global fact-checking summit, held in Buenos Aires in November 2016, that the idea emerged for the first time. The objective was to come up with a set of guidelines that could help distinguish reliable fact-checking at a time when political campaigns and partisan groups tried to claim the term.

On November 12, 2016, Facebook’s founder, Mark Zuckerberg, wrote in a post that it was “extremely unlikely hoaxes changed the outcome” of the election “in one direction or the other.” Twenty fact-checkers from around the world responded with an open letter suggesting that Zuckerberg start a conversation that could “underpin a more accurate news ecosystem on its news feed.” Soon after Zuckerberg’s letter, technology companies launched their own efforts and projects to fight misinformation, and a year after that exchange, representatives from the main social media platforms testified before the American Senate about the possibility that viral misinformation had an actual influence on the presidential election.

Facebook will not fact-check politicians

In a September 25, 2019 report on its website, the BBC covered Facebook’s announcement that politicians would be exempted from its fact-checking scheme, which is designed to “reduce the spread of fake news and misinformation.” The tech company said it would treat all posts by politicians as “newsworthy content” that should “be seen and heard”, as it did not want to be the “referee” in political debates or prevent politicians’ posts from reaching their intended audience (BBC, 2019).

Speaking at the Atlantic Festival in Washington DC in 2019, Facebook’s vice president of global affairs and communications, Nick Clegg, “set out the measures that Facebook is taking to prevent outside interference in elections and Facebook’s attitude towards political speech on the platform” (Clegg, 2019). The Facebook VP explained the policy:

Now, elections. It is no secret that Facebook made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation. But we’ve learned the lessons of 2016. Facebook has spent the three years since building its defenses to stop that happening again. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.

“Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard,” Nick Clegg further declared, adding that Facebook would let politicians post content that breaks its rules on topics such as hate speech, as long as it did not pose a significant “risk of harm”.

He, however, outlined exceptions to the policy: “Broadly speaking they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.”

One of the questions arising from the assessment of Facebook policy is whether fact-checking the claims of politicians amounts to “censoring or stifling political discourse”. A related question is how fact-checking organisations would hold public officials accountable if claims made by politicians on Facebook could not be fact-checked, especially when those claims are geared towards spreading disinformation.

This is how the VP of Facebook responded to these concerns:

I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse. Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves  (Clegg 2019).

While making clarifications on the eligibility of content to be fact-checked on its platform, Facebook maintained that opinion is “generally not eligible for fact-checking because we don’t want to interfere with individual expression.” It explained the only exception: if “content is presented as opinion but is based on underlying false information – even if it’s an op-ed or editorial – it’s still eligible to be fact-checked,” arguing that “presenting something as opinion isn’t meant to give a free pass to content that spreads false information” (Goldshlager & Berman, 2020).

Relating this approach to Facebook’s policy on fact-checking politicians, Poynter quoted Summer Chen, editor-in-chief of Taiwan Fact-Check Center, saying: “My understanding (until today) was that we could not fact-check politicians’ opinions even if they brought false information. We surely would have liked to have flagged some.” Poynter, however, was quick to add in its report that a “spokesperson for Facebook clarified that statements from politicians are still exempt from the third-party fact-checking program” (Tardáguila & Mantas, 2020).

In Nigeria, when Adedeji Adekunle of Premium Times Center for Investigative Journalism was asked about this issue, he responded on Twitter thus: “I believe as long as factual assertions are being made, they can be checked. We don’t fact-check ‘statements of mere opinion without factual basis’ that doesn’t mean opinion pieces or columns. They shouldn’t use their platforms to misinform.”

Despite Facebook’s policy of not fact-checking political ads and statements posted by politicians on its platform, Timberg & Ba Tran (2020) reported in the Washington Post that fact-checkers involved in Facebook’s third-party fact-checking programme ruled that “claims in Trump ads are false”, but no such finding or related label was reflected on Facebook.

In response to this policy, fact-checkers offered suggestions on how Facebook could handle potential misinformation ahead of the US elections in November 2020. Mantas (2020), in a news story posted on Poynter, articulated the views of stakeholders who support Facebook’s policy of labelling political ads alongside those with reservations.

Facebook later made changes to the policy by emphasising the exceptions it had earlier placed on its decision not to fact-check claims made by politicians and political ads. Facebook’s CEO, in a post on September 3, 2020, said that the tech company would start labelling “content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods,” as he expressed worry that “with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country.”

“Today, we’re announcing additional steps we’re taking at Facebook to encourage voting, connect people with authoritative information, and fight misinformation,” Zuckerberg declared.

Some of the steps announced by the CEO of Facebook to combat misinformation before, during and after electoral processes are:

  • Blocking new political ads during the final week of the campaign;
  • Working with election officials to remove misinformation about voting;
  • Limiting forwarding on Messenger to reduce the risk of misinformation and harmful content going viral;
  • Partnering with Reuters and the National Election Pool to provide authoritative information about election results; and
  • Attaching an informational label to content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods.

Zuckerberg concluded by promising not to “make further changes to our election-related policies between now and the official declaration of the result,” as a way of ensuring that there are “clear and consistent rules.”

The Week Magazine reported that Facebook decided to effect changes in its policy on political ads when the tech company’s “stock dropped more than eight percent, erasing roughly $50 billion from its market valuation, after Unilever, the European company behind brands such as Ben & Jerry’s and Dove, announced it would boycott Facebook ads through the end of the year over the amount of hate speech and divisive rhetoric on its platform” (The Week, 2020).

Reflecting on the complexity of Facebook’s policies on fact-checking politicians, the age-long debate about ethics in journalism comes to mind. First, Facebook finds itself in an ethical dilemma: whether to encourage freedom of expression by allowing politicians to express themselves freely on Facebook, or to prevent the platform from being used to spread dis- and misinformation. That is, Facebook wishes not to censor politicians and at the same time wants to bring sanity to its platform by denying purveyors of political disinformation a channel. The second issue raised by this development is whether to protect the Facebook community from exposure to information pollution or to let questionable content pass in the belief that users are capable of differentiating between fact and fiction.

That is why Oso, Olatunji, Odunlami & Adesina (2020) observed: “Interrogating the concepts of media freedom and governance in the context of a digital space may not be as simple as it seems, especially when one considers the inconclusiveness of the variables for conceptualizing and measuring media or press freedom.” Many interventions to resolve related ethical issues have both advantages and disadvantages. Should Facebook tilt towards the teleological approach (consequential – the end justifies the means) or deontological principles (non-consequential – let justice be done though the heavens fall) in addressing the ethical dilemma? Will public interest be the yardstick to determine the correctness of fact-checking politicians, or should Facebook adopt a relativist approach?

Facebook Policy and Compliance with IFCN Principles by Fact-Checking Organisations in Nigeria

The IFCN sets five guiding principles for the operations of fact-checking organisations in the world. As of October 2020, there were 106 fact-checking organisations that were signatories to the IFCN code of principles: 83 active, 13 under renewal and 16 expired (IFCN, 2020d). Out of about seven fact-checking organisations in Nigeria, only three (Africa Check, Dubawa and AFP) are signatories to the IFCN code of principles.

The two IFCN principles that are directly related to the issue are:

  • A commitment to Non-Partisanship and Fairness – Signatory organizations fact-check claims using the same standard for every fact check. They do not concentrate their fact-checking on any one-side. They follow the same process for every fact check and let the evidence dictate the conclusions. Signatories do not advocate or take policy positions on the issues they fact-check.
  • A commitment to Transparency of Funding and Organisation – Signatory organizations are transparent about their funding sources. If they accept funding from other organizations, they ensure that funders have no influence over the conclusions the fact-checkers reach in their reports. Signatory organizations detail the professional background of all key figures in the organization and explain the organizational structure and legal status. Signatories clearly indicate a way for readers to communicate with them.

How are the fact-checking organisations in Nigeria faring with respect to these principles, given that Africa Check, Dubawa and AFP are signatories to the IFCN code of principles and have, at the same time, entered into collaboration with Facebook? By the IFCN’s standard, these organisations adhere to the principles, which may be why they remain members. No formal report of a violation of the principle of Non-partisanship and Fairness has been recorded. For instance, Dubawa is explicit in its “Independence Policy”, which states:

While Dubawa acknowledges the right of every individual to contest for an elective position as well as support any candidate of their choice running for an elective position, it is our opinion that the independence of the platform may be called to question if team members are partisan. To this end, any team member who wants to contest elections or support/sponsor a candidate in any form either by active campaigning or donating to a campaign fund; shall resign his/her position on the team.

Concerning the second principle, fact-checking organisations’ commitment to “Transparency of Funding and Organisation,” Africa Check, Dubawa and AFP declare on their websites that they have a partnership with Facebook and that their funders do not have influence on their editorial content. For instance, Africa Check states this in clear terms under its “principles”:

We are transparent about our funding sources. If we accept funding from other organisations, we ensure that funders have no influence over the conclusions we reach in our reports. We detail the professional background of all key figures in our organisation and explain our organisational structure and legal status. We clearly indicate a way for readers to communicate with us.

Despite Facebook’s policy of not fact-checking politicians, Dubawa has been consistent in fact-checking the claims of politicians, especially during the 2020 elections in Edo and Ondo States. In fact, the fact-checking organisation dedicates a section of its website to politics, where all its fact-check content is archived. For instance, some of the stories published by the organisation around politics and elections are: Fact-Checking Atiku’s Claims on Nigeria’s Foreign Debt, GDP Growth Between 1999 and 2007; Establishment of Entrepreneurial Agency Claim by Akeredolu a Hoax; Quick Checks: Misleading Claims from the Ondo 2020 Election; Quick Checks: False Claims from the Ondo 2020 Election; #EdoDecides2020: The truths, lies and permutations in crucial election; Obaseki Exaggerates Edo’s 2016 Total Debt Figure; Ize-Iyamu’s claim and controversies around security votes in Nigeria.

In addition, Dubawa has another section on its website tagged “Facebook Checks” where stories related to fact-checking claims around politics can be tracked. Some examples are: Desmond Elliot and the thesis on godfathers; Are APC Thugs destroying Roads in the North because of White Paper?; Fake social media accounts use Trump as signifier for trash talk.

Africa Check does not seem to dedicate any section of its website to politically related content, but there are many cases of fact-checking around politicians and the political process in Nigeria. Some of them are: Nigeria’s population, economy and history: fact-checking Buhari’s independence day speech; Fact-checking Nigerian politician’s claims about new mobile phone tech: Is radiation from 5G ‘10,000 times’ that from 3G?; Covid-19 pandemic: Fact-checking former Nigeria VP Abubakar’s claims about the economy; Nigerian joblessness at all-time high? Checking a politician’s claims; Crisis in Nigerian schooling? Grading three claims by presidential hopeful; Buhari’s 2018 Democracy Day speech: 7 main claims under scrutiny; Has Nigeria’s president not built even a nursery school, as Jonathan aide claimed?; Has Nigeria’s ruling party brought ‘power to the people’? We fact-check electoral claims; President Goodluck Jonathan claims ‘Nigeria is not a poor country’. Is he correct?

Though AFP’s fact-checking of politicians was not as extensive as that of Dubawa and Africa Check, the news agency’s fact-check hub fact-checked claims by politicians and related issues. Some examples contained on its website are: Doctored TV report does not depict recent electoral violence in Nigerian state; Nigeria not donating $600m to Biden’s campaign over false Trump links to separatist group; Misleading video circulating online does not show ‘war’ amid Nigerian state election; Hoax ballot does not show Nigerian politician’s vote for rival; Not all of Nigeria’s external borrowings were paid off in 2007, as claimed, and neither was it totally debt free.

With the above assessment, would it then be correct to submit that fact-checking organisations are not influenced by their funders in the type of content they fact-check? In view of the above evidence, there is no noticeable influence of Facebook’s policy of not fact-checking politicians on the editorials of fact-checking organisations in Nigeria. Due to limitations in the methodology adopted to arrive at this conclusion, there is a need for more elaborate research to provide additional empirical data and evidence to support this finding or reject the hypothesis. Adeniran (2020) has raised the question of whether donor funding may have some influence on the fact-checking process.

Conclusion

This study assessed Facebook’s policy of not fact-checking politicians and reflected on whether this policy affects the editorial independence of fact-checking organisations in Nigeria, namely Africa Check, Dubawa and AFP Hub. The overview of the policy vis-à-vis the debate for and against it generated a series of questions and concerns.

One of the questions that arose from the assessment of Facebook policy is whether fact-checking the claims of politicians and political ads amounts to “censoring or stifling political discourse”. The study concluded that Facebook finds itself in an ethical dilemma. It also submitted that, in view of the available evidence, Facebook’s policy on fact-checking politicians has no noticeable influence on the editorial content of the fact-checking organisations, despite Facebook’s funding of some of the fact-checking organisations in Nigeria. The two IFCN principles that serve as yardsticks for this conclusion are: “A commitment to Non-partisanship and Fairness” and “A commitment to Transparency of Funding and Organisation”.

The changes in Facebook policy on fact-checking political claims and ads on its platform reflect an attempt to maintain a balance between entrenching free speech and bringing sanity to the digital public space weaponised for political and related wars. 

Meanwhile, the exceptions given to politicians and political ads in respect of fact-checking could be termed discriminatory, as a similar opportunity is not available to other users of Facebook. That is, if the content of general users of Facebook can be fact-checked, why the exemption for politicians? If there are assertions that political disinformation breeds other forms of dis- and misinformation, why should its purveyors be accorded immunity? Without prejudice to the ethical decision adopted by Facebook, it is instructive to have a general policy that is binding on all users of Facebook irrespective of their political, religious, economic, and social orientations.

The study also recognises the need for international perspectives on the challenges created by Facebook’s ethical dilemma and information disorder in the world. For example, there should be a role for the United Nations and the IFCN in providing appropriate leadership for this engagement. The call for a UN Charter on political disinformation and other variants of information pollution is not out of place. In view of this, the recommendation of Dr Kola Adesina of Crescent University calling on the UN to draft a Charter may be ripe for consideration, as suggested by Edema (2020), to “address the anti-social potentials of social media across the world while protecting individual’s rights to freedom of expression.” One key point that should be sacrosanct in the Charter is to allow signatories to the IFCN code of principles to determine what is termed dis- and misinformation and other labels in the information disorder, because related terms have come under heavy abuse by political actors and other interests. This Charter, if enacted, is expected to put to rest the debate over the desirability of a legal framework versus a voluntary approach to addressing the challenge of dis- and misinformation in the digital public space.

Many stakeholders seem to be favourably disposed to the voluntary approach, as attempts by governments in different countries to regulate social media have not been able to address the problem of information disorder. Now is the time for Facebook and other tech companies to invest more resources in information and digital literacy, to empower social media users to develop defence mechanisms against all forms of disinformation on their platforms.

In addition, the IFCN should sustain its efforts at ensuring that fact-checking organisations across the world are guided by its code of five principles, in order to continue entrenching integrity in the fact-checking landscape.

On the basis of the above, there is a need to further investigate the severity of political dis- and misinformation in the political process and to measure the extent to which the interventions of tech companies reduce the negative effects of information disorder in the political system. The hypothesis raised here is that political dis- and misinformation ranks above all other variables in the severity of information disorder. To understand the severity of political disinformation on these platforms, there is a need for further studies to explore content analysis of political ads on Facebook and other social media platforms.

This research is conducted for the Dubawa Fellowship programme (2020), and is supported by the Heinrich Boll Stiftung Foundation, to amplify the culture of truth and contribute to literature around information disorder.

References

Abubakar, R. (2015). Muslims and the Threats of the Media. Lagos: Salsabil Associates.

ACSPN (2018). Understanding Nigerian Media and Elections through Research: Analysis of the 2015 Presidential Election Campaign Messages, Association of Communication Scholars & Professionals of Nigeria; Canada University Press, Concord, Ontario, Canada.

Akinfeleye, R. (2005). Journalistic Integrity in Political and Economic Reporting. Paper presented at the Nigerian Press Council National Workshop on Reporting Politics and the Economy, Ajah-Epe, Lagos, Nigeria.

Badmus I. R., Okaiyeto S. A. & Mustapha L. K.(2020). Maximising the Potentials of Online Civic Forums for E-Governance in Nigeria. In Oso L., Olatunji W. O., Odunlami D. & Adesina K (Eds.). Media Freedom, National Interest and Good Governance. Association of Communication Scholars & Professionals of Nigeria (ACSPN) Book Series 5. Canada University Press, Canada.

Ibraheem I. A. & Garba K. A. (2019). Undermining the Freedom of Expression: the case of Fake News in Nigeria. In Owens-Ibie N., Oji M. & Ogwezi J (Eds.). Fake News and Hate Speech: Narratives of Political Instability. Association of Communication Scholars & Professionals of Nigeria (ACSPN) Book Series 4. Canada University Press, Canada.

Ogbuoshi L. I., Oyeleke A. S. & Folorunsho O. M. (2019). Opinion Leaders’ Perspectives on Hate Speech, Fake News Reporting and Nigeria’s Political Stability. In Owens-Ibie N., Oji M. & Ogwezi J. (Eds.). Fake News and Hate Speech: Narratives of Political Instability. Association of Communication Scholars & Professionals of Nigeria (ACSPN) Book Series 4. Canada University Press, Canada.

Olorunyomi D. (2020). Flattening the Misinformation Curve in the Digital Age. Comments at the 7th Annual and 1st Virtual Conference of the Association of Communication Scholars & Professionals of Nigeria (ACSPN) on Communicating Identities and Nationhood: Promoting Inclusive Democracy through Access to Information and Knowledge Societies, September 2–3, 2020.

Oso L, Olatunji R., Odunlami D. & Adesina K. (2020). Overview of Media Freedom and Governance in the Digital Age. In Oso L., Olatunji W. O., Odunlami D. & Adesina K (Eds.). Media Freedom, National Interest and Good Governance. Association of Communication Scholars & Professionals of Nigeria (ACSPN) Book Series 5. Canada University Press, Canada.
