Not all trends on social media are authentic. Unsettling as that may be, the comments under the last post you came across online might just have been placed by a fake social media account.
Bots, designed to run automated tasks over the internet, are now tools for social media misinformation agents, who generate thousands of fake accounts to sell an ideology, promote a particular movement or even provoke chaos in society.
This scheme uses multiple social media accounts or pages that conceal the identities of the people or organisations running them, in order to mislead or influence unsuspecting members of the public, mostly for political or financial reasons.
In 2020, Facebook announced it was aware of these activities, tagging them as “Coordinated Inauthentic Behaviour” (CIB), and as of April of the same year, the company reported it had removed thousands of accounts for such activity.
“We view influence operations as coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation,” Facebook says.
“There are two tiers of these activities that we work to stop: 1) coordinated inauthentic behaviour in the context of domestic, non-state campaigns (CIB) and 2) coordinated inauthentic behaviour on behalf of a foreign or government actor (FGI).”
Back in 2020, when the overall human rights situation in Myanmar deteriorated and the military enforced heightened restrictions on freedom of expression, Facebook reportedly removed over 425 pages, 17 groups, 135 Facebook accounts and 15 Instagram accounts in Myanmar for engaging in coordinated inauthentic behaviour.
The company added that, as part of its continuing investigations into this type of behaviour in Myanmar, it found that some seemingly independent news, entertainment, beauty and lifestyle articles were linked to the Myanmar military and appeared on pages Facebook removed for coordinated inauthentic behaviour.
Although Facebook has clearly announced that this kind of behaviour is not allowed on the blue app, such activity remains hard to track or even identify independently. However, in its misrepresentation policy, the company has said it does not want people or organisations creating networks of accounts to mislead others about who they are or what they are doing.
The activity is even more prevalent on Twitter, possibly because fake accounts are harder to pin down there and, probably, because the platform also allows bots (the main actors in coordinated inauthentic behaviour) to operate.
An Army of Bots Does the Job
New accounts with scant details are often part of a larger scheme on Twitter. In a filing in May this year, Twitter estimated that fewer than 5 percent of its monetizable daily active users in the first quarter were bots or spam accounts. Elon Musk felt otherwise, countering that around 20 percent of the accounts on Twitter are fake or spam accounts run by bots, and expressing concern that the number could be even higher.
On Twitter, bots are automated accounts that can do the same things as real human beings: send out tweets, follow other users, and like and retweet posts by others.
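For illustration, this is roughly what that automation looks like in practice: a minimal Python sketch using the tweepy library for the Twitter API. The credentials, user ID and tweet ID below are placeholders, not real values.

```python
# Minimal sketch of a bot automating the same actions a human performs.
# Assumes the tweepy library; all credentials and IDs are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="PLACEHOLDER",
    consumer_secret="PLACEHOLDER",
    access_token="PLACEHOLDER",
    access_token_secret="PLACEHOLDER",
)

# Send out a tweet -- no human at the keyboard.
client.create_tweet(text="Automated message pushed by a script, not a person.")

# Follow another user, then like and retweet a post, all programmatically.
client.follow_user(target_user_id=123456789)   # hypothetical user ID
client.like(tweet_id=987654321)                # hypothetical tweet ID
client.retweet(tweet_id=987654321)
```

A script like this can be looped across thousands of accounts, which is what turns a single harmless automation into an amplification network.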
An article by Bloomberg outlined:
“Spam bots use these abilities to engage in potentially deceptive, harmful, or annoying activity. Spam bots programmed with a commercial motivation might tweet incessantly in an attempt to drive traffic to a website for a product or service. They can be used to spread misinformation and promote political messages. In the 2016 presidential election, there were concerns that Russian bots helped influence the race in favour of the winner, Donald Trump. Spam bots can also disseminate links to fake giveaways and other financial scams. After announcing his plans to acquire Twitter, Musk said one of his priorities is cracking down on spam bots that promote scams involving cryptocurrencies.”
There is so little you can do
As election season heats up in Nigeria, there will surely be substantial activity online, including varying degrees of coordinated and inauthentic messaging. As things stand, tech companies have more to do; however, users may just have to be the ones to report and pinpoint these activities.
There is only so much an algorithm can do. Users will have to be more observant about the posts and tweets they come across, especially ones that carry the same words and message. On Facebook, checking a page's or group's transparency information, including its creation date and details, is a good starting point.
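That "same words and message" signal can even be checked mechanically. The following is a rough Python sketch, with invented sample data, that flags identical copy-paste posts coming from several distinct accounts, one simple indicator of coordinated amplification:

```python
# Rough sketch: flag identical copy-paste posts pushed by many different
# accounts. The sample data below is invented for illustration.
from collections import defaultdict

posts = [
    {"user": "acct_001", "text": "Vote for candidate X, the ONLY choice!"},
    {"user": "acct_002", "text": "vote for candidate x, the only choice!"},
    {"user": "acct_003", "text": "Vote for candidate X, the ONLY choice!"},
    {"user": "acct_004", "text": "I planted tomatoes this weekend."},
]

def normalise(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits don't hide a match.
    return " ".join(text.lower().split())

accounts_by_text = defaultdict(set)
for post in posts:
    accounts_by_text[normalise(post["text"])].add(post["user"])

# Flag any message posted verbatim by three or more distinct accounts.
for text, users in accounts_by_text.items():
    if len(users) >= 3:
        print(f"Possible coordination ({len(users)} accounts): {text!r}")
```

Real detection systems are far more sophisticated, but the underlying intuition is the same one an observant user applies: many unrelated accounts rarely say exactly the same thing by accident.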