Post by Bonobo on Oct 7, 2018 22:32:37 GMT 1
I have been reading the comment sections of news portals and forums for over 10 years. Many times I had the impression that something suspicious was going on, especially when topics concerned important Western or European issues. Now it has been proved that Russian state trolls are a real entity. They are often subtler than simple pro-Russian propaganda: their primary aim is to stir trouble and provoke chaos and enmity in European societies. A united Europe has always been considered a serious threat by Russian authorities, so it is obvious they are ready to do everything to break this unity. It will then be easier for the Kremlin to play European states off separately, according to its wishes and needs.
The same applies to the United States, which is feared and hated by Russians on a much larger scale than the EU.
www.wired.co.uk/article/brexit-russia-influence-twitter-bots-internet-research-agency
Here's the first evidence Russia used Twitter to influence Brexit
Russia-based Twitter accounts that targeted the US presidential election also used divisive and racist rhetoric in an attempt to disrupt politics in the UK and Europe
By Matt Burgess
Friday 10 November 2017
WIRED
Russian interference in Brexit through targeted social media propaganda can be revealed for the first time. A cache of posts from 2016, seen by WIRED, shows how a coordinated network of Russian-based Twitter accounts spread racial hatred in an attempt to disrupt politics in the UK and Europe.
A network of accounts posted pro- and anti-Brexit, anti-immigration and racist tweets around the EU referendum vote while also targeting posts in response to terrorist attacks across the continent. The accounts amplified their own messages to reach a greater audience, and their impact raises questions about the full extent of Russia's propaganda campaign.
In a small snapshot of what is likely to be a much bigger issue, 139 tweets from 29 accounts show Russian trolls using hashtags related to the Brexit vote, pictures of London Mayor Sadiq Khan, anti-Muslim language around European terror attacks and racial slurs against refugees.
The accounts, which were all confirmed as being Russian-backed by Twitter when it provided data to the US Congress, were followed by 268,643 people and some of their posts were retweeted hundreds of times. They were primarily created to disrupt the US presidential election but dabbled in wider issues around Brexit and European politics. All of the accounts have been suspended by Twitter.
The revelations come as a UK parliamentary inquiry begins its own investigations into Russian interference in the Brexit vote. Twitter and Facebook have both been asked to submit evidence of accounts directly related to the EU referendum and 2017's general election. Damian Collins, the leader of the parliamentary inquiry into fake news, says this first batch of tweets, though primarily focussed on US politics, "confirms what we've always believed" and that the accounts were trying to "influence political debate in the UK and also to incite hatred and turn communities against each other".
"I think it shows that Russian-controlled accounts have been politically active in the UK as well as America," Collins says. "This could just be the tip of the iceberg because we've only really just started looking and doing a proper detailed study of what accounts linked to Russian organisations have been doing politically."
The cache of tweets was provided by US security startup New Knowledge and was collected as part of a larger body of data on extremism online.
Targeting Europe
When a Muslim woman was photographed crossing Westminster Bridge in the wake of a terror attack, an image went viral. Instead of showing her horror at the incident, the picture of her looking at her phone was taken out of context.
"Muslim woman pays no mind to the terror attack, casually walks by a dying man while checking phone #PrayForLondon #Westminster #BanIslam," tweeted the account @southlonestar, the bio of which proclaimed the user was a "Proud TEXAN and AMERICAN patriot". The tweet was widely shared in news reports at the time.
@southlonestar was identified as a Russian account by Twitter in response to its US inquiries about the country's influence in the 2016 presidential election. In June 2016, the account, which had 16,826 followers, also tweeted: "I hope UK after #BrexitVote will start to clean their land from muslim invasion!" and "UK voted to leave future European Caliphate! #BrexitVote". These posts were made after the referendum vote.
"The account occasionally wades into a European political discussion, which is not what I would expect a domestically-focused Conservative Texan to do under any circumstance," says Jonathon Morgan, CEO of New Knowledge. Morgan explains his team was researching alt-right behaviour on Twitter in the build-up to the US election, collecting 7,500 tweets from 40 accounts, all of which were run as Russian propaganda tools.
All the tweets, including those seen by WIRED, were gathered at the time they were posted using Twitter's API. The accounts appear in the Russian Twitter list published by the US Democrats. Some of the Tweets are archived by the Internet Archive, with unique user and tweet identification numbers provided by New Knowledge.
Twitter says it works proactively to stop "bots" from posting content, tries to check suspicious content, and is working to improve how it detects single and clustered accounts created by suspicious sources, as well as accounts that break its terms and conditions.
Surprisingly, all the posts around Brexit in this small snapshot were posted after the June vote. They included: "Brits said NO to prison of multiculturalism! Happy Independence Day!" (from @rightnpr) and "Let's hope that #Brexit will help Julian Assange. But I have a feeling that US won't let this happen" (@jeblary2016, 8,054 followers).
"The accounts were focussed on posing as far-right activists in the US but they were using every opportunity they could to spread the pro-Kremlin message and its narrative," says Ben Nimmo, a senior fellow at the Atlantic Council, who has been researching Russian influence online and issues around the country's defence. "These Russian trolls are doing the same thing [as alt-right accounts], they are entirely in character but that's also the Kremlin narrative".
As well as focussing on European politics – one post from @southlonestar reads, "France is turning into a shithole thanks to weak immigration laws and EU regulations" – there is a strong anti-Muslim and anti-Islam approach. After the July 2016 Munich terror attack, in which nine people were killed, the Russian account @pamela_Moore13 tweeted, "Third attack in #Europe in 8 days. Multiple deaths in #Munich shooting. Europe is enjoying 'cultural enrichment'". @tpartynews (13,728 followers) tweeted "...Europe turns into Iraq! Very sad! #Munich #PrayForMunich".
Racial slurs were a popular theme: @priceforpierce, which had 1,841 followers, tweeted, "ill welcome a European to the USA any day, refugees are not welcome in my eyes #IslamKills #StopIslam". And @archieolivers (1,920 followers) wrote, "Have you seen the crimes many Syrian refugees R committing in Europe? #IslamKills #StopIslam".
After the terror attack on Brussels, which killed 31 people, @leroylovesusa (1,201 followers) tweeted using the city's hashtag and #IslamKills: "Why can't EU just close the borders?" The message was sent on March 22 2016, the same day that so-called Islamic State claimed responsibility for the killings. A number of other Russian accounts posted similar messages at the time.
"There's this consistent and relentless focus on anti-immigrant sentiment, attacking Muslims in particular. [They are] making this sort of broad-brush, blanket link between Islam and terrorism," says Morgan, who collected the data and built a system to explore connections between accounts.
All the accounts seen by WIRED posted using either the Twitter web client or TweetDeck. The @jenn_Abrams account had the most followers, 54,467, and the data pulled from the Twitter API shows some of the accounts were created as far back as 2013.
The fallout
On both sides of the Atlantic, Twitter, Facebook and Google have come under fire for allowing Russian-linked accounts to use their networks to run disinformation campaigns.
Senior representatives from the companies have been grilled by members of the US Congress on the extent of the problem. "I think that almost certainly when Twitter is pushed to reveal similar types of activity going on around Brexit, it will find a number of accounts that were behaving in a similar way," Morgan says, adding that he expects Russian propagandists specifically focussed on the UK to adopt right-wing British views, not right-wing American views.
At present, the UK inquiry into fake news is waiting to receive evidence back from Facebook and Twitter on UK-focussed Russian accounts. Collins, who is leading the inquiry, says both firms will take part in oral hearings by early-2018 at the latest. "I think we have a right to know if organisations in a foreign country, particularly in the case of Russia, are politically active," he says. "What is frightening, if you look at some of the studies that have come out of America, is just how many people a well run campaign can reach at a relatively low cost and how you can target and bombard people with highly partisan messages and fake news."
Separately, a study by academics at City University, reported on by BuzzFeed, found 13,000 Twitter bots were sending out pro-Brexit messages in the run-up to the vote. The bots were more likely to tweet pro-Leave rather than pro-Remain content. The Electoral Commission is also investigating whether Arron Banks, the prominent Leave campaigner, broke financial rules in the run-up to the vote as questions mount about where the money came from.
In the US, Russian adverts on Facebook reached 126 million Americans, almost half the country's population. On Twitter there were 2,752 accounts linked to Russia's Internet Research Agency and Google-owned YouTube found more than 1,000 videos from 18 accounts.
For Nimmo, this first glimpse at Russian interference in the Brexit vote is just the tip of the iceberg. "What are the people who were running those accounts doing now?" he asks. "It's unlikely they've put their feet on the desk and said, 'We've lost that one, now let's go home.'"
Russian trolls also take an active part in European and US vaccination discussions:
www.theguardian.com/society/2018/aug/23/russian-trolls-spread-vaccine-misinformation-on-twitter
Russian trolls 'spreading discord' over vaccine safety online
Study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, tweeting about vaccines
Jessica Glenza in New York
@jessicaglenza
Thu 23 Aug 2018 21.00 BST
Trolls used the vaccination debate to try to sow discord during the US election, researchers say.
Bots and Russian trolls spread misinformation about vaccines on Twitter to sow division and distribute malicious content before and during the American presidential election, according to a new study.
Scientists at George Washington University, in Washington DC, made the discovery while trying to improve social media communications for public health workers, researchers said. Instead, they found trolls and bots skewing online debate and upending consensus about vaccine safety.
The study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, as well as marketing and malware bots, tweeting about vaccines.
Russian trolls played both sides, the researchers said, tweeting pro- and anti-vaccine content in a politically charged context.
“These trolls seem to be using vaccination as a wedge issue, promoting discord in American society,” Mark Dredze, a team member and professor of computer science at Johns Hopkins, which was also involved in the study, said.
“By playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don’t respect national boundaries.”
The study, published in the American Journal of Public Health, comes as Europe faces one of the largest measles outbreaks in decades, one which has been partly attributed to falling vaccination rates. In the first six months of 2018, there were 41,000 cases of measles across the continent, more than in the entirety of 2017. Meanwhile, the rate of children not receiving vaccines for non-medical reasons is climbing in the US.
“The vast majority of Americans believe vaccines are safe and effective, but looking at Twitter gives the impression that there is a lot of debate,” said David Broniatowski, an assistant professor in George Washington’s School of Engineering and Applied Science.
“It turns out that many anti-vaccine tweets come from accounts whose provenance is unclear. These might be bots, human users or ‘cyborgs’ – hacked accounts that are sometimes taken over by bots. Although it’s impossible to know exactly how many tweets were generated by bots and trolls, our findings suggest that a significant portion of the online discourse about vaccines may be generated by malicious actors with a range of hidden agendas.”
Russian trolls appeared to link vaccination to controversial issues in the US. Their vaccine-related content made appeals to God, or argued about race, class and animal welfare, researchers said. Often, the tweets targeted the legitimacy of the US government.
“Did you know there was secret government database of #Vaccine-damaged child? #VaccinateUS,” read one Russian troll tweet. Another said: “#VaccinateUS You can’t fix stupidity. Let them die from measles, and I’m for #vaccination!”
“Whereas bots that spread malware and unsolicited content disseminated anti-vaccine messages, Russian trolls promoted discord,” researchers concluded. “Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.”
Researchers examined a random sample of 1.7m tweets collected between July 2014 and September 2017 – the height of the American presidential campaign that led to Donald Trump’s victory. To identify bots, researchers compared the rate at which normal users tweeted about vaccines with the rate at which bots and trolls did so.
“We started looking at the Russian trolls, because that data set became available in January,” said Broniatowski. “One of the first things that came out was they tweet about vaccines way more often than the average Twitter user.”
Broniatowski said trolls tweeted about vaccines about 22 times more often than regular Twitter users, or about once every 550 tweets, versus every 12,000 tweets for human accounts.
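The two rates quoted above are consistent with the "about 22 times" figure; a quick back-of-the-envelope check, using only the numbers given in the article:

```python
# Per-tweet vaccine-mention rates quoted in the article:
# trolls: about once every 550 tweets; regular human accounts: about once every 12,000.
troll_rate = 1 / 550
human_rate = 1 / 12_000

# Ratio of the two rates: how much more often trolls mention vaccines.
ratio = troll_rate / human_rate
print(f"Trolls mention vaccines roughly {ratio:.0f}x as often as regular users")
```

The ratio comes out at about 21.8, matching the rounded "22 times" in the study's summary.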
Researchers found different kinds of bots spread different kinds of misinformation. So-called “content polluters” used anti-vaccine messages as bait to entice their followers to click on advertisements and links to malicious websites.
The study comes as social media companies struggle to clean their houses of misinformation. In February, Twitter deleted 3,800 accounts linked to the Russian government-backed Internet Research Agency, the same group researchers at George Washington examined. In April, Facebook removed 135 accounts linked to the same organization.
This week, Facebook removed another 650 fake accounts linked to Russia and Iran meant to spread misinformation. Researchers did not study Facebook, though it remains a hub of anti-vaccination activity.
“To me it’s actually impressive how well-organized and sophisticated the anti-vax movement has become,” said Dr Peter Hotez, the director of the Texas children’s hospital center for vaccine development at Baylor College of Medicine, and the father of an autistic child. Hotez, who maintains an active Twitter presence, said he struggled to identify whether Twitter accounts were human or bots.
“There are clearly some well-known anti-vax activists that I know to look out for and I know to block or to mute, but that’s a minority,” said Hotez. “A lot of it just seems to come out of nowhere, and I’m always surprised by that.”
One of the most striking findings, Broniatowski said, was an apparent attempt by Russian trolls to astroturf a vaccine debate using the hashtag #VaccinateUS. Accounts identified as controlled by the Internet Research Agency, a troll farm backed by the Russian government, were almost exclusively responsible for content emerging under #VaccinateUS.
Some of the Russian trolls even used hashtags associated with Andrew Wakefield, the discredited former physician who published fraudulent papers linking vaccines with autism, such as #Vaxxed and #CDCWhistleblower.
The Guardian requested comment from Twitter and was referred to a blogpost in which the company said its “focus is increasingly on proactively identifying problematic accounts”, and that its system “identified and challenged” more than 9.9m potential spam accounts a week in May 2018.
en.wikipedia.org/wiki/Russian_web_brigades
The web brigades (Russian: Веб-бригады), also known as Russia's troll army, Russian bots, Kremlinbots,[1] troll factory,[2][3] or troll farms are state-sponsored anonymous Internet political commentators and trolls linked to the Russian government. Participants report that they are organized into teams and groups of commentators that participate in Russian and international political blogs and Internet forums using sockpuppets and large-scale orchestrated trolling and disinformation campaigns to promote pro-Putin and pro-Russian propaganda.[4][5][6][7] It has also been found that articles on Russian Wikipedia concerning the MH17 crash and the 2014 Ukraine conflict were targeted by Russian internet propaganda outlets.[8][9][10]
www.rferl.org/a/how-to-guide-russian-trolling-trolls/26919999.html
RFE/RL: Did they immediately offer you a salary of 45,000 rubles, or did you get gradual rises before you reached that point?
Burkhard: No, I got it immediately -- as long as I met my quota. It's a real factory. There are production quotas, and for meeting your quota you get 45,000. The quota is 135 comments per 12-hour shift.
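For a sense of the pace that quota implies, here is a rough calculation (assuming, purely for illustration, that the 135 comments are spread evenly across the 12-hour shift):

```python
# Quota described above: 135 comments per 12-hour shift.
comments_per_shift = 135
shift_hours = 12

comments_per_hour = comments_per_shift / shift_hours   # 11.25
minutes_per_comment = 60 / comments_per_hour           # about 5.3

print(f"{comments_per_hour:.2f} comments per hour, one every {minutes_per_comment:.1f} minutes")
```

That is roughly one comment every five and a half minutes for twelve hours straight, which fits Burkhard's description of the operation as "a real factory".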
www.theguardian.com/world/2015/apr/02/putin-kremlin-inside-russian-troll-house
The Russian troll factory at the heart of the meddling allegations
Former workers tell how hundreds of bloggers are paid to flood forums and social networks at home and abroad with anti-western and pro-Kremlin comments
The same applies to the United States which is feared and hated by Russians on a much larger scale that the EU.
www.wired.co.uk/article/brexit-russia-influence-twitter-bots-internet-research-agency
Here's the first evidence Russia used Twitter to influence Brexit
Russia-based Twitter accounts that targeted the US presidential election also used divisive and racist rhetoric in an attempt to disrupt politics in the UK and Europe
By Matt Burgess
Friday 10 November 2017
WIRED
Russian interference in Brexit through targeted social media propaganda can be revealed for the first time. A cache of posts from 2016, seen by WIRED, shows how a coordinated network of Russian-based Twitter accounts spread racial hatred in an attempt to disrupt politics in the UK and Europe.
A network of accounts posted pro and anti-Brexit, anti-immigration and racist tweets around the EU referendum vote while also targeting posts in response to terrorist attacks across the continent. The accounts amplified their own messages to reach a greater audience and their impact raises questions about the full extent of Russia's propaganda campaign.
In a small snapshot of what is likely to be a much bigger issue, 139 tweets from 29 accounts show Russian trolls using hashtags related to the Brexit vote, pictures of London Mayor Sadiq Kahn, anti-Muslim language around European terror attacks and racial slurs against refugees.
The accounts, which were all confirmed as being Russian-backed by Twitter when it provided data to the US Congress, were followed by 268,643 people and some of their posts were retweeted hundreds of times. They were primarily created to disrupt the US presidential election but dabbled in wider issues around Brexit and European politics. All of the accounts have been suspended by Twitter.
Russian trolls live-tweeted Manchester and London attacks
The revelations come as a UK parliamentary inquiry begins its own investigations into Russian interference in the Brexit vote. Twitter and Facebook have both been asked to submit evidence of accounts directly related to the EU referendum and 2017's general election. Damian Collins, the leader of the parliamentary inquiry into fake news, says this first batch of tweets, though primarily focussed on US politics, "confirms what we've always believed" and that the accounts were trying to "influence political debate in the UK and also to insight hatred and turn communities against each other".
"I think it shows that Russian-controlled accounts have been politically active in the UK as well as America," Collins says. "This could just be the tip of the iceberg because we've only really just started looking and doing a proper detailed study of what accounts linked to Russian organisations have been doing politically."
The cache of tweets was provided by US security startup New Knowledge and were collected as part of a larger cache of information looking at extremism online.
Targeting Europe
When a Muslim woman was photographed crossing Westminster Bridge in the wake of a terror attack, an image went viral. Instead of showing her horror at the incident, the picture of her looking at her phone was taken out of context.
"Muslim woman pays no mind to the terror attack, casually walks by a dying man while checking phone #PrayForLondon #Westminster #BanIslam," tweeted the account @southlonestar, the bio of which proclaimed the user was a "Proud TEXAN and AMERICAN patriot". The tweet was widely shared in news reports at the time.
@southlonestar was identified as a Russian account by Twitter in response to its US inquiries about the country's influence in the 2016 presidential election. In June 2016, the account, which had 16,826 followers, also tweeted: "I hope UK after #BrexitVote will start to clean their land from muslim invasion!" and "UK voted to leave future European Caliphate! #BrexitVote". These posts were made after the referendum vote.
"The account occasionally wades into a European political discussion, which is not what I would expect a domestically-focused Conservative Texan to do under any circumstance," says Jonathon Morgan, CEO of New Knowledge. Morgan explains his team was researching alt-right behaviour on Twitter in the build-up to the US election, collecting 7,500 tweets from 40 accounts, all of which were run as Russian propaganda tools.
All the tweets, including those seen by WIRED, were gathered at the time they were posted using Twitter's API. The accounts appear in the Russian Twitter list published by the US Democrats. Some of the Tweets are archived by the Internet Archive, with unique user and tweet identification numbers provided by New Knowledge.
Twitter says it works proactively to stop "bots" from posting content, tries to check suspicious content, is working to improve how it detects single and cluster accounts created by suspicious sources and accounts that break its terms and conditions.
Surprisingly, all the posts around Brexit in this small snapshot were posted after the June vote. They included: "Brits said NO to prison of multiculturalism! Happy Independence Day!" (from @rightnpr) and "Let's hope that #Brexit will help Julian Assange. But I have a feeling that US won't let this happen" (@jeblary2016, 8,054 followers).
"The accounts were focussed on posing as far-right activists in the US but they were using every opportunity they could to spread the pro-Kremlin message and its narrative," says Ben Nimmo, a senior fellow at the Atlantic Council, who has been researching Russian influence online and issues around the country's defence. "These Russian trolls are doing the same thing [as alt-right accounts], they are entirely in character but that's also the Kremlin narrative".
As well as focussing on European politics, one post from @southlonestar reads, "France is turning into a shithole thanks to weak immigration laws and EU regulations" – there is a strong anti-Muslin and anti-Islam approach. After the July 2016 Munich terror attack, where nine people were killed the Russian account @pamela_Moore13 tweeted, "Third attack in #Europe in 8 days. Multiple deaths in #Munich shooting. Europe is enjoying 'cultural enrichment'". @tpartynews (13,728 followers) tweeted "...Europe turns into Iraq! Very sad! #Munich #PrayForMunich".
Racial slurs were a popular theme: @priceforpierce, which had 1,841 followers, tweeted, "ill welcome a European to the USA any day, refugees are not welcome in my eyes #IslamKills #StopIslam". And @archieolivers (1,920 followers) wrote, "Have you seen the crimes many Syrian refugees R committing in Europe? #IslamKills #StopIslam".
After the terror attack on Brussels, which killed 31 people, @leroylovesusa (1,201 followers) tweeted using the city's hashtag and #IslamKills: "Why can't EU just close the borders?" The message was sent on March 22 2016, the same day that so-called Islamic State claimed responsibility for the killings. A number of other Russian accounts posted similar messages at the time.
"There's this consistent and relentless focus on anti-immigrant sentiment, attacking Muslims in particular. [They are] making this sort of broad-brush, blanket link between Islam and terrorism," says Morgan, who collected the data and built a system to explore connections between accounts.
All the accounts seen by WIRED posted using either the Twitter web client of TweetDeck. The @jenn_Abrams account had the most followers, 54,467, and the data pulled from the Twitter API shows some of the accounts were created as far back as 2013.
The fallout
On both sides of the Atlantic, Twitter, Facebook and Google have come under-fire for allowing Russian-linked accounts to utilise their networks to run disinformation campaigns.
Senior representatives from the companies have been grilled by members of the US Congress on the extent of the problem. "I think that almost certainly when Twitter is pushed to reveal similar types of activity going on around Brexit, it will find a number of accounts that were behaving in a similar way," Morgan says, adding that he expects Russian propagandists specifically focussed on the UK to adopt right-wing British views, not right-wing American views.
At present, the UK inquiry into fake news is waiting to receive evidence back from Facebook and Twitter on UK-focussed Russian accounts. Collins, who is leading the inquiry, says both firms will take part in oral hearings by early-2018 at the latest. "I think we have a right to know if organisations in a foreign country, particularly in the case of Russia, are politically active," he says. "What is frightening, if you look at some of the studies that have come out of America, is just how many people a well run campaign can reach at a relatively low cost and how you can target and bombard people with highly partisan messages and fake news."
Separately, a study by academics at City University, reported on by BuzzFeed, found 13,000 Twitter bots were sending out pro-Brexit messages in the run-up to the vote. The bots were more likely to tweet pro-Leave rather than pro-Remain content. The Electoral Commission is also investigating whether Arron Banks, the prominent Leave campaigner, broke financial rules in the run-up to the vote as questions mount about where the money came from.
In the US, Russian adverts on Facebook reached 126 million Americans, almost half the country's population. On Twitter there were 2,752 accounts linked to Russia's Internet Research Agency and Google-owned YouTube found more than 1,000 videos from 18 accounts.
For Nimmo, this first glimpse at Russian interference in the Brexit vote is just the tip of the iceberg. "What are the people who were running those accounts doing now? "It's unlikely they've put their feet on the desk and said, 'We've lost that one, now let's go home.'"
Russian trolls actively contribute in European and US vaccination discussions:
www.theguardian.com/society/2018/aug/23/russian-trolls-spread-vaccine-misinformation-on-twitter
Russian trolls 'spreading discord' over vaccine safety online
Study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, tweeting about vaccines
Jessica Glenza in New York
@jessicaglenza
Thu 23 Aug 2018 21.00 BST
Last modified on Thu 23 Aug 2018 22.37 BST
Trolls used the vaccination debate to try to sow discord during the US election, researchers say.
Bots and Russian trolls spread misinformation about vaccines on Twitter to sow division and distribute malicious content before and during the American presidential election, according to a new study.
Scientists at George Washington University, in Washington DC, made the discovery while trying to improve social media communications for public health workers, researchers said. Instead, they found trolls and bots skewing online debate and upending consensus about vaccine safety.
The study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, as well as marketing and malware bots, tweeting about vaccines.
Russian trolls played both sides, the researchers said, tweeting pro- and anti-vaccine content in a politically charged context.
“These trolls seem to be using vaccination as a wedge issue, promoting discord in American society,” Mark Dredze, a team member and professor of computer science at Johns Hopkins, which was also involved in the study, said.
“By playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don’t respect national boundaries.”
The study, published in the American Journal of Public Health, comes as Europe faces one of the largest measles outbreaks in decades, one which has been partly attributed to falling vaccination rates. In the first six months of 2018, there were 41,000 cases of measles across the continent, more than in the entirety of 2017. Meanwhile, the rate of children not receiving vaccines for non-medical reasons is climbing in the US.
“The vast majority of Americans believe vaccines are safe and effective, but looking at Twitter gives the impression that there is a lot of debate,” said David Broniatowski, an assistant professor in George Washington’s School of Engineering and Applied Science.
“It turns out that many anti-vaccine tweets come from accounts whose provenance is unclear. These might be bots, human users or ‘cyborgs’ – hacked accounts that are sometimes taken over by bots. Although it’s impossible to know exactly how many tweets were generated by bots and trolls, our findings suggest that a significant portion of the online discourse about vaccines may be generated by malicious actors with a range of hidden agendas.”
Russian trolls appeared to link vaccination to controversial issues in the US. Their vaccine-related content made appeals to God, or argued about race, class and animal welfare, researchers said. Often, the tweets targeted the legitimacy of the US government.
“Did you know there was secret government database of #Vaccine-damaged child? #VaccinateUS,” read one Russian troll tweet. Another said: “#VaccinateUS You can’t fix stupidity. Let them die from measles, and I’m for #vaccination!”
“Whereas bots that spread malware and unsolicited content disseminated anti-vaccine messages, Russian trolls promoted discord,” researchers concluded. “Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.”
Researchers examined a random sample of 1.7m tweets collected between July 2014 and September 2017 – the height of the American presidential campaign that led to Donald Trump’s victory. To identify bots, researchers compared the rate at which normal users tweeted about vaccines with the rate at which bots and trolls did so.
“We started looking at the Russian trolls, because that data set became available in January,” said Broniatowski. “One of the first things that came out was they tweet about vaccines way more often than the average Twitter user.”
Broniatowski said trolls tweeted about vaccines about 22 times more often than regular Twitter users, or about once every 550 tweets, versus every 12,000 tweets for human accounts.
Researchers found different kinds of bots spread different kinds of misinformation. So-called “content polluters” used anti-vaccine messages as bait to entice their followers to click on advertisements and links to malicious websites.
The study comes as social media companies struggle to clean their houses of misinformation. In February, Twitter deleted 3,800 accounts linked to the Russian government-backed Internet Research Agency, the same group researchers at George Washington examined. In April, Facebook removed 135 accounts linked to the same organization.
This week, Facebook removed another 650 fake accounts linked to Russia and Iran meant to spread misinformation. Researchers did not study Facebook, though it remains a hub of anti-vaccination activity.
“To me it’s actually impressive how well-organized and sophisticated the anti-vax movement has become,” said Dr Peter Hotez, the director of the Texas Children’s Hospital Center for Vaccine Development at Baylor College of Medicine, and the father of an autistic child. Hotez, who maintains an active Twitter presence, said he struggled to identify whether Twitter accounts were human or bots.
“There are clearly some well-known anti-vax activists that I know to look out for and I know to block or to mute, but that’s a minority,” said Hotez. “A lot of it just seems to come out of nowhere, and I’m always surprised by that.”
One of the most striking findings, Broniatowski said, was an apparent attempt by Russian trolls to astroturf a vaccine debate using the hashtag #VaccinateUS. Accounts identified as controlled by the Internet Research Agency, a troll farm backed by the Russian government, were almost exclusively responsible for content emerging under #VaccinateUS.
Some of the Russian trolls even specifically used hashtags associated with Andrew Wakefield, the discredited former physician who published fraudulent papers linking vaccines with autism, such as #Vaxxed and #CDCWhistleblower.
The Guardian requested comment from Twitter and was referred to a blogpost in which the company said its “focus is increasingly on proactively identifying problematic accounts”, and that its system “identified and challenged” more than 9.9m potential spam accounts a week in May 2018.
en.wikipedia.org/wiki/Russian_web_brigades
The web brigades (Russian: Веб-бригады), also known as Russia's troll army, Russian bots, Kremlinbots, troll factory, or troll farms, are state-sponsored anonymous Internet political commentators and trolls linked to the Russian government. Participants report that they are organized into teams and groups of commentators that participate in Russian and international political blogs and Internet forums, using sockpuppets and large-scale orchestrated trolling and disinformation campaigns to promote pro-Putin and pro-Russian propaganda. It has also been found that articles on Russian Wikipedia concerning the MH17 crash and the 2014 Ukraine conflict were targeted by Russian internet propaganda outlets.
www.rferl.org/a/how-to-guide-russian-trolling-trolls/26919999.html
RFE/RL: Did they immediately offer you a salary of 45,000 rubles, or did you get gradual rises before you reached that point?
Burkhard: No, I got it immediately -- as long as I met my quota. It's a real factory. There are production quotas, and for meeting your quota you get 45,000. The quota is 135 comments per 12-hour shift.
www.theguardian.com/world/2015/apr/02/putin-kremlin-inside-russian-troll-house
The Russian troll factory at the heart of the meddling allegations
Former workers tell how hundreds of bloggers are paid to flood forums and social networks at home and abroad with anti-western and pro-Kremlin comments