Do 48 countries practice state-sponsored trolling?


In recent years, a kind of structure specialized in these maneuvers has emerged: the “troll farm.” This interview with Jane Lytvynenko, a journalist at BuzzFeed News, was conducted on March 20, 2019, as part of the symposium “Democracies put to the test by infox (fake news),” jointly organized by INA and BNF. She explains:

What is a “Troll Farm”?

Jane Lytvynenko: When we talk about state-sponsored troll farms, we are really talking about people who are paid by countries to spread disinformation to affect public discourse and who use the Internet primarily for propaganda purposes. They are also known as cyber troops.

How long have these farms been around?

Jane Lytvynenko: As far as we know, the first farms started operating in 2014. But over time, we’ve seen more and more of them appear around the world. In 2017, for example, the Oxford Internet Institute counted 28 countries with state-sponsored troll farms. A follow-up report published in 2018 found a sharp increase, from 28 to 48, in the number of countries that engage in state-sponsored trolling. This means that the problem is growing in importance and that the phenomenon is most likely effective.

Are there different types of “Troll Farms”?

Jane Lytvynenko: Different countries approach online propaganda in different ways. Russia is of course the most famous example, because its methods are the most extensive; they are really the pioneers in this field. But what we are seeing is that some state-sponsored troll farms are directed inward. Some try to influence opinion and spread propaganda in a fairly direct way, while others try to stir up anger around certain issues, against certain ethnic groups, or around certain conflicts. So troll farms can exploit the Internet in endless ways, just as we ourselves use the Internet in different ways.

Why are we so interested in these structures?

Jane Lytvynenko: There’s no simple answer to why we talk about them so much, but I think the short answer is: because it works. For example, in 2014 we saw the first signs of activity on Facebook aimed at turning opinion in Myanmar against the Rohingya Muslims. In 2015, we saw the Russians trying to influence Ukrainian opinion after the revolution. And in 2016, things really got out of hand when foreign state-sponsored troll farms exported their operations to the West, moving into democracies and trying to influence the results of democratic elections. From that point on, once these campaigns were discovered, we realized that this was going on all over the world: not just locally, but internationally. And now it’s up to us to ask the questions: why is this effective? How does it work? And what can we do to reduce the impact of these kinds of campaigns?

How do we explain the power of these “Troll Farms”?

Jane Lytvynenko: Think about how you interact with social networks – you turn on your phone, you open a website and you access information. On Instagram, it’s an image; on Twitter, a message; on Facebook, a post. The question is: how is this content selected? How do we know what is being presented to us and who is presenting it to us? We don’t really have an answer, because the algorithms used by social networks are not disclosed. We know that they are tailored to our preferences, but we also know that these systems are not designed for news: they are designed to share baby pictures or pictures of your dog. So in essence we have the perfect cocktail for spreading misinformation. We don’t know why something is put in front of us, but we generally tend to react rather than investigate why we see what we see, or whether the information presented to us is accurate.

As individuals, we could very well replicate the techniques of the “troll farms,” couldn’t we?

Jane Lytvynenko: Yes, that’s right. One of the most interesting things about troll farms, I think, is that they play on our emotions because they make us react and physically interact with the Internet. So, for example, stirring up anger is much more effective than presenting a sterile report on the economy or climate change. And that’s precisely what makes them so effective, because when we see something that makes us angry, we want to shout, argue, get involved in some way. It’s something we do as individuals. Trolls understand this and use the same technique.

Do these “Troll Farms” only operate on Facebook?

Jane Lytvynenko: The reason we talk about Facebook so much is that the platform reportedly has two billion users, which is more people than live in any country in the world. That’s a huge audience. But of course, Facebook is not the only place where these problems occur. For example, we know that YouTube’s recommendation algorithms contribute to radicalization, not only politically but also in areas such as anti-vaccination. Twitter can be manipulated very easily through bots and computational propaganda. Channels like Instagram and Snapchat are very visual and therefore very difficult for researchers to study, which means some of these issues are hard to detect. We’ve also seen an increase in misinformation in private messages, a phenomenon blamed for causing violence in other parts of the world. It’s even harder for journalists and researchers to spot misinformation in messaging apps because it happens in group discussions.

What happens is that someone you like or someone you trust forwards something to you and you forward it to someone else. It’s sharing made private. It’s very hard for us to grasp the magnitude of that, because unless you’re part of the group, you can’t see it. So this ecosystem works because each platform offers new and interesting ways to threaten our information environment.

What exactly is the responsibility of social networks?

Jane Lytvynenko: For social networks, I would say that one of the main responsibilities is to identify bad actors proactively rather than reactively, and to remove them from the network. In the United States, there is a big debate about freedom of speech, but freedom of speech doesn’t necessarily mean that everyone has the right to broadcast their message to a thousand people. Another thing social networks could do, but don’t, is reveal how their algorithms work. Researchers at MIT have proposed what they call a “nutrition label” for algorithms: something that would tell us what goes into an algorithm and how it works. We know that many of the people Facebook classifies politically as left or right, far left or far right, actually disagree with how Facebook identifies them. But because we don’t know how that classification works, and because we have no real way to change it or to say to Facebook, “Hey, my political views have changed a little, could you update that?”, we have no way of controlling the information environment we find ourselves in.

Do you think that the situation can change for the better?

Jane Lytvynenko: While there is negativity on the Internet, whether through misinformation or the targeting of vulnerable communities, the Web has also given people a voice and given many previously ignored communities a way to express themselves. I think we’re in a transition period right now where we understand that something is wrong. We understand that this beast that we have created needs to be tamed. And I think if we really take this seriously, if regulators, technology companies, and individual users take it very seriously and to heart, then yes, I am optimistic. But if we ignore the problem and hope it goes away, or if we individually decide that “no, this doesn’t concern me, I’m smart, I don’t get caught up in this stuff,” then this problem will persist and affect the way our democracy works.