
Written by Daniel Baldino, Senior Lecturer in Politics and International Relations, University of Notre Dame Australia

This article is the third in a five-part series exploring Australian national security in the digital age. Read parts one and two here.

The internet’s precise role in the process of radicalisation remains vexing. You can lead a person to a bomb-making manual, but you can’t make them use it.

Radicalisation is a social process: one by which an individual or group embraces an extreme ideology and rejects or undermines the “status quo”. That process can then lead to an increased willingness to condone or use violence.

“Safety” in the digital era

The internet allows previously alienated and disaffected people to find and connect with each other. It also provides a space for those looking for acceptance, recognition and a sense of approval. Information is often unfiltered and some of the most extreme forms of dialogue, including dehumanising and hateful ideas that target our biases, can become self-reinforcing.

In that regard, the internet can be seen as a mechanism that enables or facilitates radicalisation. Nobody is born a terrorist.

But the process of radicalisation is more complex than that. It involves a combination of online and offline communication and a fluid mix of different political, psychological and social factors.

Read more: Online extremism: UK government's Islamic State blocking tool is neat but incomplete

Unfortunately, the official response to the internet’s role in radicalisation is far too simplistic and doesn’t take into account these other factors. A quick Google search of “online+radicalisation+Australia” makes clear the current policy priorities. Some of the first resources you’ll encounter are government initiatives such as the Attorney-General’s Department’s Living Safe Together content reporting mechanism, and the Office of the eSafety Commissioner’s resources to help develop critical thinking and “digital literacy”.

This official response can be broadly broken into three streams:

• Limiting access to harmful (or inflammatory) content that leads to dangerous behaviour.

• Protecting users from the improper use of their personal data.

• Educating users on how to navigate the online world safely.

Unsafe assumptions

Intuitively, it makes sense to focus on these areas, but doing so leaves open some big questions.

Firstly, how do we define content that promotes terrorism? Depending on one’s world view, this could include vastly different things. It could be specific calls for violence against particular groups. It could also be very real accounts of civilian death tolls in war zones. Or, on the flip side, it could be fake and conspiratorial news stories about paedophile rackets being run out of pizzerias. Or what about videos comparing certain groups to “cancer” or “dogs”?

To be sure, dehumanising language is one of the most telling precursors of violence between different groups of people. But where is the line between content fuelling extremism and violence, and content that is simply trolling, tasteless expression, sarcasm and so forth?

Read more: Radicalisation is not just a terrorist tactic – street gangs do it every day

A second question concerns the effect this content has on those consuming it. Is it actually resulting in an uptick in violence? The link between exposure to online content and radicalisation to violence is ambiguous.

Certainly, the internet as an enabling technology can be dangerous, particularly for those who come to it with a specific purpose or preconceived worldview. But it could also be argued that despite the sophistication of marketing strategies used by known extremist groups, and the number of people exposed to their messages, the vast majority of young people have proven resistant to radicalisation and extremism.

Ultimately, if policymakers take an approach that is too heavy-handed and simplistic, they are likely to end up basing countermeasures on a flawed assumption: that mere exposure to extremist content is the core issue. We then fall into the trap of treating radicalisation as if every child were Charles Manson listening to The Beatles’ Helter Skelter.

Or to use a homegrown example, one need only look at Jake Bilardi. By some reports, Bilardi was an intelligent, introverted young man, actively looking for the most effective way to challenge what he saw as oppression and injustice. He originally aspired to become a political journalist, before research on different resistance groups led him to radicalise.

We need to ask, though, why extremism appealed to him more than anything else. Perhaps a narrative of injustice espoused by an organisation like the Islamic State provided Bilardi with a framework with which to interpret his own troubled life: the death of his mother, the breakdown of his family and his internalised sense of social and political isolation.

This is not a justification of terrorism. The point is that censorship of the internet would likely have just caused Bilardi to look elsewhere for the same type of reinforcement and kinship he was seeking.

The internet is neither good nor bad – it’s an opportunity

A broad-brush crackdown on social media platforms and additional restrictions, captured in emotive soundbites like Malcolm Turnbull’s “The privacy of a terrorist can never be more important than public safety”, might be politically expedient, but it will ultimately be ineffective as counter-terrorism.

The debate should instead focus on encouraging challenges to extremist narratives, building community resilience to radicalisation and preventing the spread of misinformation online. Tech companies will also need to be more introspective about their roles, and consider when to make commercial sacrifices in the name of social responsibility.

In short, governments must avoid disproportionate responses and the relentless hyping of the threat of the internet for political purposes. Policy overreactions could result in an undermining of freedom of expression and the promotion of an entrenched surveillance society fixed on a disjointed and reactive whack-a-mole mindset.

Read more http://theconversation.com/this-isnt-helter-skelter-why-the-internet-alone-cant-be-blamed-for-radicalisation-94825