Facebook's push for end-to-end encryption is good news for user privacy, as well as terrorists and paedophiles
- Written by Roberto Musotto, Cyber Security Cooperative Research Centre Postdoctoral Fellow, Edith Cowan University
Facebook is planning end-to-end encryption on all its messaging services to increase privacy levels.
The tech giant started experimenting with this earlier this year. Soon, end-to-end encryption will be standard for every Facebook message.
But the Australian, British and United States governments and lawmakers aren’t happy about it. They fear it will make criminal conversations on Facebook’s platforms impossible to recover, effectively offering impunity to offenders.
For instance, this was a major concern following the 2017 London terror attacks. Attackers used WhatsApp (Facebook’s end-to-end encrypted platform), and this frustrated police investigations.
But does Facebook’s initiative place the company between a political rock and an ethical hard place?
What is end-to-end encryption?
End-to-end encryption is a method of communicating far more securely than ordinary, unencrypted messaging.
It uses cryptographic keys to scramble messages so that only the communicating users can read them, shutting out any third party.
When a sender messages a receiver, the message is encrypted in such a way that only the receiver holds the key needed to decrypt it. No one else can read it in transit, not even the service provider.
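As a rough illustration of the idea (not Facebook’s actual protocol; WhatsApp and Messenger’s secret conversations are built on the Signal protocol, which adds forward secrecy and other machinery), the sketch below uses the PyNaCl library. Each party keeps a private key on their own device, only public keys are exchanged, and the ciphertext passing through the provider’s servers cannot be read without a private key.

```python
# Minimal sketch of public-key (end-to-end) encryption using PyNaCl.
# Illustrative only: real messaging apps layer far more on top of this.
from nacl.public import PrivateKey, Box

# Each user generates a key pair; the private half never leaves their device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the public halves are exchanged; the service provider may relay them.
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_private, bob_public).encrypt(b"See you at noon")

# The provider only ever sees this ciphertext, which it cannot decrypt.
# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_private, alice_public).decrypt(ciphertext)
assert plaintext == b"See you at noon"
```

Under this model, reading a message means either breaking the cipher or compromising one of the devices at either end, which is why the discussion below focuses on endpoints rather than the mathematics.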
Read more: Social media and crime: the good, the bad and the ugly
The real incentive
Facebook’s plan to enact this change is paradoxical, considering the company has a history of harvesting user data and selling it to third parties.
Now, it supposedly wants to protect the privacy of the same users.
One possible reason Facebook is pushing for this development is because it will solve many of its legal woes.
With end-to-end encryption, the company will no longer have access to the content of users’ messages.
Thus, it won’t be able to hand over readable conversations in response to law enforcement requests. And even if police were able to get hold of the encrypted data, they would still need the key required to read the messages.
Only users would have the ability to share the key (or the messages themselves) with law enforcement.
Points in favour
Implementing end-to-end encryption will positively impact Facebook users’ privacy, as their messages will be protected from eavesdropping.
This means Facebook, law enforcement agencies and hackers will find it harder to intercept any communication conducted through the platform.
And although end-to-end encryption is arguably not necessary for most everyday conversations, it does have advantages, including:
1) protecting users’ personal and financial information, such as transactions on Facebook Marketplace
2) increasing trust and cooperation between users
3) preventing criminals from eavesdropping on individuals to harvest their information, which can leave them vulnerable to stalking, scams and romance fraud
4) allowing those with sensitive medical, political or sexual information to share it with others online
5) enabling journalists and intelligence agencies to communicate privately with sources.
Not foolproof
However, even though end-to-end encryption will increase users’ privacy in certain situations, it may still not be enough to make conversations completely safe.
Read more: End-to-end encryption isn't enough security for 'real people'
This is because the biggest eavesdropping risk comes from the devices themselves: a compromised phone or computer exposes messages no matter how well they were encrypted in transit.
End-to-end encryption doesn’t guarantee the people we are talking to online are who they say they are.
Also, while cryptographic algorithms are hard to crack, third parties can still get at the content of a message without breaking the encryption. For example, apps can be used to take screenshots of a conversation and send them to third parties.
A benefit for criminals
When Facebook messages become end-to-end encrypted, it will be harder to detect criminals, including people who use the platform to commit scams and launch malware.
Others use Facebook for human or sex trafficking, as well as child grooming and exploitation.
Facebook Messenger can also help criminals organise themselves, as well as plan and carry out crimes, including terror attacks and cyber-enabled fraud, extortion and hacking.
The unfortunate trade-off in increasing user privacy is reducing the capacity for surveillance and national security efforts.
Read more: Can photos on social media lead to mistaken identity in court cases?
End-to-end encryption on Facebook would also increase criminals’ sense of security.
However, while tech companies can’t deny the risk of their technologies being exploited for illegal purposes, they also don’t have an absolute duty to keep a particular country’s cyberspace safe.
What to do?
A potential solution to the dilemma can be found in various critiques of the UK’s 2016 Investigatory Powers Act.
The proposal is that, on certain occasions, a communications service provider may be asked to remove encryption (where this is technically possible).
However, this power must come from an authority that can be held accountable in court for its actions, and it should be used only as a last resort.
Handled this way, encryption would increase user privacy without allowing total privacy, which carries harmful consequences of its own.
So far, several governments have pushed back against Facebook’s encryption plans, fearing it will place the company and its users beyond their reach, and make it more difficult to catch criminals.
End-to-end encryption is perceived as a bulwark against surveillance by third parties and governments, even though other ways of intercepting communications exist.
Many also agree surveillance is not only invasive, but also prone to abuse by governments and third parties.
Freedom from invasive surveillance also facilitates freedom of expression, opinion and privacy, as observed by the United Nations High Commissioner for Human Rights.
In a world where debate is polarised by social media, Facebook and similar platforms are caught amid the politics of security.
It’s hard to say how a perfect balance can be achieved in such a multifactorial dilemma.
Either way, the decision is a political one, and governments, rather than tech companies, should ultimately be responsible for making it.