The concept of ‘fake news’ has plagued the media for the past couple of years, but what is its impact on business-level cybersecurity?
This week Infosecurity was invited to join a roundtable on the concept of ‘Hybrid Threats’, hosted by Anomali, chaired by former GCHQ director turned professor Sir David Omand and attended by a selection of industry names.
The concept of hybrid threats, Omand said, is not one he is keen on: he preferred the “old fashioned term subversion, where one country tries to influence another.” This, he said, usually involves three tactics: intimidation, propaganda and dirty tricks.
The use of these tactics, he claimed, leads to “erosion in confidence,” while propaganda “restricts free speech when government has to have a reputation for truth.” All of this may not seem too relevant to cybersecurity, but with arrests having been made over interference in the 2016 US election, the question of what is real and what is not has to be taken seriously – and in threat intelligence, knowing which alerts are genuine and which are fake paints a much clearer picture of a threat.
This moved the discussion on to the concept of fake news, as Omand said: “Threat intelligence is needed to pull together different bits of information from different sectors and see the pattern, or if it is just a coincidence.”
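Omand’s point maps loosely onto a common threat-intelligence practice: corroborating an indicator across independent sources before treating it as a pattern rather than a coincidence. As a minimal illustrative sketch only – the feed names, indicators and threshold below are invented, not anything shown at the roundtable:

```python
from collections import defaultdict

# Hypothetical sightings of the same indicator (here, IP addresses) reported
# by different, independent intelligence feeds. All data is invented.
sightings = [
    ("185.220.101.1", "commercial-feed"),
    ("185.220.101.1", "isac-sharing-group"),
    ("185.220.101.1", "internal-honeypot"),
    ("203.0.113.77", "commercial-feed"),
]

def corroboration(sightings, min_sources=2):
    """Group sightings by indicator and count distinct reporting sources.

    An indicator seen by several independent sources suggests a pattern;
    a single-source report may be coincidence -- or planted disinformation.
    """
    sources = defaultdict(set)
    for indicator, source in sightings:
        sources[indicator].add(source)
    return {
        indicator: ("corroborated" if len(feeds) >= min_sources else "single-source")
        for indicator, feeds in sources.items()
    }

print(corroboration(sightings))
# {'185.220.101.1': 'corroborated', '203.0.113.77': 'single-source'}
```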
Valentina Soria, head of intelligence at Morgan Stanley, said that as a threat intelligence practitioner, her job is to interpret intelligence and determine the difference between fake and real news.
She claimed that hybrid threats have further complicated the ability to make sense of it all: at the heart of threat intelligence is the credibility of the information you rely on, and the propagation of fake news has made establishing that credibility much more difficult.
“We can see the potential impact that the fake news phenomenon can have,” she said. Threat intelligence can help a business understand threats and form a strategy, she claimed, but ‘false flags’ and threat actors using different tools can make it difficult to focus on what is a genuine threat to the business.
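One long-standing way intelligence practitioners weigh that credibility is the NATO ‘Admiralty’ grading system, which rates source reliability from A to F and information credibility from 1 to 6. The sketch below is a generic illustration of that idea – it is not Morgan Stanley’s or Anomali’s actual process, and the numeric weights and example grades are assumptions:

```python
# Admiralty (NATO) grading: source reliability A-F, information credibility
# 1-6. The numeric weights below are illustrative assumptions, not a standard.
RELIABILITY = {"A": 1.0, "B": 0.8, "C": 0.6, "D": 0.4, "E": 0.2, "F": 0.0}
CREDIBILITY = {1: 1.0, 2: 0.8, 3: 0.6, 4: 0.4, 5: 0.2, 6: 0.0}

def weight(source_grade: str, info_grade: int) -> float:
    """Combine source reliability and information credibility into one weight."""
    return RELIABILITY[source_grade] * CREDIBILITY[info_grade]

# A 'B2' report (usually reliable source, probably true information) carries
# far more weight than an 'E5' one -- the kind of low-grade report a false
# flag or a fake story tends to produce.
print(weight("B", 2))  # 0.64 -> worth an analyst's time
print(weight("E", 5))  # 0.04 -> likely noise or deliberate misdirection
```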
In an open discussion on fake news, cyber psychologist Dr Mary Aiken said that there is no way to legislate around this, but there is an opportunity for ‘cyber ethics’ in social media business models, and for the platforms to become more responsible players.
“When we look at fact checking, and false claims about what a politician said, there is an absence of critical thinking in response to the bombardment of young people with fake and false information, and it becomes a modus operandi of ‘that’s ok, everything is fake’. When this group of people becomes the policy makers in time, what will their frame of reference be, and what will their critical thinking be?”
Hugh Njemanze, CEO of Anomali, said that social media companies have a responsibility to make their algorithms more transparent; that transparency would enable observers to see what has happened and to investigate it.
Soria added that it is quite hard to demonstrate the real-life cause and effect of fake news, especially when it may have influenced voters ahead of an election.
Aiken said that sophisticated real-world models can be created, but questioned whether they are still fit for purpose in cyber-environments – which is why, she argued, investment is needed in models that can make sense of human behavior and manipulation.
The conversation moved on to filtering and the need for automated tools to do that job, but ultimately the problem remains the same: determining what is a genuine threat and what is not. Whether it is fake news or a false flag, it still takes a person’s attention to decide what matters to the business.
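To make that division of labour concrete, here is a deliberately simplified triage sketch; the thresholds, fields and alerts are all hypothetical, and any real pipeline would be far richer. The point it illustrates is the roundtable’s conclusion: automation can narrow the pile, but the ambiguous middle still lands on an analyst’s desk.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    title: str
    score: float    # confidence from upstream enrichment, 0.0-1.0 (assumed)
    relevant: bool  # does it touch assets or sectors this business cares about?

def triage(alerts, auto_threshold=0.9, drop_threshold=0.2):
    """Route alerts: auto-action the obvious, drop the noise, and queue
    everything ambiguous for a human analyst."""
    queue = []
    for a in alerts:
        if not a.relevant or a.score < drop_threshold:
            continue                      # discard irrelevant or noise alerts
        elif a.score >= auto_threshold:
            print(f"auto-blocked: {a.title}")
        else:
            queue.append(a)               # ambiguous: needs human judgment
    return queue

alerts = [
    Alert("known C2 infrastructure", 0.95, True),
    Alert("unverified social media claim", 0.35, True),
    Alert("rumoured breach, other sector", 0.5, False),
]
for a in triage(alerts):
    print(f"for analyst review: {a.title}")
```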
As Soria said, threat intelligence processing involves determining a pattern that affects your business, and fake news could be the square peg trying to fit into the round hole.