Dr Victoria Baines: Cybersecurity

Photo of Dr Victoria Baines

An Oxford University alumna, Victoria read Classics at Trinity College as an undergraduate, then undertook a Master's degree at Oxford and, subsequently, a PhD in Roman literature at the University of Nottingham. According to Secure Computing Magazine, Dr Baines is one of the top 50 women of influence in cybersecurity. Her previous roles include working with the police's Child Exploitation and Online Protection Command, leading the strategy team at Europol's European Cybercrime Centre, where she was responsible for the EU's cyber threat analysis, and serving as Facebook's Trust and Safety Manager for Europe, the Middle East and Africa. She is currently a Visiting Associate at the Oxford Internet Institute researching the rhetoric of insecurity, a subject explored in her upcoming book, 'Rhetoric of InSecurity: The Language of Danger, Fear and Safety in National and International Contexts – Law, Language and Communication'.

The interview begins by looking at Dr Baines' route into cybersecurity before moving on to her work and current research.

Could you give an introduction to what is meant by the terms 'cybersecurity' and 'cybercrime'?

One of the problems with cybersecurity is that it is a bit of a catch-all term and it means different things to different people. At a very basic level, if you are doing a job as a cyber threat analyst or an incident response manager, or you work in a security operations centre, in a company or in another organisation, you are there to protect the networks and the data from intrusion and misuse. At a very simple level, that's what the day-to-day job is.

Cybercrime quite often has a financial motivation. If you think about all of the scam emails we get and all of the phishing attacks, most of those are financially motivated. Somebody in a criminal gang somewhere is trying to make money out of it. But equally, what we have seen increase over the past ten years has been ideologically motivated cybercrime. This has the same attack vectors, but a different motivation. For instance, when we think of what cybersecurity analysts would call 'influence operations', but which we quite often hear termed 'fake news', there is no obvious profit motive in trying to manipulate people's perceptions on social media or to sow discord in a community about the Black Lives Matter movement, for instance. That is much more state-sponsored activity. Some of the groups doing that will be paid, but they will be paid by a government that has a political, or geopolitical, motive for doing so.

Then we also have things like attacks on critical infrastructure. This takes us into the whole military and disarmament sphere. It's almost like a spectrum. At one end you've got the state-sponsored attacks that are all about crippling critical infrastructure or testing whether you have the power to disable, for instance, a country's electricity grid. At the other end you've got the hacker in their bedroom seeing, for the fun of it, whether they can get into a system or a network, perhaps getting paid to do so. Cybercrime is a strange thing: at one end it's almost like a cottage industry, you go through all of the organised crime networks in the middle, and at the other end it can be state-sponsored activity. Think about the WannaCry attack in 2017, which people always use as a good example of lots of different things. For me, it is an example of how a piece of software like ransomware that is essentially about generating money actually appears to be attributable to groups linked to the North Korean government! So that raises the question of whether the North Korean government was making money out of the NHS being crippled by the WannaCry ransomware attack. It also means you have a lot of unintended targets as well.

I appreciate that is a really confused answer to your question. Cybersecurity itself means really different things to really different people, and I think that's one of the problems: the nebulousness of the term means that trying to work out what someone is talking about when they are talking about cybersecurity is difficult. If you are an analyst working in a company or an organisation trying to defend those networks and protect that data, you actually have no idea most of the time, in the moment at least, where those attacks are coming from and what their motivation is. It took seven months for the FBI to identify who was behind the WannaCry attacks, so most of the people working in cybersecurity don't know on a daily basis whether they are disrupting state-sponsored attacks, or espionage attempts, or whether it's kids in their bedrooms having a go at hacking, or whether it's criminal gangs trying to make money.

Could you talk a bit about your educational background and whether or not you had a career in cybersecurity in mind?

I absolutely didn’t have a specific career in cybersecurity in mind, and actually cybersecurity wasn’t really a career option when I was an undergraduate. I read the greats, I read classics and I got so into it that I decided to do a PhD. My particular area of interest was Roman satire, and I did my PhD on Juvenal and his take on ancient Rome. That sounds like nothing like cybersecurity, but it was only really ten or twenty years down the line when somebody asked me about my career that I realised that I had spent the last twenty years working on security issues from one perspective or another. My particular interest in classical literature was all about insecurity and urban environments. Juvenal’s third satire is all about the dangers of living in contemporary Rome; it is satire in the sense that we understand it now, it just happens to be in Latin hexameter verse. It is pretty much like picking up a copy of private eye just in a slightly different format and different language. All the same human preoccupations with safety and security are there; security from fire, from the ‘dreaded immigrants’, women taking over masculine roles of society. Even though I transitioned after my PhD to work in law enforcement, working firstly in local police force in the UK, then the predecessor of the national crime agency, then Europol, then Facebook, I was still focused on safety, security and the criminal justice response.

From what you just described, it sounds as though your career trajectory was primarily about following your interests through academia and then security work, rather than starting out with a specific career in mind. Would you say that is a fair assessment?

I think that’s a very fair assessment. I had no grand plan, and I certainly had no understanding that I would end up working on cyber security issues when I was an undergraduate. I also try as hard as possible to resist some of the notions that we see in the cybersecurity industry that you need a computer science degree. Computer science degrees are great for software engineering, but they’re not necessarily great for communicating. For example, communicating what is the most important threat to a board of non-executives at a big corporation or being able to translate a cyber threat into the appropriate policy response at a government level. So cyber security needs communicators, it needs people who understand how to construct an argument, how to deconstruct an argument, how to critique an argument. I would argue that we absolutely still need humanities students and linguists in cyber security. The more that we confine cybersecurity to computer scientists the more we are only going to have a single faceted response to what is essentially a very human problem.

That’s very interesting, I think for a lot of people they will see the prefix ‘cyber’ and they will immediately dismiss the profession, because they will see it as STEM exclusive, whereas it’s apparent from what you said that cybersecurity as an ‘overarching’ industry is a lot more dynamic and not necessarily STEM exclusive.

You mentioned the 'cyber' prefix, and this is super important; it's actually an entire chapter of my book. The way we represent cybersecurity is so hugely disempowering and alienating for ordinary people. This is the rhetorical aspect of it: we use words like 'dark' to describe certain places on the internet and on the web. That immediately makes it somewhere that naughty schoolboys want to go and everyone else thinks is really shady and should be avoided. Cyber issues are consistently represented as issues that are beyond our knowledge, control and capability, and that someone else needs to deal with for us.

There is also this lingo of 'cyber', and, in reality, 'cyber' is a made-up word in itself. It comes from popular science fiction; it doesn't really mean anything of itself. But it then gets added to things like bullying, and it somehow makes bullying feel worse, but more remote and more anonymous. Others include cyber-harassment and cybercrime, and you are left with all these made-up words and acronyms that only the cybersecurity industry uses. I think that shuts ordinary people out. That's not to dismiss ordinary people; these are citizens with voting rights and decision-making power, and it shuts them out of understanding it. I argue in my book that this doesn't have to be the only way we do this. There are different ways of depicting cybersecurity that actually empower people to protect themselves and their families, and I would love to see more of that.

Moving on to your current research, which employs a comparative approach to chart similarities between security rhetoric in the ancient world and today: you cover how cybercrime is politicised. But why? What is the motivation behind this, and why is fear in particular seen as a useful tool?

It depends who you are. I think we have to be totally honest, and I may be doing myself out of a job in the future, but if you work in the cybersecurity industry and you are a vendor, selling protection products to other people, it does no harm to present a hyperbolic representation of the threat. The overriding message is 'you can't cope with this, let us cope with it for you'. That said, particularly in the context of the COVID pandemic, that kind of fear and doubt rhetoric might not be appropriate anymore for the cybersecurity industry. It might also be less effective, because it's just part of the general noise of fear and uncertainty at the moment. For politicians there are all sorts of things going on.

In terms of where we are with US security rhetoric, what has been really interesting for me to see is the extent to which there is consistency across the millennia, and the interdisciplinary aspect to it. In my book I wanted to be able to show people verbal and rhetorical effects within the original Latin, so that has meant publishing the Latin alongside the English. The reason I mention that specifically is that the emperor Augustus, in his account of his own achievements, uses a lot of the language of restoration. A lot of the language from this work, which was etched into stone monuments all across the empire, is about making Rome safe and secure again. His language finds clear verbal echoes in Donald Trump. What Trump does in the 2017 security strategy is so resonant of Augustus' recounting of his own achievements in his reign. It's that idea of constructing a personal security cult. What Trump shows is that threat inflation has at least a 2,000-year history. Trump's 2017 security strategy is a masterpiece in how to present a very pithy report card on your own achievements, while also showing how terrifying the world is. It's that initial sensitisation to fear through hyperbole, while saying 'don't worry, I've fixed everything, I've made America great and secure again'.

Finally, you have presented security rhetoric as a marketing technique in the cybersecurity industry, and national security rhetoric as highly politicised. An obvious question to then ask is whether the language surrounding cybersecurity is entirely unjustified.

The main purpose of my research is to highlight to people who might not work in the security sphere or might not be specialists, or to remind them if they are specialists, that utterances and public statements on security are very rarely just neutral statements of fact. I think that's one thing we have lost sight of in all of the panic about fake news and disinformation. All information that we receive is packaged before we receive it, so objective fact is a very difficult thing to find these days. So my aim is to give people the tools to recognise when a metaphor is being used, or when someone is appealing to their emotion rather than their logic or their ethical sense of what is right. At least if you can spot when somebody is doing that, you can identify when somebody might be piling on the fear, rather than just presenting you with the story as it is.

It’s not necessarily to discredit all representations of security issues as threatening because there are such things as security threats, and they definitely exist. But by calling out very specific representation, we can start to peel away those various layers of artifice to get down to the facts and what is actually happening. There was a particular debate in 2014 debating the findings of the committee looking into the murder of the soldier Lee Rigby and in this particular exchange between David Cameron and Jack Straw, the language that is used to represent tech company executives represents them as ideologically opposed to modern democracy, effectively painting them as terrorists. Once you know that people do that, you start to see it elsewhere. I started to see it in Mike Pompeo’s descriptions of China’s tech policy. He gave a speech quite recently where he was using words like ‘tyranny’ and ‘freedom loving’, that are listed straight from the Bush doctrine, advocating and justifying war in Iraq and Afghanistan. These things get recycled and they get applied to different contexts. Once you know that these rhetorical cycles and tropes are being used it gives you an opportunity to question how these people are seeking to persuade their audiences of a particular point of view. It’s just giving people an opportunity to see that to reach the neutral representation of facts around security concerns you often need to dig under layers of artifice rhetoric.

Dr Baines’ upcoming book, ‘Rhetoric of InSecurity: The Language of Danger, Fear and Safety in National and International Contexts – Law, Language and Communication’, can be pre ordered here.


By Bożena Fanner Brzezina.
