
Navigating Cybersecurity in an AI-Driven World

Payal Bhattar, Associate Editor, Spoon Finland


In the rapidly evolving landscape of artificial intelligence, the spread of AI-driven systems is casting an ominous shadow: data breaches are becoming more complex, more sophisticated, and harder to detect.

Well-known cyber-attacks now include input attacks and poisoning attacks, both of which cause algorithms to yield wrong results. In an input attack, the data fed to an already-trained AI algorithm is manipulated so that it produces a wrong output; in a poisoning attack, the data or process used to train the algorithm is tampered with.
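To make the distinction concrete, here is a minimal, hypothetical sketch in Python (scikit-learn on synthetic data; nothing here comes from the article): a poisoning attack that flips a fraction of training labels before the model is built, followed by an input attack that perturbs a single sample at prediction time without touching the model at all.

```python
# Hypothetical illustration on synthetic data (not from the article).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning attack: flip the labels of 20% of the training set, then train.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("clean test accuracy:   ", clean.score(X_test, y_test))
print("poisoned test accuracy:", poisoned.score(X_test, y_test))

# Input attack: leave the clean model alone and instead perturb one test
# sample just enough to push it across the decision boundary.
x = X_test[0]
margin = clean.decision_function(x.reshape(1, -1))[0]
eps = 1.1 * abs(margin) / np.abs(clean.coef_[0]).sum()
x_adv = x - eps * np.sign(margin) * np.sign(clean.coef_[0])
print("prediction before:", clean.predict(x.reshape(1, -1))[0],
      "after:", clean.predict(x_adv.reshape(1, -1))[0])
```

On typical runs, the randomly poisoned model scores lower on the clean test set, while the small input perturbation flips the clean model's prediction for that one sample.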

“AI-based decisions/recommendations can be biased and, therefore, be unfair. For example, these decisions/recommendations may favour specific groups of individuals compared to others. Also, AI algorithms may breach privacy as they leak information about the data used for training the algorithms,” says Elisa Bertino, Professor of Computer Science at Purdue University.

“Another major risk of AI is that it can be very good at generating data, such as images and text, which look realistic. Therefore, parties can misuse AI to carry out misinformation attacks.”

From social engineering attacks like phishing and CEO fraud to adversarial attacks that corrupt the data of AI models, it is apparent that systems can be broken into and their behaviour replicated, while detection can be delayed, or the attack may go unnoticed altogether.

The weak links

While the risks of AI-driven cyber-attacks are business-specific, sectors like transport, healthcare, and public services are the most vulnerable. An attack on these sectors can be devastating and put human lives at risk.

In business functions, operational technology is the most susceptible to cyber risks. Stefano Zanero, Cybersecurity Professor at Politecnico di Milano, cites an incident in San Francisco where people used traffic cones to immobilise autonomous self-driving vehicles.

“It was an attack where someone understood that the machine learning system developed for self-driving cars stops because it cannot understand the environment anymore. Ultimately, industrial control or any industrial plant where we use machine learning will have the same case.

“Operational technology that uses machine learning for analysing data or anomaly detection is already intrinsically vulnerable to adversarial attacks against these machine learning technologies,” he explains.
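The weakness Zanero describes can be sketched in a few lines. The following hypothetical Python example (synthetic sensor data, with scikit-learn's IsolationForest standing in for whatever detector a plant might actually run; none of this is taken from the article) trains an anomaly detector on normal process readings, then shows how an attacker who can probe the detector nudges a malicious reading, step by step, until it is classified as normal.

```python
# Hypothetical sketch: evading an ML-based anomaly detector with small
# input perturbations, the adversarial weakness Zanero describes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# "Normal" OT sensor readings: two process variables, e.g. temperature
# and pressure, drawn around their usual operating point.
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(1000, 2))
detector = IsolationForest(random_state=0).fit(normal)

# A genuinely anomalous reading, e.g. a tampered setpoint.
anomaly = np.array([[70.0, 2.0]])
print("raw anomaly flagged:", detector.predict(anomaly))  # -1 = anomaly

# Naive evasion: nudge the reading toward the normal operating point in
# small steps until the detector stops flagging it.
x = anomaly.copy()
step = (normal.mean(axis=0) - x) * 0.05
for i in range(40):
    if detector.predict(x)[0] == 1:  # 1 = classified as normal
        print(f"evaded after {i} steps at reading {x[0]}")
        break
    x = x + step
```

The point is not this toy loop but the pattern it shows: a detector whose decisions an attacker can observe can also be probed and walked around.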

The World Economic Forum’s Global Cybersecurity Outlook 2023 report points out that multiple organisations share the same technologies and may have the same weaknesses and dependencies. This means that the impact of cybersecurity incidents can cascade from one organisation to another and possibly spill across borders. These risks, the report states, are potentially systemic, often contagious, and frequently beyond the understanding or control of any single entity.

Within an organisation, too, there is often a gap between business leaders and cybersecurity leaders: conversations tend to focus on the news of a cyber incident rather than on what it means for the business and on how the business can help its cybersecurity leaders manage the response.

Transparency, collaboration and calibration

All this calls for a shift in perspective on cybersecurity. Experts say the focus now needs to move towards more transparency, collaboration, and calibration. To demonstrate this, Zanero recalls the ransomware attack on Norsk Hydro, a Norwegian aluminium manufacturer, a few years ago.

“They went public because they were attacked and gave details of what they did, what worked and what didn’t. That has been so helpful throughout the industry. If we could do that, the industry would improve, and everyone’s part of the changed environment.”

“We only learn collectively, and this is true for companies. We don’t need to make the same mistakes if we can share those details. We should shift to a no-blame culture where the victim of the attack is recognised as the victim, not as the stupid person who got themselves broken into. Because we are going to be the next one.”

To avoid being the next one under cyber threat, and to enable better decision-making, companies must step up their data integrity. Bertino says this requires looking at the problem of AI security from a systemic point of view.

“One wants to make a certain system secure and reliable by including different AI techniques and models and using data from independent sources. Such an approach would enhance the security, timeliness, and coverage of decisions and predictions, for example.”
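As a loose sketch of that systemic idea (the split into three sources, the model choices, and the numbers below are all illustrative assumptions, not Bertino's design), the Python example trains three different model families on three independent data sources and combines them by majority vote, so that the compromise of a single source does not decide the outcome.

```python
# Hedged sketch: diverse models over independent data sources, combined by
# majority vote, so one poisoned source cannot dictate the decision.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=3000, n_features=20, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Three "independent sources": disjoint slices of the training data, each
# feeding a different model family.
slices = np.array_split(np.arange(len(y_train)), 3)
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          GaussianNB()]

# Suppose source 0 was poisoned: all of its labels are flipped.
y_bad = y_train.copy()
y_bad[slices[0]] = 1 - y_bad[slices[0]]

fitted = [m.fit(X_train[s], y_bad[s]) for m, s in zip(models, slices)]

# Majority vote across the independently trained models.
votes = np.stack([m.predict(X_test) for m in fitted])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)

print("poisoned member accuracy:", fitted[0].score(X_test, y_test))
print("majority-vote accuracy:  ", (ensemble_pred == y_test).mean())
```

On typical runs, the member trained on the poisoned source performs badly on its own, but the two untampered models outvote it, keeping the system's decisions largely accurate.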

She concludes that the ultimate goal should not be just the protection of the AI itself: “Rather, it should be to make accurate decisions, forecasts and analyses, and to achieve this goal, we need to think in terms of systems security.”

About the author

Payal Bhattar, Associate Editor at Spoon Finland, has more than 20 years of experience in journalism and content development. She has worked at some of India’s largest media conglomerates, including CNBC TV18 and Bennett Coleman and Company. Her ability to communicate complex business subjects in an engaging way appeals to readers. She loves travelling, reading and cooking.
