
Cybersecurity's blind spot: the vulnerabilities inherent in AI

Cybersecurity is widely viewed as the biggest strategic challenge facing the United States. Recent headlines only confirm this trend, as every day seems to bring news of a new vulnerability, hack, or breach. Since 2013, the U.S. intelligence community has ranked cybersecurity as the No. 1 threat the country faces in each of its annual worldwide threat assessments. It wasn't until 2021, at the height of a global pandemic, that cybersecurity lost its top spot.

However, there is one major flaw with the commonly accepted wisdom about cybersecurity: it has a blind spot.

Specifically, traditional cybersecurity measures too often fail to address data science methodologies and the vulnerabilities unique to artificial intelligence systems. The policies developed and deployed to secure software systems do not account for the activities that data science and AI give rise to: namely, the need for users and systems to access many large data sets in ways that often do not align with today's cybersecurity fundamentals and implementations. This means that just as emerging technologies such as AI and data analytics are gaining traction, motivating policy after policy touting their benefits, today's software security practices are fundamentally blind to the challenges those technologies create. That is because these technologies require, and receive, broad access to underlying data, and they depend on trustworthy, high-quality data to ensure the accuracy of the resulting algorithms and data science products.

We can't simultaneously have more AI and more security – at least not without significantly adjusting our approach to securing software and data.

The Biden administration's recently issued executive order on improving the nation's cybersecurity is an ambitious and thoughtful attempt to resolve this paradox. However, it contains significant gaps that reflect how often the impact of data science on cybersecurity is overlooked. Ultimately, we need to help the right hand of cybersecurity better understand what the left hand of data science is doing.

Embrace zero trust

How can agencies maintain security in an environment plagued by threat actors? An important response is to adopt a zero trust model – a concept at the heart of the executive order – which requires assuming a breach in almost all scenarios.

What this means in practice is clear in traditional software and systems environments: implement risk-based access controls, ensure least-privileged access is enforced by default, and build resiliency requirements into network architectures to minimize single points of failure.
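As a rough illustration of that deny-by-default posture, the short Python sketch below, with hypothetical roles and dataset names, grants each role only the specific permissions it has been assigned and refuses everything else.

# Minimal sketch of a deny-by-default, least-privilege access check.
# The policy structure, role names and dataset names are hypothetical and
# purely illustrative; a real zero trust deployment would integrate an
# identity provider, per-request risk scoring and continuous verification.

from dataclasses import dataclass

# Each role is granted only the specific actions it needs on specific resources.
POLICY = {
    "analyst": {("read", "claims_2023")},
    "ml_engineer": {("read", "claims_2023"), ("read", "claims_2022")},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    action: str
    resource: str

def is_allowed(request: AccessRequest) -> bool:
    """Deny by default: grant access only if the role explicitly holds the permission."""
    granted = POLICY.get(request.role, set())
    return (request.action, request.resource) in granted

if __name__ == "__main__":
    ok = is_allowed(AccessRequest("alice", "analyst", "read", "claims_2023"))
    denied = is_allowed(AccessRequest("alice", "analyst", "write", "claims_2023"))
    print(ok, denied)  # True False: anything not explicitly granted is refused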

However, the problem is that none of this accounts for data science, which requires broad, ongoing access to data. Data scientists rarely know at the start of an analytics project all the data it will require. Instead, they often need access to all of the data available in order to produce a model that adequately addresses the problem at hand.

So how does zero trust fit into this environment, where the users who build AI systems actively need access to massive amounts of data? The simple answer is that it doesn't. The more complicated answer is that zero trust works for production-ready AI applications and models, but not for the process of training AI.

A new kind of supply chain

The idea that software systems suffer from a supply chain problem is also common wisdom. These systems are complex, and it can be easy to hide or obscure vulnerabilities within that complexity. This is, at least in part, why the executive order places such strong emphasis on managing the supply chain, both the physical hardware and the software that runs on it.

However, the problem is once again one of mismatch. Efforts focused on software security do not translate to data science environments, which depend on data access that, in turn, forms the basis of AI code. While humans painstakingly program traditional software line by line, much of AI is "programmed" by the data it is trained on, creating new vulnerabilities and cybersecurity challenges.
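To make the idea of being "programmed" by data concrete, here is a minimal Python sketch using made-up messages and labels: the code never changes, but injecting a handful of mislabeled training examples quietly flips the model's decision.

# Minimal sketch, with made-up data, of how a model's behavior is determined by
# what it is trained on rather than by hand-written rules: tampering with a few
# training examples silently changes the decisions the same code produces.

from collections import Counter

def train(examples):
    """Count how often each word appears in flagged vs. benign messages."""
    counts = {"flagged": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Label a message by which class its words were seen with more often."""
    words = text.lower().split()
    flagged = sum(model["flagged"][w] for w in words)
    benign = sum(model["benign"][w] for w in words)
    return "flagged" if flagged > benign else "benign"

clean_data = [("reset your password now", "flagged"),
              ("quarterly report attached", "benign")]

# An attacker who can write to the training set never touches the code;
# they only relabel or inject data, and the model's behavior changes.
poisoned_data = clean_data + [("reset your password now", "benign")] * 3

print(classify(train(clean_data), "reset your password now"))     # flagged
print(classify(train(poisoned_data), "reset your password now"))  # benign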

What, then, can be done about these kinds of security issues? The answer, like so much else in the AI world, is to focus on the data. Knowing where data came from, how it was accessed and by whom, and tracking that access in real time are the only long-term ways to monitor and address these evolving vulnerabilities. To ensure software and AI are secure, organizations need to add data tracking to the already complicated supply chain.
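As a rough sketch of what tracking data access might look like, the Python example below (with hypothetical dataset names and an in-memory log standing in for a real audit store) records who touched which dataset, when and for what purpose, so that access can be queried in near real time.

# Minimal sketch of logging who accessed which dataset, when and why, so that
# data access can be audited and monitored. The dataset names and storage are
# hypothetical; a production system would stream these events to a central,
# tamper-evident audit service rather than keep them in memory.

from datetime import datetime, timezone

AUDIT_LOG = []  # in-memory stand-in for a real audit store

def read_dataset(user: str, dataset: str, purpose: str):
    """Record an access event before any data is returned to the caller."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "dataset": dataset,
        "purpose": purpose,
    }
    AUDIT_LOG.append(event)
    # ... here the actual data would be fetched and returned ...
    return event

read_dataset("alice", "claims_2023", "fraud-model retraining")
read_dataset("bob", "claims_2023", "quarterly report")

# Real-time monitoring then becomes a query over the event stream,
# for example every recorded access to a sensitive dataset.
recent = [e for e in AUDIT_LOG if e["dataset"] == "claims_2023"]
print(len(recent), "accesses to claims_2023 recorded")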

A new kind of scale – and urgency

Perhaps most importantly, as AI becomes more widely adopted, I believe cybersecurity vulnerabilities will no longer grow simply in proportion to a system's underlying code base. As we move toward a world in which data itself is code, these vulnerabilities will scale in proportion to the data on which AI systems are trained, meaning threats will grow far faster than the code in the system would suggest. Simply by virtue of the ever-increasing volume of data we generate as we deploy more AI, we create an ever-expanding attack surface.

The good news is that this new AI-powered world will bring enormous opportunities for innovation. The intelligence community will learn more about adversaries in as close to real time as possible. The armed forces will gain a new type of strategic intelligence that reshapes the boundaries of battlefields and increases their speed of response. However, that future is also likely to be plagued by insecurities destined to grow faster than human understanding can keep pace with.

To take cybersecurity seriously, agencies need to recognize and understand how AI creates and exacerbates these vulnerabilities. The same goes for strategic investments in AI. The long-term success of the country's cybersecurity policies will depend on how well they apply to the world of AI.

About the Author

Matthew Carroll is CEO of Immuta.

