Using AI-powered cyber tools? Read the fine print first

Photo Credit: Unsplash/Mario Dobelmann

CISOs who use AI cybersecurity tools without reading the fine print could be making a deadly mistake.

AI is making its way into every facet of our lives, including the cybersecurity tools that cyber professionals need to do their jobs.

As these tools gain popularity, CISOs and CIOs must carefully review the fine print around any data-sharing agreements, says Damian Leach, CIO of Seaco.

Here's why.

The good and the bad

AI offers tremendous opportunity and is being rolled out quickly for cybersecurity tasks such as:

  • Rapidly summarising log files (see the sketch after this list).
  • Scouring the network for potential weak points.
  • Scanning to ensure company security standards are met.
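
To make the first of these concrete, here is a minimal sketch of log summarisation using an external LLM provider. It assumes the `openai` Python package and an API key in the environment; the model name and log path are purely illustrative, not recommendations.

```python
# Minimal sketch: summarising a log file with an external LLM provider.
# Assumes the `openai` Python package and OPENAI_API_KEY set in the
# environment; model name and path below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_log(path: str, max_chars: int = 20_000) -> str:
    """Summarise the tail of a log file into a short incident brief."""
    with open(path, encoding="utf-8", errors="replace") as f:
        tail = f.read()[-max_chars:]  # cap the payload we send out

    # Note: this ships raw log lines to an external provider -- exactly
    # the kind of data journey the fine print should spell out.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a SOC analyst. Summarise anomalies, "
                        "errors and suspicious patterns in these logs."},
            {"role": "user", "content": tail},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_log("/var/log/auth.log"))
```

Even this toy version sends raw log lines to a third party, which is exactly the kind of data journey discussed further down.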

On the other hand, risks can cascade when AI is used to automate certain deployments and it makes a wrong decision.

And you can count on the bad guys trying to exploit AI models, at a time when we are still relatively new to protecting them from abuse.

Don't use AI for its own sake

For these reasons, Damian cautions against rolling out AI solutions simply for their own sake. Instead:

  • Evaluate the business value of AI.
  • Build MVPs and test use cases first.
  • Run pilots before deployment.

And in case you were wondering, humans are not optional for now.

Damian said: "AI is not going to replace the need for 'human in the loop' decision-making for most companies anytime soon."

Mind the data

But how can CISOs and IT professionals evaluate their AI-powered tools? I'm glad you asked, as this is where it gets interesting.

Damian classifies these solutions as either AI-integrated or AI-infused: AI-infused tools contain their own AI model, while AI-integrated tools rely on an external AI provider.
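
As a rough illustration of that distinction (not of any particular vendor's product), the sketch below contrasts the two shapes in Python. The class names are invented, and the external provider and local model are assumptions made for the example.

```python
# Sketch of the distinction in the article's terms: an "AI-integrated"
# tool calls out to an external provider, while an "AI-infused" tool
# ships its own model. All names here are illustrative.
from abc import ABC, abstractmethod

class LogSummariser(ABC):
    @abstractmethod
    def summarise(self, text: str) -> str: ...

class IntegratedSummariser(LogSummariser):
    """AI-integrated: data leaves your network for an external API."""
    def __init__(self):
        from openai import OpenAI  # third-party provider
        self.client = OpenAI()

    def summarise(self, text: str) -> str:
        resp = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Summarise:\n{text}"}],
        )
        return resp.choices[0].message.content

class InfusedSummariser(LogSummariser):
    """AI-infused: the model runs locally, so raw data stays in-house."""
    def __init__(self):
        from transformers import pipeline  # local Hugging Face model
        self.pipe = pipeline("summarization",
                             model="sshleifer/distilbart-cnn-12-6")

    def summarise(self, text: str) -> str:
        return self.pipe(text, max_length=120, truncation=True)[0]["summary_text"]
```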

For both, businesses must be mindful of their data:

  • What is the data journey?
  • What type of information is shared?
  • Can you opt out of certain AI features for data privacy?

Damian cautioned: "In some instances... vendors don’t just share the patterns to improve AI models but also the raw data to external parties – this is certainly not good practice."
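
The better practice Damian implies, sharing derived patterns rather than raw data, can be sketched in a few lines. Everything below is hypothetical and standard-library only: invented event fields, documentation-range IPs, and a toy aggregation.

```python
# Hedged sketch of "share patterns, not raw data": reduce raw security
# events to de-identified aggregates before anything leaves the network.
import hashlib
from collections import Counter

def to_patterns(events: list[dict]) -> dict:
    """Reduce raw security events to aggregate, de-identified patterns."""
    return {
        # Frequency of event types, with no payloads or usernames attached.
        "event_counts": dict(Counter(e["type"] for e in events)),
        # One-way hashes of source IPs: usable for correlation, hard to reverse.
        "source_hashes": sorted(
            {hashlib.sha256(e["src_ip"].encode()).hexdigest()[:16]
             for e in events}
        ),
    }

raw_events = [
    {"type": "failed_login", "src_ip": "203.0.113.7",  "user": "alice"},
    {"type": "failed_login", "src_ip": "203.0.113.7",  "user": "bob"},
    {"type": "port_scan",    "src_ip": "198.51.100.2", "user": "-"},
]

# Only this aggregate -- not raw_events -- would leave the network.
print(to_patterns(raw_events))
```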

The future of AI

We are still in the early days of generative AI, so expect a new generation of "AI-native" cybersecurity tools that will eventually solve more of these problems.

It was a really enjoyable conversation with Damian, who proved to be a fount of knowledge about both AI and cybersecurity.

And no wonder: I later learned that, under his leadership, Seaco signed the AI Verify pledge to build trustworthy AI.

Are you paying attention to the fine print of the AI-powered tools you use?

Read the full article "AI in Cybersecurity: The Good and the Bad" on GovWare here.