CISOs can no longer ignore disruptive effects of AI
AI security not well understood by cybersecurity teams.
AI is disrupting everything, including cybersecurity. And CISOs can no longer get away with ignoring it.
Unfortunately, AI security is not well understood by cybersecurity teams, added Prof Chien Siang Yu, in a standing room-only presentation at GovWare this afternoon.
Prof Yu, who cofounded GovWare, has an illustrious career in Mindef and MHA before joining the commercial sector. Today, he is the CITO of Amaris AI.
The AI cyber problem
Some of the threats posed by AI:
- Weaponisation of AI.
- Deepfake deception attacks.
- AI vulnerabilities in critical infrastructure.
A basic example would be the "patch" visual adversarial attack where an AI-defeating design displayed on an iPad can thwart human-detecting AI algorithms.
Perfect to render oneself "invisible" to AI systems. And such attacks were developed over 3 years ago, notes Prof Yu.
AI security legislation
𝟮/ 𝗔𝗜 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗹𝗲𝗴𝗶𝘀𝗹𝗮𝘁𝗶𝗼𝗻
One unusual development of generative AI is how legislations are emerging faster than standards, before industry standards were even created.
This underscores the seriousness of AI threats, says the professor.
Some AI regulations:
- EU AI Act.
- US California AI Act (SB1047).
- US Presidential Directive on use of AI (2023).
- Singapore Guidelines on securing AI systems*.
Fun fact. Did you know the penalties specified by the EU AI Act is higher than that specified for breaching the GPDR?
*Released yesterday by the Cyber Security Agency of Singapore (CSA).
CISOs can no longer ignore AI
In a nutshell, CISOs can no longer ignore AI but must pay attention to addressing AI-based threats.
Other pointers from his presentation:
- Use AI to remake coding.
- Attack is easy; defence is hard.
- Use AI as driver of innovation to boost CyberOps.
Finally, testing AI systems is complicated due to the sheer number of techniques available.
LLMs could be susceptible to attacks such as:
- Inference.
- Obfuscation.
- Code injection.
- Morality issues.
- Payload splitting.
- Needle in haystack.
- ... many more.
His advice?
Perform red teaming to poke holes into existing AI systems - then move to fix detected vulnerabilities.
How are you enjoying GovWare 2024 so far?