AI-powered IT security seems cool – until you clock miscreants wielding it too

Field both embraced and feared by enterprise


Comment We're hearing more about AI or machine learning being used in security, monitoring, and intrusion-detection systems. But what happens when AI turns bad?

Two interesting themes emerged from separate recent studies: the growth of artificial intelligence, coupled with concerns about its potential impact on security.

A survey of 5,000 IT professionals released late last month revealed three major threats techies believe they will face over the next five years: malicious AI attacks in the form of social engineering, computer-manipulated media content, and data poisoning. Just four in 10 pros quizzed believed their organizations understood how to accurately assess the security of artificially intelligent systems.

That was according to the Information Systems Audit and Control Association's (ISACA) second annual Digital Transformation Barometer, which named AI and machine learning among the top three technologies likely to be deployed in the next year.

They were also listed in the top five technologies likely to face resistance.

Interestingly, ISACA highlighted the gap in perceptions of AI risk between digitally literate business leaders and those who are technically illiterate.

"For AI, having digitally literate leaders correlates to lower perceived risks, which can be key when making the case for deploying technologies," ISACA noted. "33 per cent of companies whose leaders do not possess technological expertise perceive AI to be high-risk, while just 25 per cent of companies with digitally literate leaders perceive AI to be high-risk. Organisations led by digitally literate leaders were almost twice as likely to deploy AI than other organizations (33 per cent compared to 18 per cent)."

When it came to emerging technologies, a decision on whether or not to deploy was found to be largely affected by familiarity. Using AI as an example, 76 per cent of enterprises testing it said that it was worth the risk, with just nine per cent saying it was not. In enterprises that were not testing AI, the confidence in it being worth the risk dropped by a third, while the proportion of respondents who said it is not worth the risk more than doubled.

Rise of the Machines

Are the ISACA members right to be concerned about AI security risks, or does simply understanding a tech make you fear it less?

A paper published earlier this year, titled The New Frontiers of Cybersecurity and backed by the National Natural Science Foundation of China, sided with the former view.

It asserted that machine learning is capable of transforming security by mining information and learning from various types of data – such as spam emails, messages, and videos – and then evolving an autonomous detection or defense system. Continuous self-training steadily improves the performance of AI-powered systems, including their stability, accuracy, efficiency, and scalability. But the same capabilities cut both ways.
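For a concrete flavour of what the paper means, here's a minimal sketch of that sort of learned detection: a toy scikit-learn spam filter. The messages, labels, and choice of model are our own illustrative assumptions, not anything taken from the paper.

```python
# Minimal sketch of an ML-driven spam detector of the kind the paper
# describes. Data and model choice are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",        # spam
    "Urgent: verify your account password",    # spam
    "Meeting moved to 3pm, agenda attached",   # legitimate
    "Quarterly report ready for review",       # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

# Vectorise the raw text and fit the classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Periodically refitting on freshly labelled traffic is the "continuous
# self-training" loop that keeps such a system current.
print(model.predict(["Click to claim your free prize now"]))  # expect: [1]
```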

"AI is pushing the boundaries of the abilities of hackers," the paper noted. "Autonomous hacking machines powered by AI can craft sensitive information and find vulnerabilities in computer systems, thus making it much more difficult to fight hackers. Worse yet, AI is able to learn sensitive information, such as personal preferences, from a vast amount of seemingly insensitive data.

"These facts lead us to believe that hackers weaponized by AI will create more sophisticated and increasingly stealthy automated attacks that will demand effective detection and mitigation techniques."

Knowing AI and not fearing it has its place; understanding it as a tool in the hands of the enemy, however, is also worthwhile. Luckily, so far, miscreants prefer to run relatively simple attacks, usually involving phishing or automated exploitation of known vulnerabilities, rather than train and develop sophisticated machine-learning cyber-weapons. ®

We'll be examining machine learning, artificial intelligence, and data analytics, and what they mean for you, at Minds Mastering Machines in London, between October 15 and 17. Head to the website for the full agenda and ticket information.
