SANS: 5 Most Dangerous New Attacks in 2024

A gist of the latest SANS briefing on the five most dangerous new attacks in 2024.

Each year at the RSA Conference, the SANS Institute provides an authoritative briefing on the most dangerous new attack techniques used by threat actors, including cyber criminals and nation-state actors. In 2022, SANS highlighted cloud abuse, MFA bypass attacks and ghost backup attacks among the top threats, while SEO-boosted attacks, malvertising and weaponised artificial intelligence (AI) got the nod in 2023. In each case, SANS surfaced interesting attacks that often receive insufficient coverage in mainstream media and cyber security publications alike.

Source: sans.org

The keynote at RSA Conference 2024 was no different, with SANS capturing the hype around large language models (LLMs) as well as making a surprising pick. Let's take a look at the five most dangerous attacks for 2024.

#1: Security Cost of Technical Debt

Technical debt usually refers to the cost and consequences of choosing a simpler or quicker solution over a more robust, long-term solution when developing software. Over time, it becomes a huge problem, especially when experienced developers leave or companies merge, making it hard to fix or update the code. From a security perspective, this can lead to a poorly secured architecture, overlooked vulnerabilities, outdated dependencies, difficulty with patches/updates, and challenges during incident response.
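
To make the outdated-dependency angle concrete, here is a minimal sketch that checks installed Python packages against the public OSV.dev vulnerability database. The query endpoint is real, but the script is illustrative only; a proper audit would lean on a dedicated tool such as pip-audit.

```python
# Minimal sketch: flag installed packages with known vulnerabilities using the
# public OSV.dev query API. Assumes the `requests` package is available.
from importlib.metadata import distributions
import requests

def known_vulns(name: str, version: str) -> list[str]:
    """Return OSV advisory IDs affecting this exact PyPI package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

if __name__ == "__main__":
    for dist in distributions():
        name, version = dist.metadata["Name"], dist.version
        advisories = known_vulns(name, version)
        if advisories:
            print(f"{name}=={version}: {', '.join(advisories)}")
```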

#2: Digital Identity Verification

In our increasingly complex and interconnected world, verifying digital identities has become more complicated, with repeated headlines about bots and malicious actors easily bypassing security tests like CAPTCHAs. With advances in generative AI making it trivial for bad actors to impersonate real users (see the deepfake CFO incident), the cat-and-mouse game is now veering strongly in favour of Jerry.

#3: Child Sextortion

There has been an alarming rise in the use of AI-generated content (e.g. fake nudes) to coerce teens and young adults into making payments or engaging in illegal activities. Children are spending an inordinate amount of time on TikTok, Instagram and other social media platforms, sharing personal details freely online and greatly aiding would-be extortionists in the process. Without immediate intervention, this threat could snowball quickly, with serious harm to victims.

#4: GenAI Impact on US Elections

Clearly, generative AI is a major theme this year, with nation-state adversaries weaponising deepfakes and AI-generated content to spread misinformation and sow doubt in the minds of citizens. This is bad enough in itself, and it is even more challenging in a US election year. Large social media platforms are tackling the issue in earnest, exploring novel ways of identifying and removing fake content at scale before it goes viral. One thing is for sure: this topic is going to get livelier through the year.
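
As a hypothetical illustration of flagging fake content at scale, the sketch below matches an upload against perceptual hashes of images already labelled as fake. The Pillow and imagehash libraries are real, but the sample hash, file name and threshold are assumptions; platforms combine this kind of matching with provenance signals and detection models.

```python
# Minimal sketch: flag uploads that are near-duplicates of already-identified
# fake images using perceptual hashing. Assumes the Pillow and imagehash packages.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes for content already labelled fake.
KNOWN_FAKE_HASHES = [imagehash.hex_to_hash("d1c4854dcdc4e1d1")]

def looks_like_known_fake(path: str, max_distance: int = 6) -> bool:
    """Return True if the image is within a small Hamming distance of a known fake."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_FAKE_HASHES)

if __name__ == "__main__":
    print(looks_like_known_fake("upload.jpg"))  # hypothetical file name
```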

#5: Offensive Use of GenAI

Cybersecurity was already in a difficult position, with far more open job postings than skilled professionals available to secure existing technologies. Sprinkle some generative AI on top, and suddenly the attackers have superpowers! Identifying vulnerabilities, developing and testing exploits, and sending realistic phishing emails have all been turbocharged with generative AI. While AI can be, and is being, used in defensive security applications, mitigations must also be applied during the development and deployment of the underlying models.
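
On the mitigation side, here is a deliberately simplistic sketch of a deployment-time guardrail that screens model output before returning it to the caller. The `generate` callable stands in for a hypothetical model API and the blocklist is illustrative only; real deployments layer policy models, rate limiting and abuse monitoring on top of filters like this.

```python
# Minimal sketch: a deployment-time guardrail that refuses model outputs
# containing obvious exploit or phishing indicators. Illustrative only.
import re
from typing import Callable

BLOCKLIST = [r"reverse shell", r"keylogger", r"credential harvest"]

def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Call the (hypothetical) generate function and refuse risky outputs."""
    output = generate(prompt)
    if any(re.search(pattern, output, re.IGNORECASE) for pattern in BLOCKLIST):
        return "Request refused by content policy."
    return output

if __name__ == "__main__":
    # Stub model for illustration; a real call would hit an LLM API here.
    print(guarded_generate("write a haiku", lambda p: f"A haiku about {p}"))
```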
