
Fighting AI with AI: Social media, deepfakes, and cybersecurity

Jonathan Jacobi, CTO Office, Dazz

Intro

On April 9th, I had the opportunity to take part in Cybertech Tel Aviv, where I joined the 8200 Alumni Association panel. Our focus? “Social Media Reshaping Modern Warfare.”

The 8200 Alumni Association orchestrated this discussion, gathering former members of Israel’s elite intelligence corps, Unit 8200, to share their unique viewpoints on social media and war. As the panel’s cybersecurity expert and its most recent 8200 alumnus (I finished my service just a few weeks ago!), I delved into the role of cybersecurity within the broader context of social media’s impact on modern warfare.

In this post, I’ll walk you through the panelists’ different perspectives on social media & war, and we’ll see how both cybersecurity and AI play a huge role.

Social Media - A cybersecurity arms race platform 👨‍💻

It’s no surprise that nowadays cybersecurity is another dimension of war.

We’ve all seen headlines about cyberattacks that shape our reality; some good examples are attacks on critical infrastructure (such as healthcare), election interference, malware and ransomware, and many more.

The common denominator for all of them is cybersecurity experts orchestrating and performing the attacks, and the accessibility of knowledge and information on social media plays a huge role in producing those experts.

I think of it as a sort of arms race: a long process that essentially builds more “weapons.” And in this case, those “weapons” are human beings.

How to make a hacker

So how does social media create hackers? Take a look at my own experience, starting from a young age.

Phase 1 - “Video Games” 🎮

Video games have become a major use of free time for today’s youth, who spend countless hours playing everything from Fortnite to Call of Duty. According to one study, as of 2024 more than 618 million gamers worldwide are under 18, and this age group makes up about 20% of all gamers in the US.

A common trend among young gamers is cheating their way to a win. Many kids get involved in the game-hacking scene from a very young age, using tools that other people built and posted online to get better at the game they’re playing, without understanding how the hacks work behind the scenes. Naturally, these young players don’t necessarily realize there are legal or ethical ramifications; on the contrary, their gameplay benefits from learning and applying the hacks, since the hacks help them win. In their minds, cheating = winning = good.

When it comes to game hacking, there are huge online communities on different social media platforms that are easily accessible to everyone.

Phase 2 - Programming 💻

Game hacking is not where it ends!

With the worldwide emphasis on learning to code at an earlier and earlier age and on understanding technology in depth, kids get curious about how game-hacking tools work, which leads them to programming.

It doesn’t come as a surprise that almost everything there is to know about programming is available online and is easily accessible. Close to zero prior knowledge required! This means that kids with even the smallest exposure to the inner workings of technology can learn how to build their own applications or write their own game hacks, and this hands-on experience opens up a new world for them to explore.

Phase 3 - Cybersecurity ⚔️

Cybersecurity is the next logical step after programming, and this is where it gets extra interesting!

Social media platforms and many other publicly available communities play a huge role in the cybersecurity arms race: from YouTubers dedicating their work to cybersecurity education (e.g., LiveOverflow), to hacking competitions (Capture the Flag, or CTFs), to blog posts explaining state-of-the-art 0day vulnerabilities, and so much more. With this influx of readily available, and often celebrated, information, it’s entirely possible to reach a point where one could, for example, find and exploit security vulnerabilities with real-world impact.

My journey was somewhat similar—from video gamer, to eager learner, to cybersecurity enthusiast—and the cybersecurity communities on social media played a huge role. That journey eventually led me to be one of the youngest Microsoft employees, and to actually find critical 0day vulnerabilities with real-world impact. 

Social media - fake news & deepfakes 📰

Along with the evolution from gamer to hacker, our panel discussed how deepfakes and fake news on social media are creating a new warfare dimension.

A problem we’ve been facing for years is fake news: people spreading misinformation and, by doing so, shaping how others perceive reality.

This leads to an unfortunate consequence: What people think often becomes more important than what actually happened.

Worse yet, understanding if something is fake or not just got a whole lot harder with AI.

Picture this: AI writes a post that is completely fake. Then AI produces more posts at a rapid pace to support the original one. Along with the text, AI generates realistic images using deepfake technology. Then comes amplification: social media bots reply to those false posts and boost them.

As you can see, discerning what is and is not real is becoming harder and harder. This problem is serious, and my fellow panelists discussed its various implications and what we think the future holds for tackling it, from laws and regulations to technological solutions.

Where AI meets attackers 🥷

We’ve covered that there’s enough information on security and hacking online for a teenager to simply roam the internet and study their way into becoming a vulnerability researcher. Now imagine how powerful a GenAI model fed with all of that information would be. It benefits attackers, too, in many ways; let’s look at some examples:

AI-assisted exploits - 1day vulnerabilities & CVEs

Finding vulnerabilities from scratch is a very difficult task. But 1days, as we know, are vulnerabilities that have already been fixed by the vendor of the vulnerable product, and that makes them somewhat “public”: the patch, the CVE entry, and often a technical write-up are all out there.

This lowers the difficulty bar for attackers and allows them to create exploits with less effort. I see two main concerning implications with that:

  1. More attackers: If the bar is lowered and attacks require less effort, there will no doubt be more attackers.
  2. More attacks: Attackers will use this public information to build more attacks at scale, since AI can automate different parts of an attack (a minimal sketch of how accessible this public CVE data is follows below).
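
To make the “public information” point concrete, here’s a minimal sketch of pulling recently published CVEs for a product keyword from the public NVD 2.0 REST API. The keyword, the 30-day window, and the use of the `requests` library are my own illustrative assumptions; defenders can use the exact same feed to track what they need to patch.

```python
# Minimal sketch: listing recently published CVEs that mention a product
# keyword, using the public NVD 2.0 REST API.
# Assumptions: `requests` is installed; the keyword and time window are
# arbitrary examples; unauthenticated requests are rate-limited by NVD.
from datetime import datetime, timedelta, timezone

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def recent_cves(keyword: str, days: int = 30) -> list[dict]:
    """Return CVE records mentioning `keyword` published in the last `days` days."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = {
        "keywordSearch": keyword,
        # NVD expects extended ISO-8601 timestamps for the publication window.
        "pubStartDate": start.isoformat(timespec="seconds"),
        "pubEndDate": end.isoformat(timespec="seconds"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return [item["cve"] for item in resp.json().get("vulnerabilities", [])]


if __name__ == "__main__":
    for cve in recent_cves("openssl"):
        descriptions = cve.get("descriptions", [])
        summary = descriptions[0]["value"] if descriptions else ""
        print(cve["id"], "-", summary[:100])
```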

More vulnerabilities…

AI not only lowers the bar for attackers, it also helps them become more sophisticated by assisting with vulnerability research or by automating tasks that attackers used to have to do themselves.

AI introduces many new threats—on social media, in cybersecurity, and elsewhere. Instead of being afraid of it, we should utilize it to help ourselves advance as well. And that’s exactly what we’re doing.

Fighting AI with AI: The Dazz way 

At Dazz, we’ve decided to tackle the rapid growth of AI technology by putting it to work for the good side!

The problem we’re solving at Dazz is a hard one. Remediating vulnerabilities is a complex task, but we’re here to solve it, so we’re using every tool available!

We came up with techniques for applying generative AI models across different parts of our solution, and by doing so we managed to break through some walls that might never have been broken otherwise!
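
We don’t spell out our internals in this post, so the snippet below is only a generic sketch of the broader pattern (feed a known finding to a generative model and have it draft a fix suggestion), not our actual pipeline. It assumes the OpenAI Python SDK, and the model name, prompt, and finding record are hypothetical placeholders.

```python
# Generic illustration only, NOT Dazz's implementation: asking an LLM to
# draft a remediation for a known vulnerability finding. The model name,
# prompt, and `finding` record are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical finding record (Log4Shell used purely as a familiar example).
finding = {
    "cve": "CVE-2021-44228",
    "package": "org.apache.logging.log4j:log4j-core",
    "installed_version": "2.14.1",
    "file": "services/billing/pom.xml",
}

prompt = (
    "You are a remediation assistant. Given this vulnerability finding, "
    "suggest the minimal dependency upgrade and show the exact change to "
    f"make as a short diff-style snippet:\n{finding}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```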

We’re super happy with how we’re using cutting-edge technologies, and we’re always working on the next eureka moment 🙂

Summing it up

Tools can be used for good and they can be used for bad. A knife can be used by a chef to create a gourmet meal, but it can also be used for violence. Social media, too, is a double-edged sword. It’s an inescapably fundamental part of our society, and while it has provided many benefits, many problems have cropped up as a result as well.

Even so, I remain optimistic. I see a future where we, as a society and as human beings, will be able to come up with solutions and make the internet a source of truth, and perhaps AI will have quite a large role to play in that.

We need not be scared of it, but rather learn how to manage it properly and use its capabilities for good.

See Dazz for yourself.

Get a demo