GLTR.life

Living in Korea, Decoded


Has the age of AI hacking really begun? 7 scenes to understand the Mythos shock

Starting from JTBC's report on Mythos, this explainer walks through why autonomous hacking AI looks so dangerous and why governments and companies need to move quickly.

Updated Apr 20, 2026

Concern is growing over 'Mythos,' the latest model from the US AI company Anthropic. The model is reported to be exceptionally strong at finding security vulnerabilities, and to be able to work out attack paths on its own without human help. According to the report, the AI even found an operating-system flaw that experts had missed for 27 years, at a cost of roughly 50 dollars. Warnings followed that critical facilities like power grids, water systems, and banking systems could become targets. Because of these risks, Anthropic did not release Mythos to the public. Instead, it launched a project to test defense systems by granting limited access to about 40 organizations, including Apple and Google. JTBC reported that while the whole world is on edge, the Korean government has also begun an emergency response.

Key points

What makes Mythos scary is not its 'words' but its 'actions'

Your first reaction may be skepticism: people have said 'AI is dangerous' many times before, so what is different this time? But the core of the Mythos controversy is not a chatbot that gives good answers. It is an agent-type AI that, when given a goal, acts by chaining several hacking steps in a row.

A typical conversational AI takes questions, writes text, or explains code. Reports and explainers about Mythos, by contrast, emphasize capabilities like vulnerability detection (finding weak points in a system), attack-path reasoning (calculating where to enter and how an intrusion will spread), and rerouting when a step fails. Simply put, it looks less like 'a human hacker's notepad' and more like 'a chess player that keeps choosing the next move.'

That is why strong phrases like 'cyberterror weapon' keep coming up. The danger is not that AI suddenly turned evil; it is that a hacking process that used to be hard, expensive, and slow can now be pushed through far more cheaply and quickly. This is less a horror-movie plot than a sudden widening of the speed gap between attack and defense.

⚠️Key point in one line

The core of the worry about Mythos is not 'AI is smart' but 'it keeps choosing steps by itself.'

In other words, this is the moment the risk is said to shift from 'harmful answers' to 'real attack automation.'

Comparison

What is different if you put a normal chatbot and Mythos side by side?

| Category | Normal chatbot | Autonomous hacking AI like Mythos |
| --- | --- | --- |
| Main purpose | Answering questions, summarizing, writing, code help | Vulnerability detection, attack-path calculation, intrusion automation |
| How it works | Reactive: responds to user input | Agent: chains multiple steps toward a given goal |
| Failure response | Answers again if you ask again | Finds another path and changes its next action when blocked |
| Main risks | Harmful or wrong answers | Faster attacks on real systems, lower barrier to entry |
| Deployment | Many public services | Limited release, focused on selected partner testing |
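The reactive-versus-agent distinction in the comparison above can be sketched as a tiny control loop. This is a toy illustration only, not any real model's architecture; the function names and the plan steps are hypothetical.

```python
# Toy contrast between a reactive responder and a goal-driven agent loop.
# Purely illustrative; no relation to any real model's internals.

def reactive(question: str) -> str:
    """A reactive system maps one input to one output, then stops."""
    return f"answer to: {question}"

def agent(goal: str, plan: list[str]) -> list[str]:
    """An agent keeps choosing and executing the next step until the
    goal step succeeds -- the loop, not the answer, is the difference."""
    done = []
    for step in plan:
        done.append(step)      # act, observe the result, decide again
        if step == goal:       # stop once the goal is reached
            break
    return done

print(reactive("what is a CVE?"))   # one shot, then it stops
print(agent("pivot", ["scan", "model", "exploit", "pivot"]))
# → ['scan', 'model', 'exploit', 'pivot']
```

The reactive function ends after a single exchange; the agent keeps going until it reaches its goal, which is exactly the property the table calls "agent type."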
Path

The phrase 'finds the attack path by itself' works like this in hacking

Hacking rarely succeeds in one try. It is a process of opening several doors one by one, and an autonomous AI is one that tries to plan that sequence by itself.

1

Step 1: Reconnaissance

First, it collects system information. It checks what operating system is used, what services are open, and which devices are old. It is easy to think of this as the stage where a human hacker opens a map.

2

Step 2: Environment modeling

Based on the collected information, it makes a guess like 'what weaknesses might be here?' This is where ideas like attack graph (a map showing possible intrusion paths) come in.

3

Step 3: Create possible paths

It does not look at only one vulnerability. It creates several paths that continue from initial intrusion to privilege escalation, credential theft, and internal movement. In hacking, the really dangerous part is this 'connection'.

4

Step 4: Choose the next action

It picks the next move with the highest chance of success. If existing automation tools are closer to 'run a fixed button,' an autonomous agent tries to go further to 'look at the situation and choose the next button.'

5

Step 5: Detour when failed

If one path is blocked, it does not simply stop; it can recalculate another route. That is why people say the bigger risk comes from internal spread rather than from any single vulnerability.

6

Step 6: Reach the goal

In the end, it moves toward goals like data theft, service shutdown, and securing control. So it is understood less like a 'tool' and more like 'an attack operator that moves partly by itself'.
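The six steps above amount to a search over an "attack graph." A minimal sketch of the idea, with a made-up graph and made-up success probabilities (every node name and number here is hypothetical, and no real exploit logic is involved):

```python
# Toy attack-graph walk: nodes are abstract states, edges carry a
# made-up success probability. No real systems or exploits involved.

graph = {
    "entry":  [("webapp", 0.6), ("vpn", 0.3)],
    "webapp": [("creds", 0.5)],
    "vpn":    [("creds", 0.4)],
    "creds":  [("admin", 0.7)],
    "admin":  [],
}

def walk(start: str, goal: str, blocked: frozenset = frozenset()) -> list[str]:
    """Greedy walk: at each state, take the highest-probability edge
    that is not blocked; give up at a dead end (a real planner would
    backtrack and re-plan instead)."""
    path, node = [start], start
    while node != goal:
        options = [(p, nxt) for nxt, p in graph[node] if nxt not in blocked]
        if not options:
            return path          # dead end: no usable next move
        _, node = max(options)   # pick the most promising next step
        path.append(node)
    return path

print(walk("entry", "admin"))                       # preferred route
print(walk("entry", "admin", frozenset({"webapp"})))  # detour via vpn
```

The point is only that "pick the best next move, detour when blocked" is ordinary graph search, which is exactly why it automates so well.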

Burden

Vulnerabilities keep pouring out, so why are defenders falling further behind?

The numbers below show how badly the speed of discovery and the speed of analysis and response are mismatched; the bigger the gap, the heavier the operational burden.

- CVE disclosures in 2025: 48,185
- New CVEs per day, on average: 131
- Share of CVEs with full NVD analysis: 28%
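Simple arithmetic makes the mismatch concrete. A back-of-the-envelope sketch using the figures above (applying the 28% analysis share as a rough daily throughput is an assumption, not a statistic from the source):

```python
# Back-of-the-envelope: how fast does the unanalyzed-CVE pile grow?
# The input figures come from the stats above; the rest is arithmetic.

disclosures_2025 = 48_185
per_day = disclosures_2025 / 365   # ~132/day (the source rounds to 131)
analysis_rate = 0.28               # share receiving full analysis

analyzed_per_day = per_day * analysis_rate
backlog_growth_per_day = per_day - analyzed_per_day

print(f"new CVEs/day:        {per_day:.0f}")                 # ~132
print(f"fully analyzed/day:  {analyzed_per_day:.0f}")        # ~37
print(f"backlog growth/day:  {backlog_growth_per_day:.0f}")  # ~95
```

At that rate, roughly 95 vulnerabilities a day enter the queue faster than they can be fully analyzed, which is the "processing bottleneck" the next section describes.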
Structure

Attackers only need to find one place, but defenders have to protect everything

| Category | Change on the attacker side | Burden on the defender side |
| --- | --- | --- |
| Finding vulnerabilities | With AI, even long-hidden weaknesses can be found faster and more cheaply | The list to triage for what is truly dangerous grows quickly |
| Success condition | Breaching just one point is enough to move to the next stage | Every asset must be monitored and prioritized continuously |
| Patch speed | Attackers only have to move before the patch lands | Defenders have to finish fixing, deploying, and verifying |
| Vulnerable targets | Legacy equipment, OT, embedded systems, and unpatchable devices are easy targets | Systems that cannot be stopped are especially hard to replace or patch |
| Result | Lower barrier to entry for intrusion | Security operations shift from a 'finding' problem to a 'processing bottleneck' problem |
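The asymmetry above has a simple probabilistic shape: if each of N entry points can be breached independently with probability p, the defender loses as soon as any single one gives way. A sketch with purely illustrative numbers:

```python
# Attacker/defender asymmetry: with many independent entry points,
# even a small per-point breach probability compounds quickly.
# All numbers here are illustrative, not measurements.

def breach_probability(n_points: int, p_each: float) -> float:
    """P(at least one of n_points is breached) = 1 - (1 - p)^n,
    assuming independent entry points."""
    return 1 - (1 - p_each) ** n_points

# even at a 1% chance per entry point, scale does the attacker's work
for n in (10, 100, 1000):
    print(n, round(breach_probability(n, 0.01), 3))
```

With 1,000 exposed points at a 1% chance each, a breach somewhere becomes a near certainty, which is why "attackers find one place, defenders protect everything" is not just a slogan.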
History

There is a reason why the power grid, water, and banks are always mentioned first

These sectors are frightening not because of movie-style imagination. Real events in history have repeatedly shown that digital attacks can shut down social functions.

1

1960s: Spread of SCADA

SCADA (remote control system for industrial facilities), which remotely monitors and controls wide facilities like electricity, water, and gas, spread widely. At that time, the design philosophy was closer to stable operation and efficiency than to security.

2

1996: The concept of critical infrastructure became institutionalized

US Executive Order EO 13010 grouped electricity, finance, water, and transportation as core infrastructure that the state must protect. In policy terms, it marks the starting point of why these sectors are always named first.

3

2010: The shock of Stuxnet

Stuxnet (malware targeting industrial control systems) showed that it could do more than steal information inside computers; it could also damage real physical facilities. It was a turning point showing that cyber attacks can affect machines in the real world.

4

2015: Ukraine power grid hacking

A real blackout happened. It became clear that if the power grid is hacked, it can shake the functioning of all society, not just cause inconvenience.

5

2021: Colonial Pipeline incident

An attack on a business IT network led to disruption in fuel supply. It showed how closely IT and OT, meaning office computer networks and operational facility networks, are connected in actual operations.

6

2020s: Resilience-centered security

Now people think it is not enough to only 'prevent breaches.' The key has become how quickly you recover and keep social functions running even after an attack, that is, resilience.

Infrastructure

Why are infrastructure systems weak in the first place? Comparing vulnerabilities of the power grid, water, and banks

| Sector | Why it is named first | Why it is vulnerable |
| --- | --- | --- |
| Power grid | A blackout cascades into industry, transportation, and communication | Dependence on industrial control systems, aging equipment, and the need for nonstop operation |
| Water system | Directly tied to public health and daily survival | Remote control equipment, aging field devices, and difficulty of replacement and inspection |
| Banks and finance | If payments and fund transfers wobble, economic anxiety spreads immediately | Legacy networks mixed with modern services, with high interconnectivity |
Control

So why do they show this kind of AI only to some companies?

| Item | Open-weight, broad public release | API, limited release to selected partners |
| --- | --- | --- |
| Strengths | Research validation, spread of innovation, wider access | Easier to track use, rate-limit, restrict accounts, and push updates |
| Weaknesses | Once released, recall and control are nearly impossible | Power concentrates in companies; criteria can be opaque |
| Release criteria | Emphasis on openness and ecosystem growth | Emphasis on risk review: cyber abuse, autonomous action, safety-device bypasses |
| Mythos context | Public release could make the impact of misuse too large | Test first with selected companies and verify the defense system |
Response

What the government really needs to do is not 'ban AI' but reform response speed

A threat like this cannot be handled with a single law. Real response requires several tracks moving at the same time.

1

Step 1: Redo threat classification

We need to add AI features into the existing cyber threat framework. New items like model security, training data protection, and AI incident reporting need to be managed separately.

2

Step 2: Fix laws and procurement standards

The government needs to build the 'secure by design' principle (designed safe from the start) into procurement and regulation. To make it stick, security requirements need to cover the full cycle of development, release, operation, and disposal.

3

Step 3: Upgrade the public-private information sharing system

If AI finds vulnerabilities faster, companies and the government also need to share incident information and signs faster. Late sharing can quickly lead to wider damage.

4

Step 4: Use AI to raise detection and response speed

If attackers use AI, the defense side also needs to strengthen AI-based analysis and automated response. People alone can no longer keep up with a speed battle measured in minutes.

5

Step 5: Diplomacy and international cooperation

There are already warnings that state actors use AI for information warfare and cyberattacks. So diplomacy is becoming important so allied countries can align evaluation standards, incident information, and common rules.

6

Step 6: People and training

Lastly, the easiest part to miss is people. AI security is not solved by buying tools; you need people who can actually run them, verify them, and catch bad calls.
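Step 4 above, AI on the defense side, has one simplified concrete shape: automated baselining that flags event-rate spikes without a human reading every log line. A toy sketch with made-up data and thresholds (real SOC tooling is far richer than this):

```python
# Toy anomaly flagging: compare each window's event count against a
# rolling baseline. Only meant to show why machine-speed triage
# matters when event volumes explode; all data here is invented.

from statistics import mean, stdev

def flag_anomalies(counts: list[int], window: int = 5, k: float = 3.0) -> list[int]:
    """Return indices where the count exceeds the mean + k*stdev of
    the preceding `window` observations."""
    hits = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        threshold = mean(base) + k * (stdev(base) or 1.0)
        if counts[i] > threshold:
            hits.append(i)
    return hits

# login failures per minute; the spike at the end is the "attack"
traffic = [4, 5, 4, 6, 5, 5, 4, 6, 5, 48]
print(flag_anomalies(traffic))  # → [9]
```

A person scanning this log would also spot the spike, but only a machine can do it across millions of lines per minute, which is the whole argument of Step 4.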

Two sides

Defense technology turning into attack technology is actually not something new

Even if Mythos looks unfamiliar, in the big picture it is not a completely new story. Security technology has always carried dual use (good and bad uses together) from the start.

1

Ancient to modern: cryptography was originally state technology

Long before personal messenger apps, cryptography developed as technology to protect military and diplomatic secrets. From the beginning, defense and information warfare were intertwined.

2

Early 20th century: cipher machines and codebreaking war

The same cipher system became a shield on one side and a target that had to be broken on the other side. The wartime competition in cipher machines shows the two sides of security technology very clearly.

3

1970s: expansion of public key cryptography

Security technology spread beyond the military into civilian networks and commercial services. As legitimate uses grew, so did its sensitivity as a strategic technology.

4

1990s: Crypto Wars

Strong encryption was seen as a way to protect citizens, but at the same time, it also made state control harder. This is a case where a technical issue quickly became a political and system issue.

5

2000s to now: abuse of red team tools

Tools like Cobalt Strike and Metasploit were originally for penetration testing and defense training. But real attackers also used the same tools. The Mythos controversy can be seen as the moment this pattern moves up to the AI stage.

Meaning

So the Mythos shock is not an 'AI horror movie' but a sign that the security speed race has started

In short, the point is this: Mythos is not a monster that will destroy the world tomorrow. Public information is still limited, and media reports may contain some exaggeration. But even with those caveats, one thing is clear: the world is moving toward ever-greater speed in finding and chaining vulnerabilities.

This change is closer to our daily life than you may think. If electricity stops, water supply becomes unstable, bank systems stop, and old systems in companies or hospitals cannot keep up with patches, in the end, ordinary people are the ones who suffer inconvenience. So this issue is not just 'news inside the AI industry' but also a story about the rising maintenance cost of daily infrastructure.

From now on, the important question will be closer to 'who can adapt faster' than 'can we stop AI.' Attackers gain speed, and defenders need to reduce bottlenecks. It is most realistic to read the Mythos shock as exactly that starting signal.

ℹ️Conclusion of this article

The core of the Mythos controversy is not fear of AI itself, but the rising speed of hacking automation.

So the solution is also closer to faster patching, information sharing, resilience, and international cooperation than to bans.

We show you how to live in Korea.

Please give GLTR.life lots of love.

