There are growing concerns about 'Mythos,' the latest model from the US AI company Anthropic. The model is reported to be unusually good at finding security vulnerabilities and able to work out attack paths on its own, without human guidance. According to the report, it even uncovered an operating-system flaw that experts had missed for 27 years, at a cost of roughly 50 dollars. This has fueled warnings that critical facilities such as power grids, water systems, and banking systems could become targets. Citing these risks, Anthropic did not release Mythos to the public. Instead, it launched a program that grants limited access to about 40 organizations, including Apple and Google, to stress-test their defense systems. JTBC reported that, with the whole world on edge, the Korean government has also begun an emergency response.
What makes Mythos scary is not its 'words' but its 'actions'
Your first reaction may well be skepticism: people have declared 'AI is dangerous' many times before, so what is different this time? But the core of the Mythos controversy is not a chatbot that gives good answers. It is an agent-type AI that, when given a goal, acts by chaining several hacking steps together.
A typical conversational AI is good at answering questions, writing text, and explaining code. But reports and explainers about Mythos emphasize capabilities like vulnerability detection (finding weak points in a system), attack-path reasoning (calculating where to enter and how an intrusion will spread), and choosing another route when a step fails. Put simply, it looks less like 'a human hacker's notepad' and more like 'a chess player that keeps choosing the next move.'
That is why even strong words like 'cyberterror weapon' are coming up. The danger is not that AI suddenly turned evil, but that a hacking process that used to be hard, expensive, and slow can now be pushed through far more cheaply and quickly. This change is less a horror-movie plot than a sudden widening of the speed gap in security.
The core of the worry about Mythos is not 'AI is smart' but 'it keeps choosing steps by itself.'
In other words, this is the moment the risk category shifts from 'harmful answers' to 'real attack automation.'
What is different if you put a normal chatbot and Mythos side by side?
| Category | Normal chatbot | Autonomous hacking AI like Mythos |
|---|---|---|
| Main purpose | Answering questions, summarizing, writing, code help | Vulnerability detection, attack path calculation, intrusion process automation |
| How it works | Reactive type that responds to user input | Agent type that connects multiple steps when given a goal |
| Failure response | If you ask again, it answers again | If it gets blocked, it can find another path and change the next action |
| Main risks | Generating harmful information, wrong answers | Faster attacks on real systems, lower barrier to entry |
| Deployment method | Many public services | Limited release, focused on selected partner testing |
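The 'agent type' and 'failure response' rows above can be condensed into a loop: given a goal, the agent keeps picking the next action, and a blocked path triggers a fallback. The sketch below is purely illustrative; the action names, scores, and the `agent_loop` function are all invented for this article and carry no real attack logic.

```python
# Minimal sketch of the difference between a reactive model and an agent loop.
# Purely illustrative: "actions" are abstract labels with made-up scores,
# not real operations of any kind.

def reactive(question: str) -> str:
    # A reactive chatbot: one input, one output, no follow-up of its own.
    return f"answer to: {question}"

def agent_loop(actions, max_steps=10):
    """Keep choosing the most promising remaining action until one reaches the goal."""
    history = []
    remaining = list(actions)
    while remaining and len(history) < max_steps:
        # Pick the option currently judged most promising (highest score).
        action = max(remaining, key=lambda a: a["score"])
        remaining.remove(action)
        history.append(action["name"])
        if action["reaches_goal"]:
            return history  # goal reached
        # Otherwise the loop continues: a failed step triggers the next option.
    return history

plan = agent_loop([
    {"name": "path_a", "score": 0.9, "reaches_goal": False},  # tried first, fails
    {"name": "path_b", "score": 0.6, "reaches_goal": True},   # fallback succeeds
])
print(plan)  # ['path_a', 'path_b']
```

The point of the toy is the shape, not the content: the reactive function returns once, while the loop carries state between steps and reacts to failure, which is exactly the property the table attributes to agent-type systems.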
How 'finding the attack path by itself' actually works in hacking
Hacking usually does not succeed in one try. It is a process of opening several doors one after another, and an autonomous AI is one that tries to plan that sequence by itself.
Step 1: Reconnaissance
First, it collects system information. It checks what operating system is used, what services are open, and which devices are old. It is easy to think of this as the stage where a human hacker opens a map.
Step 2: Environment modeling
Based on the collected information, it makes a guess like 'what weaknesses might be here?' This is where ideas like attack graph (a map showing possible intrusion paths) come in.
Step 3: Create possible paths
It does not look at just one vulnerability. It builds several paths that run from initial intrusion through privilege escalation, credential theft, and lateral movement. In hacking, this 'chaining' is the truly dangerous part.
Step 4: Choose the next action
It picks the next move with the highest chance of success. If existing automation tools are closer to 'run a fixed button,' an autonomous agent tries to go further to 'look at the situation and choose the next button.'
Step 5: Detour when failed
If one path is blocked, it does not simply stop; it can recalculate another route. That is why people say the bigger risk comes from lateral spread inside a network rather than from any single vulnerability.
Step 6: Reach the goal
In the end, it moves toward goals like data theft, service shutdown, and securing control. So it is understood less like a 'tool' and more like 'an attack operator that moves partly by itself'.
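The six steps above are what security researchers model with an attack graph. The toy below sketches the idea under heavy simplification: nodes are abstract states, edges are possible transitions, and when the defender 'patches' a node, the search reroutes. Every node name here is invented; this is a defensive-analysis illustration, not a description of Mythos itself.

```python
from collections import deque

# Toy attack graph: nodes are abstract intrusion states, edges are possible
# transitions. All names are invented for illustration.
GRAPH = {
    "recon":       ["foothold"],
    "foothold":    ["escalate", "lateral"],
    "escalate":    ["credentials"],
    "lateral":     ["credentials"],
    "credentials": ["goal"],
    "goal":        [],
}

def find_path(graph, start, goal, blocked=frozenset()):
    """BFS from start to goal, skipping edges into 'blocked' nodes (e.g. patched hosts)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route left: the defender has cut every path

print(find_path(GRAPH, "recon", "goal"))
# If 'escalate' is patched, the search reroutes through 'lateral':
print(find_path(GRAPH, "recon", "goal", blocked={"escalate"}))
```

Defenders use the same model in reverse: the question becomes which small set of nodes to block so that `find_path` returns `None`, which is why patch prioritization matters more than patch volume.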
Vulnerabilities keep pouring out, so why are defenders falling further behind?
The comparison below shows how far the 'speed of discovery' and the 'speed of analysis and response' have drifted apart. The wider that gap, the heavier the operational burden.
Attackers only need to find one place, but defenders have to protect everything
| Category | Changes on the attacker side | Burden on the defender side |
|---|---|---|
| Finding vulnerabilities | With AI, even weaknesses hidden for a long time can be found faster and more cheaply | The list to sort what is truly dangerous grows quickly |
| Success condition | If just one place is breached, it can move to the next stage | You need to keep checking all assets and set priorities |
| Patch speed | Attackers only need to move first before the patch | Defenders have to finish fixing, deploying, and verifying |
| Vulnerable targets | It is easy to target legacy equipment, OT, embedded systems, and devices that cannot be patched | Systems that cannot be stopped are especially hard to replace and apply security patches to |
| Result | Lower barrier to entry for intrusion | Security operations shift from a problem of 'finding' to a problem of 'processing bottlenecks' |
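The 'processing bottleneck' in the last row can be made concrete with a little arithmetic: when findings arrive faster than analysts can triage them, the backlog grows without bound, and no amount of overtime catches up. The rates below are invented for illustration only.

```python
def backlog_after(days, arrivals_per_day, triage_per_day, start=0):
    """Simulate a triage queue: each day new findings arrive and some are processed."""
    backlog = start
    for _ in range(days):
        # The queue cannot go negative: idle capacity is simply wasted.
        backlog = max(0, backlog + arrivals_per_day - triage_per_day)
    return backlog

# Invented rates: 120 new findings/day against capacity to triage 100/day.
print(backlog_after(days=30, arrivals_per_day=120, triage_per_day=100))   # 600
# Raising capacity above the arrival rate flips the dynamic: the queue drains.
print(backlog_after(days=30, arrivals_per_day=120, triage_per_day=200, start=600))  # 0
```

This is why the table frames the shift as 'finding' versus 'processing bottlenecks': once arrivals exceed capacity, the only levers are prioritization and automation of the triage itself.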
There is a reason why the power grid, water, and banks are always mentioned first
These areas are frightening not because of movie-style imagination. Real events have repeatedly shown that digital attacks can shut down social functions.
1960s: Spread of SCADA
SCADA (remote control system for industrial facilities), which remotely monitors and controls wide facilities like electricity, water, and gas, spread widely. At that time, the design philosophy was closer to stable operation and efficiency than to security.
1996: The concept of critical infrastructure became institutionalized
US Executive Order 13010 grouped electricity, finance, water, and transportation together as critical infrastructure the state must protect. Policy-wise, it is the starting point for why these sectors are always named first.
2010: The shock of Stuxnet
Stuxnet (malware targeting industrial control systems) showed that it could do more than steal information inside computers; it could also damage real physical facilities. It was a turning point showing that cyber attacks can affect machines in the real world.
2015: Ukraine power grid hacking
A real blackout happened. It became clear that if the power grid is hacked, it can shake the functioning of all society, not just cause inconvenience.
2021: Colonial Pipeline incident
An attack on a business IT network led to disruption in fuel supply. It showed how closely IT and OT, meaning office computer networks and operational facility networks, are connected in actual operations.
2020s: Resilience-centered security
Now people think it is not enough to only 'prevent breaches.' The key has become how quickly you recover and keep social functions running even after an attack, that is, resilience.
Why are infrastructure systems weak in the first place? Comparing vulnerabilities of the power grid, water, and banks
| Sector | Why it is mentioned first | Why it is vulnerable |
|---|---|---|
| Power grid | If there is a blackout, it causes chain impacts on industry, transportation, and communication | Dependence on industrial control systems, old equipment, and the need for nonstop operation |
| Water system | Directly connected to public health and daily survival | Remote control equipment, aging field equipment, and difficulty of replacement and inspection |
| Banks and finance | If payments and fund movement are shaken, economic anxiety spreads right away | Legacy computer networks and modern services are mixed together, and connectivity is high |
So why do they show this kind of AI only to some companies?
| Item | Open weight · broad public release | API · limited release to selected partners |
|---|---|---|
| Strengths | Research validation, spread of innovation, wider access | Easy to track use, limit speed, restrict accounts, and reflect updates |
| Weaknesses | Once it is released, recall and control are almost impossible | Power concentrated in companies, possible unclear standards |
| Release decision standard | Focus on openness and ecosystem growth | Put more focus on risk review like cyber abuse, autonomous actions, and bypassing safety devices |
| Mythos context | If released to the public, the impact of misuse can be too big | An approach to test first with selected companies and check the defense system first |
What the government really needs is not an 'AI ban' but an overhaul of response speed
Even if this kind of threat comes, it does not end with one law. Real response needs several tracks moving at the same time.
Step 1: Redo threat classification
We need to add AI features into the existing cyber threat framework. New items like model security, training data protection, and AI incident reporting need to be managed separately.
Step 2: Fix laws and procurement standards
The government needs to embed the 'secure by design' principle (building in security from the start) into procurement and regulation. To make it stick, security requirements must cover the full cycle of development, release, operation, and decommissioning.
Step 3: Upgrade the public-private information sharing system
If AI finds vulnerabilities faster, companies and the government also need to share incident information and signs faster. Late sharing can quickly lead to wider damage.
Step 4: Use AI to raise detection and response speed
If attackers use AI, the defense side also needs to strengthen AI-based analysis and automatic response. With only people, it is getting hard to keep up with a speed battle that moves by the minute.
Step 5: Diplomacy and international cooperation
There are already warnings that state actors use AI for information warfare and cyberattacks. So diplomacy is becoming important so allied countries can align evaluation standards, incident information, and common rules.
Step 6: People and training
Lastly, the part that is easy to miss is the people issue. AI security does not end just by buying tools. You need people who can actually run them, verify them, and reduce wrong judgment.
Defense technology turning into attack technology is actually not something new
Even if Mythos looks unfamiliar, in the big picture it is not a completely new story. Security technology has been dual-use (the same capability serving both defense and attack) from the start.
Ancient to modern: cryptography was originally state technology
Cryptography developed first as a technology for protecting military and diplomatic secrets, long before personal messenger apps. Defense and information warfare were intertwined from the beginning.
Early 20th century: cipher machines and codebreaking war
The same cipher system became a shield on one side and a target that had to be broken on the other side. The wartime competition in cipher machines shows the two sides of security technology very clearly.
1970s: expansion of public key cryptography
Security technology spread beyond the military area into civilian networks and commercial services. As good uses grew, the sensitivity as a strategic technology also grew together.
1990s: Crypto Wars
Strong encryption was seen as a way to protect citizens, but at the same time, it also made state control harder. This is a case where a technical issue quickly became a political and system issue.
2000s to now: abuse of red team tools
Tools like Cobalt Strike and Metasploit were originally for penetration testing and defense training. But real attackers also used the same tools. The Mythos controversy can be seen as the moment this pattern moves up to the AI stage.
So the Mythos shock is not an 'AI horror movie' but a sign that the security speed race has started
In short, this is the point. It does not mean Mythos is a monster that will destroy the world right away. Public information is still limited, and media reports may also include some exaggeration. But even with that in mind, one clear thing is that the world is moving toward faster speed in finding and linking vulnerabilities.
This change is closer to our daily life than you may think. If electricity stops, water supply becomes unstable, bank systems stop, and old systems in companies or hospitals cannot keep up with patches, in the end, ordinary people are the ones who suffer inconvenience. So this issue is not just 'news inside the AI industry' but also a story about the rising maintenance cost of daily infrastructure.
From now on, the important question will be closer to 'who can adapt faster' than 'can we stop AI.' Attackers gain speed, and defenders need to reduce bottlenecks. It is most realistic to read the Mythos shock as exactly that starting signal.
The core of the Mythos controversy is not fear of AI itself, but the rising speed of hacking automation.
So the solution is also closer to faster patching, information sharing, resilience, and international cooperation than to bans.