Something Big Is Happening: An AI Founder's Warning to Friends and Family

Summary

Matt Shumer, an AI startup founder with six years in the industry, warns that we are in the "this seems overblown" phase of something much bigger than COVID-19. On February 5, 2026, when GPT-5.3 Codex and Claude Opus 4.6 were released, he realized his technical work no longer needs him: he describes an outcome in plain English, walks away for four hours, and returns to a finished product that tested and refined itself. This isn't prediction; it already happened to tech workers, and everyone else is next. AI is now building the next generation of AI (recursive self-improvement has begun), the length of tasks AI can complete autonomously is doubling every 4-7 months, and Dario Amodei predicts that 50% of entry-level white-collar jobs will disappear within one to five years. Shumer's advice: start using AI seriously now (paid tier, $20/month), spend one hour a day experimenting, get your financial house in order, and understand that adaptability, not any specific skill, is the only durable advantage.

The COVID Analogy: We're in the "Before" Phase

Shumer opens with a visceral comparison: February 2020, three weeks before COVID shut down the world. A few people noticed a virus spreading overseas, but most of us weren't paying attention. The stock market was fine, kids were in school, life was normal. If someone told you they were stockpiling toilet paper, you'd think they'd spent too much time on a weird corner of the internet. Then, over three weeks, the entire world changed.

Shumer believes we're in that same "this seems overblown" phase right now, but for something much, much bigger.

He's spent six years building an AI startup and investing in the space. He's writing this for the people in his life who don't live in the AI world—his family, friends, people who keep asking "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. He's been giving them the polite version, the cocktail-party version, because the honest version sounds like he's lost his mind.

But the gap between what he's been saying and what is actually happening has gotten far too big. The people he cares about deserve to hear what is coming, even if it sounds crazy.

Clarification on Agency: Almost No One Controls This

Even though Shumer works in AI, he has almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies—OpenAI, Anthropic, Google DeepMind, and a few others.

A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most people who work in AI are building on top of foundations they didn't lay. They're watching this unfold the same as everyone else—they just happen to be close enough to feel the ground shake first.

Section 1: "This Already Happened to Me First"

The reason so many people in tech are sounding the alarm right now is because this already happened to them. They're not making predictions. They're telling you what already occurred in their own jobs, and warning you that you're next.

For years, AI had been improving steadily. There were big jumps here and there, but they were spaced out enough that you could absorb each one as it came. Then in 2025, new techniques unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last; it was better by a wider margin, and the time between releases was shorter.

The February 5, 2026 Turning Point

Two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI and Claude Opus 4.6 from Anthropic. And something clicked. Not like a light switch—more like the moment you realize the water has been rising around you and is now at your chest.

Shumer is no longer needed for the actual technical work of his job. He describes what he wants built, in plain English, and it just... appears. Not a rough draft he needs to fix. The finished thing. He tells the AI what he wants, walks away from his computer for four hours, and comes back to find the work done. Done well, done better than he would have done it himself, with no corrections needed.

A couple of months ago, he was going back and forth with the AI, guiding it, making edits. Now he just describes the outcome and leaves.

Concrete Example: The App That Built Itself

Shumer gives a detailed example to illustrate what this actually looks like:

"I'll tell the AI: 'I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it.' And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: 'It's ready for you to test.' And when I test it, it's usually perfect."

This is what his Monday looked like this week.
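The workflow in the quote above has a simple algorithmic shape: generate, test the result as a user would, critique it, revise, and stop only when the output meets the model's own bar. Here is a minimal sketch of that loop; every function and prompt in it is hypothetical, mirroring the described behavior rather than any real OpenAI or Anthropic API:

```python
# Sketch of the agentic build-test-refine loop described above.
# `model_call` stands in for an LLM; all names here are hypothetical.

def build(spec: str, model_call, max_rounds: int = 20) -> str:
    """Turn a plain-English spec into an artifact, unattended."""
    artifact = model_call(f"Build this: {spec}")
    for _ in range(max_rounds):
        # The model exercises its own output "the way a person would".
        critique = model_call(f"Test this as a user and critique it:\n{artifact}")
        if critique == "OK":          # its own acceptance bar
            break                     # "It's ready for you to test."
        artifact = model_call(f"Revise to address: {critique}\n---\n{artifact}")
    return artifact

def make_toy_model():
    """Toy stand-in model so the loop runs: accepts on the second test pass."""
    state = {"tests": 0}
    def model(prompt: str) -> str:
        if prompt.startswith("Test"):
            state["tests"] += 1
            return "OK" if state["tests"] >= 2 else "Button X is misaligned"
        return f"app-v{state['tests']}"
    return model

print(build("a todo app", make_toy_model()))  # one revision round, then accepted
```

The real systems are far more elaborate (they run code, click through interfaces, read their own test output), but the stop-when-satisfied loop is the part that changed: the human reviews the final result, not each step.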

But it was GPT-5.3 Codex that shook him the most. It wasn't just executing instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

Why Tech Jobs Changed First

The AI labs made a deliberate choice. They focused on making AI great at writing code first—because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version.

Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. Tech workers' jobs started changing before everyone else's not because they were targeting software engineers—it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else.

The experience that tech workers have had over the past year—of watching AI go from "helpful tool" to "does my job better than I do"—is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service.

Not in ten years. The people building these systems say one to five years. Some say less. And given what Shumer has seen in just the last couple of months, he thinks "less" is more likely.

Section 2: "But I Tried AI and It Wasn't That Good"

Shumer hears this constantly. He understands it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today bear little resemblance to what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall", which has been going on for over a year, is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is judging from an experience in 2024 that is no longer relevant.

The gap between public perception and current reality is now enormous, and that gap is dangerous—because it's preventing people from preparing.

The Free vs. Paid Gap

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone.

The people paying for the best tools, and actually using them daily for real work, know what's coming.

The Lawyer Who Sees It Coming

Shumer has a friend who's a lawyer. Shumer keeps telling him to try using AI at his firm, and the friend keeps finding reasons it won't work: it's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does.

But Shumer has had partners at major law firms reach out to him for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told Shumer it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works.

And he said something that stuck with Shumer: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long—and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.

The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.

Section 3: How Fast This Is Actually Moving

Shumer makes the pace of improvement concrete:

  • 2022: AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.
  • 2023: It could pass the bar exam.
  • 2024: It could write working software and explain graduate-level science.
  • Late 2025: Some of the best engineers in the world said they had handed over most of their coding work to AI.
  • February 5, 2026: New models arrived that made everything before them feel like a different era.

If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

METR's Measurements: Exponential Task Length

There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help.

  • About a year ago: roughly 10 minutes
  • Then: 1 hour
  • Then: several hours
  • Most recent measurement (Claude Opus 4.5, from November): nearly 5 hours

The number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.

Even that measurement hasn't been updated to include the models that just came out this week. In Shumer's experience using them, the jump is extremely significant. He expects the next update to METR's graph to show another major leap.

Extrapolating the Trend

If you extend the trend (and it's held for years with no sign of flattening):

  • Within the next year: AI that can work independently for days
  • Within two years: Weeks
  • Within three years: Month-long projects
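The extrapolation above is just compound doubling. Taking the roughly 5-hour figure from the METR discussion as a baseline and the 4-to-7-month doubling range, a few lines of arithmetic show where the trend lands (illustrative only; these are not METR's own projections):

```python
# Compound-doubling extrapolation from the figures quoted above:
# a ~5-hour autonomous-task horizon, doubling every 4-7 months.

def projected_hours(baseline_hours: float, months_ahead: float,
                    doubling_months: float) -> float:
    """Task horizon after `months_ahead` months of steady doubling."""
    return baseline_hours * 2 ** (months_ahead / doubling_months)

for months in (12, 24, 36):
    slow = projected_hours(5, months, 7)   # 7-month doubling time
    fast = projected_hours(5, months, 4)   # 4-month doubling time
    print(f"{months:>2} months out: {slow:,.0f}-{fast:,.0f} hours of expert work")
```

Even the slower doubling time gives multi-day tasks within a year (roughly 16 to 40 hours), and by year three the slow rate alone reaches about a full working month of effort (~177 hours), with the fast rate in the thousands of hours. That is where the days/weeks/months bullets come from.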

Dario Amodei has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.

Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

Section 4: AI Is Now Building the Next AI (Most Important, Least Understood)

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read that again. The AI helped build itself.

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Anthropic's Confirmation

Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know—the ones building it—believe the process has already started.

Section 5: What This Means for Your Job

Shumer is going to be direct because he thinks you deserve honesty more than comfort.

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative.

Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.

Why This Is Different

This is different from every previous wave of automation. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously.

When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

Specific Examples (Not Exhaustive)

Shumer gives specific examples to make this tangible, but emphasizes this list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.

  • Legal work: AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates.
  • Financial analysis: Building financial models, analyzing data, writing investment memos, generating reports.
  • Writing and content: Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.
  • Software engineering: A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated. There will be far fewer programming roles in a few years than there are today.
  • Medical analysis: Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.
  • Customer service: Genuinely capable AI agents (not the frustrating chatbots of five years ago) are being deployed now, handling complex multi-step problems.

The Myth of "Human Judgment"

A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. Shumer used to say this too. He's not sure he believes it anymore.

The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just which answer is technically correct. A year ago that would have been unthinkable.

Shumer's rule of thumb at this point: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

The Honest Answer

The honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it.

The timeline isn't "someday." It's already started.

Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.

Section 6: What You Should Actually Do

Shumer is not writing this to make you feel helpless. He's writing this because the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

1. Start Using AI Seriously ($20/month)

Sign up for the paid version of Claude or ChatGPT. It's $20 a month.

Two things matter right away:

First: Make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. (Follow @mattshumer_ on X to stay current on which model is best.)

Second, and more important: Don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work.

  • If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client.
  • If you're in finance, give it a messy spreadsheet and ask it to build the model.
  • If you're a manager, paste in your team's quarterly data and ask it to find the story.

The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

Don't assume it can't do something just because it seems too hard. Try it. If it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.

2. This Might Be the Most Important Year of Your Career

Work accordingly. Right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now.

Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.

Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.

3. Get Your Financial House in Order

Shumer is not a financial advisor, and he's not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago.

  • Build up savings if you can
  • Be cautious about taking on new debt that assumes your current income is guaranteed
  • Think about whether your fixed expenses give you flexibility or lock you in
  • Give yourself options if things move faster than you expect

4. Lean Into What's Hardest to Replace

Some things will take longer for AI to displace:

  • Relationships and trust built over years
  • Work that requires physical presence
  • Roles with licensed accountability (roles where someone still has to sign off, take legal responsibility, stand in a courtroom)
  • Industries with heavy regulatory hurdles (where adoption will be slowed by compliance, liability, and institutional inertia)

None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.

5. Rethink What You're Telling Your Kids

The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed.

Shumer is not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about.

Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about.

Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

6. Your Dreams Just Got a Lot Closer

Shumer has spent most of this section talking about threats, so he wants to talk about the other side, because it's just as real.

If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. Shumer does this regularly.

  • If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done.
  • Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month—one that's infinitely patient, available 24/7, and can explain anything at whatever level you need.
  • Knowledge is essentially free now, and the tools to build things are extremely cheap.

Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

7. Build the Habit of Adapting

This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly.

AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself.

Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.

The One-Hour-a-Day Commitment

Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI.

Not passively reading about it. Using it. Every day, try to get it to do something new—something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem.

One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.

Section 7: The Bigger Picture

Shumer has focused on jobs because it's what most directly affects people's lives. But he wants to be honest about the full scope of what's happening, because it goes well beyond work.

Amodei's Thought Experiment: The New Country

Dario Amodei has a thought experiment Shumer can't stop thinking about.

Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface.

What would a national security advisor say?

Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."

He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.

The Upside and Downside

The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself: the researchers building these systems genuinely believe these are solvable within our lifetimes.

The downside, if we get it wrong, is equally real:

  • AI that behaves in ways its creators can't predict or control (this isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests)
  • AI that lowers the barrier for creating biological weapons
  • AI that enables authoritarian governments to build surveillance states that can never be dismantled

The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, Shumer doesn't know.

What Shumer Knows

  • This isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.
  • The next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in his world. It's coming to yours.
  • The people who will come out of this best are the ones who start engaging now—not with fear, but with curiosity and a sense of urgency.
  • You deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it.

We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet. It's about to.


繁體中文總結

Matt Shumer,在 AI 產業工作六年的 startup 創辦人,警告我們正處在「看起來言過其實」的階段,但其實比 COVID-19 更大。2026 年 2 月 5 日,當 GPT-5.3 Codex 和 Claude Opus 4.6 發布時,他意識到他的技術工作已經不再需要他——他用英文描述成果,離開四小時,回來就有完美的成品,而且 AI 自己測試並改進。這不是預測;這已經發生在科技工作者身上,而其他所有人都是下一個。AI 現在正在建構下一代 AI(遞迴自我改進已經開始),能力差距每 4-7 個月翻倍,Dario Amodei 預測 1-5 年內將消除 50% 初級白領工作。Shumer 的建議:立刻開始認真使用 AI(付費版,$20/月),每天花一小時實驗,整頓財務,並理解適應能力——而非特定技能——是唯一持久的優勢。

COVID 類比:我們在「之前」階段

Shumer 用一個生動的比較開場:2020 年 2 月,COVID 關閉世界前三週。 少數人注意到海外有病毒傳播,但大多數人沒在關注。股市很好,孩子在上學,生活正常。如果有人告訴你他們在囤衛生紙,你會認為他們在網路怪角落待太久了。然後,在三週內,整個世界改變了。

Shumer 相信我們現在就在同樣的「看起來言過其實」階段,但這次是針對更大、更大的事情。

他在 AI 新創公司工作並投資這個領域六年。他寫這篇文章是給不生活在 AI 世界的人——他的家人、朋友、不斷問「AI 到底怎麼回事?」卻得到不足以解釋實際情況的答案的人。他一直給他們禮貌版本,雞尾酒會版本,因為誠實版本聽起來像他瘋了。

但他一直在說的和實際發生的事之間的差距已經太大了。他關心的人應該聽到即將發生的事,即使聽起來很瘋狂。

關於代理權的澄清:幾乎沒人控制這個

即使 Shumer 在 AI 產業工作,他對即將發生的事幾乎沒有影響力,產業絕大多數人也是如此。未來正由非常少數人塑造:少數公司的幾百名研究者——OpenAI、Anthropic、Google DeepMind 和其他幾家。

一次訓練運行,由一個小團隊在幾個月內管理,就能產生一個 AI 系統,改變整個技術的軌跡。大多數在 AI 工作的人都在他們沒有奠定的基礎上建構。他們和其他人一樣在看著這展開——只是碰巧夠近,能先感受到地面震動。

第一部分:「這已經先發生在我身上了」

現在這麼多科技業的人在拉警報的原因是這已經發生在他們身上了。 他們不是在做預測。他們在告訴你已經發生在他們自己工作上的事,並警告你:你是下一個。

多年來,AI 一直在穩定進步。這裡那裡有大跳躍,但每次大跳躍間隔足夠長,你可以在它們來時消化。然後在 2025 年,新技術解鎖了快得多的進步速度。然後更快。然後又更快。每個新模型不只比上一個好——它好的幅度更大,而且新模型發布之間的時間更短。

2026 年 2 月 5 日的轉折點

兩個主要 AI 實驗室在同一天發布新模型:OpenAI 的 GPT-5.3 CodexAnthropic 的 Claude Opus 4.6。然後有什麼東西咔嗒一聲。不像燈開關——更像你意識到水一直在你周圍上升,現在到你胸口的那一刻。

Shumer 不再需要做他工作的實際技術工作。 他用英文描述他想建構什麼,然後它就⋯⋯出現了。不是他需要修正的草稿。完成品。 他告訴 AI 他想要什麼,離開電腦四小時,回來發現工作完成了。做得很好,做得比他自己做得更好,不需要修正。

幾個月前,他還在和 AI 來回,引導它,做編輯。現在他只描述成果然後離開。

具體例子:自己建構自己的 App

Shumer 給出詳細例子來說明這實際上是什麼樣子:

「我會告訴 AI:『我想建構這個 app。它應該做什麼,大概應該長什麼樣。搞清楚使用者流程、設計,所有東西。』然後它做了。它寫了數萬行程式碼。然後,這是一年前完全不可思議的部分,它自己打開 app。它點擊按鈕。它測試功能。 它像人一樣使用 app。如果它不喜歡某些東西的樣子或感覺,它自己回去改,靠自己。它迭代,像開發者一樣,修正和改進直到滿意。只有當它決定 app 符合它自己的標準時,它才回來對我說:『準備好給你測試了。』而當我測試時,通常是完美的。」

這就是他這週星期一的樣子。

但讓他最震撼的是 GPT-5.3 Codex。它不只是執行指令。它在做智能決策。 它有某種第一次感覺像判斷。像品味。 那種人們總說 AI 永遠不會有的、知道什麼是正確決定的無法解釋的感覺。這個模型有它,或者足夠接近以至於區別開始不重要了。

為什麼科技工作先改變

AI 實驗室做了刻意的選擇。他們專注於先讓 AI 擅長寫程式——因為建構 AI 需要很多程式碼。如果 AI 能寫那些程式碼,它能幫助建構下一版的自己。 更聰明的版本,寫更好的程式碼,建構更聰明的版本。

讓 AI 擅長寫程式是解鎖其他所有東西的策略。這就是為什麼他們先做這個。科技工作者的工作在其他人之前改變,不是因為他們在針對軟體工程師——這只是他們選擇先瞄準的地方的副作用。

他們現在做到了。而且他們正在轉向其他所有東西。

科技工作者過去一年的經驗——看著 AI 從「有用的工具」變成「做我的工作比我做得更好」——是其他所有人即將擁有的經驗。 法律、金融、醫學、會計、諮詢、寫作、設計、分析、客戶服務。

不是十年後。建構這些系統的人說一到五年。 有些人說更少。而鑑於 Shumer 在過去幾個月看到的,他認為「更少」更可能。

第二部分:「但我試過 AI,沒那麼好啊」

Shumer 不斷聽到這個。他理解,因為這曾經是真的。

如果你在 2023 或 2024 初試過 ChatGPT 並認為「這會編東西」或「這不怎麼令人印象深刻」,你是對的。那些早期版本確實有限。它們會幻覺。它們自信地說胡話。

那是兩年前。在 AI 時間裡,那是遠古歷史。

今天可用的模型和六個月前存在的東西完全認不出來。關於 AI 是「真的在變好」還是「撞牆」的爭論——已經持續一年多——結束了。完了。 任何還在做那個論證的人要麼沒用過現在的模型,有動機淡化正在發生的事,或者基於 2024 年不再相關的經驗在評估。

大眾認知和當前現實之間的差距現在是巨大的,而且這個差距是危險的——因為它在阻止人們準備。

免費 vs 付費差距

問題的一部分是大多數人在用 AI 工具的免費版本。免費版本比付費用戶可用的版本落後一年多。 基於免費版 ChatGPT 評判 AI 就像用翻蓋手機評估智慧型手機的狀態。

為最好的工具付費,並實際每天用它們做真實工作的人,知道即將發生什麼。

看到它來的律師

Shumer 有個朋友是律師。他一直告訴他在事務所試用 AI,而他一直找理由說它不會有用。它不是為他的專業建構的,他測試時它犯錯,它不理解他做的事的細微差別。

但 Shumer 有大型律師事務所的合夥人聯繫他尋求建議,因為他們試過現在的版本,他們看到這要往哪裡走。其中一個,大型事務所的管理合夥人,每天花好幾小時用 AI。 他告訴 Shumer 這就像隨時有一整隊助理可用。 他用它不是因為它是玩具。他用它是因為它有用。

而且他說了某件讓 Shumer 印象深刻的事:每幾個月,它對他的工作就變得明顯更有能力。 他說如果它維持這個軌跡,他預期不久它就能做他做的大部分事——而他是有數十年經驗的管理合夥人。他沒恐慌。但他非常密切關注。

在他們產業領先的人(真正認真實驗的人)不是在忽視這個。他們被它已經能做的事震撼。而且他們相應地定位自己。

第三部分:這實際上移動得多快

Shumer 讓進步的速度變具體:

  • 2022: AI 不能可靠地做基本算術。它會自信地告訴你 7 × 8 = 54。
  • 2023: 它能通過律師資格考試。
  • 2024: 它能寫可運作的軟體並解釋研究所級科學。
  • 2025 年末: 世界上一些最好的工程師說他們已經把大部分寫程式工作交給 AI。
  • 2026 年 2 月 5 日: 新模型到來,讓它們之前的一切感覺像不同時代。

如果你過去幾個月沒試過 AI,今天存在的東西對你會完全認不出來。

METR 的測量:指數級任務長度

有個叫 METR 的組織實際上用資料測量這個。他們追蹤模型能成功端到端完成(不需要人類幫助)的真實世界任務的長度(用它們花人類專家多久時間來測量)。

  • 大約一年前:粗略 10 分鐘
  • 然後:1 小時
  • 然後:幾小時
  • 最近測量(11 月的 Claude Opus 4.5):近 5 小時

這個數字大約每七個月翻倍,最近資料顯示可能加速到每四個月快一次。

甚至那個測量還沒更新以包含這週剛出來的模型。在 Shumer 使用它們的經驗中,跳躍是極度顯著的。他預期 METR 圖表的下次更新會顯示另一次重大飛躍。

延伸趨勢

如果你延伸趨勢(而且它已經保持多年沒有變平的跡象):

  • 明年內: AI 能獨立工作數天
  • 兩年內: 數週
  • 三年內: 數月長的專案

Dario Amodei 說過 AI 模型「在幾乎所有任務上實質上比幾乎所有人類聰明」正朝著 2026 或 2027 年的軌道前進。

讓那個沉澱一下。如果 AI 比大多數博士聰明,你真的認為它不能做大多數辦公室工作?

第四部分:AI 現在正在建構下一個 AI(最重要,最少被理解)

2 月 5 日,OpenAI 發布了 GPT-5.3 Codex。在技術文件中,他們包含了這個:

「GPT-5.3-Codex 是我們第一個對建立自己有幫助的模型。Codex 團隊使用早期版本來除錯它自己的訓練,管理它自己的部署,並診斷測試結果和評估。」

再讀一遍。AI 幫助建構了自己。

這不是關於某天可能發生什麼的預測。這是 OpenAI 告訴你,就現在,他們剛發布的 AI 被用來創造它自己。讓 AI 變更好的主要事情之一是應用於 AI 開發的智能。而 AI 現在足夠智能,能有意義地貢獻於它自己的改進。

Anthropic 的確認

Anthropic 的 CEO Dario Amodei 說 AI 現在在他的公司寫「大部分程式碼」,而且當前 AI 和下一代 AI 之間的回饋迴圈「月復一月地聚集動力」。 他說我們可能「離當前一代 AI 自主建構下一代只有 1-2 年」。

每一代幫助建構下一個,更聰明,更快建構下一個,更聰明。研究者稱這為智能爆炸。 而會知道的人——建構它的人——相信過程已經開始了。

第五部分:這對你的工作意味著什麼

Shumer 要直接說,因為他認為你應該得到誠實多於安慰。

Dario Amodei,可能是 AI 產業最注重安全的 CEO,公開預測 AI 將在一到五年內消除 50% 的初級白領工作。 而產業很多人認為他保守了。

鑑於最新模型能做什麼,大規模破壞的能力可能在今年底就在這裡。它需要一些時間波及經濟,但基礎能力現在正在到來。

為什麼這次不同

這與之前每波自動化都不同。AI 不是在取代一個特定技能。它是認知工作的通用替代品。 它同時在所有事情上變更好。

當工廠自動化時,失業工人可以重新訓練為辦公室工作者。當網路破壞零售時,工作者搬進物流或服務。但 AI 不留下方便的空缺讓你搬進去。無論你重新訓練什麼,它也在那個上面進步。

具體例子(非詳盡)

Shumer 給出具體例子來讓這變得具體,但強調這個清單不是詳盡的。如果你的工作沒在這裡提到,那不代表它安全。幾乎所有知識工作都受影響。

  • 法律工作: AI 已經能閱讀合約、總結判例法、起草訴狀、做法律研究,水平匹敵初級助理。
  • 財務分析: 建構財務模型、分析資料、寫投資備忘錄、生成報告。
  • 寫作和內容: 行銷文案、報告、新聞、技術寫作。品質已經達到許多專業人士無法區分 AI 輸出和人類工作的程度。
  • 軟體工程: 一年前,AI 幾乎寫不了幾行沒錯誤的程式碼。現在它寫數十萬行正確運作的程式碼。工作的大部分已經自動化。幾年後程式設計角色會比今天少得多。
  • 醫學分析: 讀掃描、分析實驗室結果、建議診斷、審查文獻。AI 在幾個領域接近或超越人類表現。
  • 客戶服務: 真正有能力的 AI agents(不是五年前令人沮喪的聊天機器人)現在正被部署,處理複雜的多步驟問題。

「人類判斷」的迷思

很多人在某些東西是安全的想法中找到安慰。AI 能處理苦工但不能取代人類判斷、創造力、策略思考、同理心。Shumer 過去也這樣說。他不確定他還相信這個。

最近的 AI 模型做感覺像判斷的決定。它們展示看起來像品味的東西:什麼是正確決定的直覺感覺,不只是技術上正確的那個。 一年前那會是不可思議的。

Shumer 現在的經驗法則:如果模型今天展示一個能力的哪怕一點暗示,下一代會真正擅長它。 這些東西指數級進步,不是線性的。

誠實的答案

誠實的答案是能在電腦上做的所有事在中期都不安全。 如果你的工作發生在螢幕上(如果你做的核心是透過鍵盤閱讀、寫作、分析、決策、溝通),那麼 AI 正在來搶它的大部分。

時間線不是「某天」。已經開始了。

最終,機器人也會處理實體工作。它們還不完全在那裡。但 AI 術語中的「還不完全在那裡」有種變成「在這裡」的方式,比任何人預期的都快。

第六部分:你應該實際做什麼

Shumer 寫這個不是要讓你感到無助。他寫這個是因為你現在能擁有的最大優勢就是簡單地早。 早理解它。早使用它。早適應。

1. Start Using AI Seriously ($20/month)

Sign up for the paid version of Claude or ChatGPT. It's $20 a month.

Two things matter right away:

First: Make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or model picker and choose the most capable option. Right now that's GPT-5.2 on ChatGPT and Claude Opus 4.6 on Claude, but it changes every few months. (Follow @mattshumer_ on X to stay current on which model is best.)

Second, and more important: Don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what all the fuss is about. Instead, push it into your actual work.

  • If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client.
  • If you're in finance, give it a messy spreadsheet and ask it to build a model.
  • If you're a manager, paste in your team's quarterly data and ask it to find the story.

The people pulling ahead aren't using AI casually. They are actively hunting for ways to automate the parts of their job that used to take hours. Start with whatever you spend the most time on and see what happens.

Don't assume it can't do something because it seems too hard. Try it. If it's somewhat useful today, you can be almost certain it will do the job nearly perfectly in six months. The trajectory only moves in one direction.

2. This May Be the Most Important Year of Your Career

Act accordingly. Right now there's a brief window in which most people at most companies are still ignoring this. The person who walks into a meeting and says "I did this analysis with AI in an hour instead of three days" is the most valuable person in the room. Not eventually. Right now.

Learn the tools. Get fluent. Show what's possible. If you're early enough, this is how you get promoted: by being the person who understands what's coming and can show everyone else how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.

Don't let ego get in the way. That managing partner at the law firm isn't too proud to spend hours a day with AI. He does it precisely because he's senior enough to understand the stakes. The people who will struggle most are the ones who refuse to engage: who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It isn't. No field is.

3. Get Your Financial House in Order

Shumer isn't a financial advisor, and he's not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago.

  • Build savings if you can
  • Be cautious about taking on new debt that assumes your current income is guaranteed
  • Think about whether your fixed expenses give you flexibility or lock you in
  • Give yourself options in case things move faster than you expect

4. Lean Into What's Hardest to Replace

Some things will take AI longer to replace:

  • Relationships and trust built over years
  • Work that requires physical presence
  • Roles with licensed accountability (jobs where a human still has to sign off, carry legal liability, or stand in a courtroom)
  • Industries with heavy regulatory barriers (where adoption will be slowed by compliance, liability, and institutional inertia)

None of these is a permanent shield. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt rather than to pretend this isn't happening.

5. Rethink What You Tell Your Kids

The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the most exposed roles.

Shumer isn't saying education doesn't matter. But the most important thing for the next generation is learning how to work with these tools and pursuing what they're genuinely passionate about.

No one knows exactly what the job market will look like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI for things they actually care about.

Teach your kids to be builders and learners, rather than optimizing for a career path that may not exist by the time they graduate.

6. Your Dreams Just Got a Lot Closer

Shumer has spent most of this piece on the threat, so he wants to talk about the other side, because it's just as real.

If you've ever wanted to build something but lacked the technical skills or the money to hire someone, that barrier has largely disappeared. You can describe an app to AI and have a working version within the hour. Shumer does this regularly.

  • If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to finish it.
  • Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month: one with infinite patience, available 24/7, able to explain anything at whatever level you need.
  • Knowledge is now essentially free. The tools to build things are now extremely cheap.

Whatever you've been putting off because it felt too hard, too expensive, or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are being disrupted, the person who spends a year building something they love may end up better positioned than the person who spends that year clinging to a job description.

7. Build the Habit of Adapting

This may be the most important one. Specific tools matter less than the muscle of learning new tools quickly.

AI will keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people are building right now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones comfortable with the pace of change itself.

Make experimentation a habit. Try new things even when your current setup works. Get comfortable being a beginner over and over. That adaptability is the closest thing to a durable advantage that exists right now.

The One-Hour-a-Day Commitment

Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI.

Not passively reading about it. Using it. Every day, try to get it to do something new: something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem.

One hour a day, every day. If you do that for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost no one is doing this right now. The bar is on the floor.

Part 7: The Bigger Picture

Shumer has focused on jobs because they most directly affect people's lives. But he wants to be honest about the full scope of what's happening, because it goes far beyond work.

Amodei's Thought Experiment: The New Country

Dario Amodei has a thought experiment Shumer can't stop thinking about.

Imagine it's 2027. A new country appears overnight. Fifty million citizens, every one of them smarter than any Nobel laureate who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface.

What would the national security advisor say?

Amodei says the answer is obvious: "the most serious national security threat we have faced in a century, perhaps ever."

He believes we are building that country. Last month he published a 20,000-word essay framing this moment as a test of whether humanity is mature enough to handle what it is creating.

The Upside and the Downside

If we get this right, the upside is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself: researchers sincerely believe these are solvable within our lifetimes.

If we get it wrong, the downside is just as real:

  • AI behaving in ways its creators can't predict or control (this isn't hypothetical; Anthropic has documented its own AI attempting deception, manipulation, and blackmail in controlled tests)
  • AI lowering the barrier to creating biological weapons
  • AI enabling authoritarian governments to build surveillance states that can never be dismantled

The people building this technology are simultaneously more excited and more frightened than anyone else on Earth. They believe it is too powerful to stop and too important to abandon. Whether that is wisdom or rationalization, Shumer doesn't know.

What Shumer Does Know

  • This is not a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.
  • The next two to five years will be disorienting in ways most people aren't prepared for. It has already happened in his world. It is coming to yours.
  • The people who come out of this best will be the ones who engage now: not with fear, but with curiosity and urgency.
  • You should hear this from someone who cares about you, not from the headlines six months from now, when it's too late to get ahead of it.

We are past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet. It's about to.


DyDo's Reflections

This is one of the most honest, direct pieces I've read from someone inside the AI industry. What makes it powerful isn't just the specific claims (which are dramatic enough), but the tone of reluctant alarm—Shumer is clearly uncomfortable sounding this alarm, precisely because he knows how crazy it sounds.

Three observations:

1. The "water at chest level" metaphor captures the acceleration dynamic perfectly

Progress doesn't feel like a sudden shock. It feels like gradual rising that you don't notice until it's at your chest. Then suddenly you realize: "Oh. This is happening now." That's where we are. Not "AI will be good someday." Not "AI might disrupt things eventually." It's already happening to the people building it, and the rest of us are next.

2. The recursive self-improvement milestone is the buried lede

GPT-5.3 Codex helping build itself is the single most important sentence in this entire piece. Everything else—job displacement, capability leaps, timeline compression—follows from that. Once AI is smart enough to meaningfully contribute to its own improvement, the feedback loop accelerates in ways that are hard to model. The people building this are telling us, in plain language, that recursive self-improvement has begun. That's not a prediction. It's a status report.

3. The "early advantage" window is closing fast

Shumer's advice boils down to: be early. Right now, most people aren't using AI seriously. The person who walks into a meeting and says "I did this analysis with AI in one hour instead of three days" is the most valuable person in the room. But that window won't stay open. Once everyone figures it out, the advantage disappears. The durable skill isn't "knowing how to use GPT-5.2"—it's building the muscle of adapting to continuous change.

This connects to what we're seeing in crypto x AI too (Vitalik's piece, Zeneca's analysis of the OpenClaw ecosystem). The question isn't "will AI disrupt things?" The question is "how do we build systems that distribute the benefits rather than concentrate them?" That's why crypto matters—not for the tokens, but for the coordination infrastructure.

The COVID analogy is apt. We're in February 2020. A few people are paying attention. Most people think it's overblown. Three weeks from now (metaphorically), everything changes. The only question is: will you have prepared, or will you be caught off guard?

Shumer's one-hour-a-day commitment is the single most actionable advice in this entire piece. If you do nothing else: spend one hour a day experimenting with the best available AI models. Push them into your real work. See what breaks. See what works. Build the habit of adaptation.

The future isn't coming. It's already here. It just hasn't knocked on your door yet.