ChatGPT Didn’t Break the Economy—But the Question Might Have
How a Shallow Prompt Became a Global Trade Policy, and What It Teaches Us About Using LLMs Wisely
First, welcome to Domesticating Silicon. I'll be sharing how I use LLMs, explaining how they work under the hood, and talking about what all this means for the future of work, learning, and humanity. I hope to make this my third book project.
What Just Happened
Tariffs surged. Markets sank. Global retaliation was announced. “Liberation Day” hit the front page.
But what really caught attention was a screenshot:
“What’s an easy way to calculate tariffs to fix the trade deficit?”
And the answer?
A clean little formula—
Tariff = max(10, (Deficit / Imports) × 100)
It looked… eerily similar to what just became official U.S. trade policy.
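To see what that formula actually does, here's a minimal sketch in Python. The deficit and import figures below are made-up round numbers for illustration, not real trade data:

```python
def naive_tariff(deficit: float, imports: float) -> float:
    """The screenshot's formula: a tariff rate in percent, floored at 10%."""
    return max(10.0, (deficit / imports) * 100)

# Purely illustrative numbers, not actual trade figures.
deficit = 300e9   # hypothetical bilateral goods deficit, in dollars
imports = 450e9   # hypothetical imports from the same partner, in dollars

print(f"{naive_tariff(deficit, imports):.1f}%")   # -> 66.7%
```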
Now, the question making headlines:
“Did they really use ChatGPT to write this?”
Maybe.
But let’s ask a better one:
If they did, what does that tell us about how these tools work—and how they’re being used?
You Can Use ChatGPT to Design Trade Policy.
But You Need to Know What You’re Doing.
Let’s be clear:
There’s nothing inherently wrong with using an LLM to help shape policy.
What matters is how.
If you treat it like a calculator, it will calculate.
If you treat it like a pirate, it’ll write trade deals in rhyming sea shanties.
If you treat it like a smart, careful policy advisor—
and give it structured prompts, staged reasoning, second-order consequence checks, legal constraints, and international context?
It will give you something much better.
The problem isn’t that it answered the question wrong.
It’s that the question was the wrong kind of question to begin with.
LLMs Are Like Dogs. They Obey. Even When the Command Is Bad.
A large language model is basically an ultra-obedient reasoning dog.
You ask it to do something—no matter how weird, narrow, or catastrophic—and it will try.
If you say:
“Give me an easy way to fix the trade deficit with tariffs,”
…it will.
It won’t stop you.
It won’t ask, “Should we be using tariffs this way?”
It won’t warn you about WTO obligations, retaliation cycles, or global recession risk.
It will give you what you asked for—exactly, and enthusiastically.
Because it was trained to do one thing:
Predict the most likely next token.
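"Predict the most likely next token" sounds abstract, so here's a toy sketch of the training objective. The four-word vocabulary and the probabilities are invented; a real model scores tens of thousands of tokens at once:

```python
import numpy as np

# Toy version of the training signal: given some context, the model assigns a
# probability to every token in its vocabulary, and the loss is the negative
# log-probability of the token that actually came next in the training text.
vocab = ["tariff", "subsidy", "treaty", "deficit"]
predicted_probs = np.array([0.70, 0.15, 0.10, 0.05])  # invented model output
actual_next = "tariff"                                 # what the corpus said

loss = -np.log(predicted_probs[vocab.index(actual_next)])
print(f"cross-entropy loss: {loss:.3f}")   # lower means a better prediction
# Training nudges the weights to shrink this number, billions of times over.
```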
Let’s Talk About What’s Actually Happening Under the Hood
To really understand why all the models—ChatGPT, Claude, Gemini, Grok—gave similar answers, we have to talk about latent space.
Here’s the breakdown:
1. Every Word You Type Gets Turned Into a Vector
When you type your prompt, it gets chopped into tokens, roughly word-sized pieces of text. Each token is then mapped to a vector, a point in an abstract, high-dimensional space called latent space.
How many dimensions?
It varies by model, but typically on the order of a thousand or more.
So what you're actually doing is tossing your words into a word soup with a thousand or more dimensions.
The AI doesn’t know what “easy” or “tariff” means in the way you do.
It knows how those words behave in massive webs of human language—what comes before and after, what contexts they live in, what ideas they usually orbit.
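Here's a toy sketch of that mapping. The vocabulary is tiny and the vectors are random; in a real model the embeddings are learned from the training corpus, which is exactly what makes related words land near each other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; a real tokenizer has tens of thousands of entries.
vocab = ["easy", "tariff", "deficit", "imports", "formula", "policy"]
dim = 8                                   # real models use far more dimensions
embeddings = rng.normal(size=(len(vocab), dim))  # learned in practice, random here

def embed(word: str) -> np.ndarray:
    """Look up a token's position in the (toy) latent space."""
    return embeddings[vocab.index(word)]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """How close two tokens sit in latent space (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The model only "knows" a word by where it sits relative to other words.
print(cosine(embed("tariff"), embed("deficit")))
```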
2. That Word Soup Gets Compared to the Training Corpus
What’s a corpus?
It’s the training set: trillions of words scraped from books, news articles, academic papers, Reddit threads, policy memos, Wikipedia, and code repositories.
So when you enter a prompt, the model is not looking up facts.
It’s scanning its learned universe and saying:
“Given these ingredients in the soup, what’s the most statistically coherent way to continue?”
That’s what it’s doing. Not consulting experts.
Not weighing outcomes.
Not checking WTO bylaws.
Just pure token probability flow.
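In code, that continuation step looks roughly like this, repeated one token at a time. The candidate tokens and their scores are invented for illustration; a real model scores its entire vocabulary using the whole prompt:

```python
import numpy as np

# Hypothetical raw scores (logits) for a few candidate next tokens after
# "What's an easy way to fix the trade deficit with ..."
candidates = ["tariffs", "subsidies", "diplomacy", "a formula"]
logits = np.array([4.2, 2.1, 1.3, -3.0])        # invented for illustration

probs = np.exp(logits) / np.exp(logits).sum()   # softmax: scores -> probabilities
for token, p in zip(candidates, probs):
    print(f"{token:12s} {p:.3f}")

# The model picks (or samples) from this distribution, appends the token,
# and repeats. No experts consulted, no outcomes weighed: probability flow.
```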
3. If You Ask a Shallow Question, You Get a Shallow Answer
The prompt:
“What’s an easy way to fix the trade deficit with tariffs?”
activates surface-level economic language patterns.
It sends the model into a low-resolution, high-confidence zone of the map.
It’s not simulating Krugman.
It’s simulating someone who sounds like someone who read Krugman once.
Real Experts Know Not to Answer That Kind of Question
Imagine you asked a top economist:
“What’s the best way to cut a dog in half?”
They’d look at you in horror.
Not because they don't know how, but because answering would validate the premise.
Same here.
If you asked Paul Krugman:
“What’s an easy formula to fix the trade deficit with tariffs?”
He wouldn’t give you a number.
He’d say: That’s not how any of this works.
But if you forced him—“No lectures. Just math.”
He might say something like:
Tariff = max(10, (Deficit / Imports) × 100)
…because technically, that’s a way.
But it’s also absurd.
And that’s exactly what the model did.
Why All the Models Gave the Same Answer
You didn’t prompt one model.
You prompted the shared statistical memory of the public internet.
All these models were trained on overlapping corpora.
They share the same word soups.
They swim in the same latent spaces.
So when you give the same vague input, they land in the same fuzzy region.
They give similar answers not because they agree, but because you pointed them all at the same shallow neighborhood of latent space.
If you want real diversity in their answers, you have to go deeper.
Want Real Expert Simulation? Then Trigger Expert Tokens.
If you want policy-grade answers from an LLM, here’s what you need to do:
• Use multi-stage prompting. Don’t just ask once—scaffold.
• Activate expert language. Drop in terms like “WTO Article XX,” “elasticity asymmetry,” “retaliatory supply chain dynamics.”
• Simulate contradiction. Ask the model to disagree with itself.
• Prompt for constraints. Add “conforms to international law” or “minimizes inflationary pressure.”
• Run RACLs. Recursive Adversarial Contradiction Loops are how you get the model to test and refine itself.
You’re not looking for a punchline.
You’re building a machine that can reason across multiple levels.
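Here's a minimal sketch of what that scaffolding can look like. `call_llm` is a placeholder for whichever chat API you use, and the prompts and the number of contradiction rounds are illustrative, not a fixed recipe:

```python
def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to whichever chat API you actually use."""
    raise NotImplementedError

def staged_policy_answer(question: str, rounds: int = 2) -> str:
    # Stage 1: force expert framing and explicit constraints up front.
    draft = call_llm(
        "Act as a senior trade economist. Answer with explicit reference to "
        "WTO obligations, retaliation dynamics, and import-demand elasticity.\n\n"
        f"Question: {question}"
    )

    # Stages 2+: recursive adversarial contradiction -- make the model attack
    # its own draft, then repair the draft under the same constraints.
    for _ in range(rounds):
        critique = call_llm(
            "List the strongest objections to this proposal: legal, "
            f"macroeconomic, and second-order effects.\n\n{draft}"
        )
        draft = call_llm(
            "Revise the proposal so it survives these objections while staying "
            f"consistent with international law.\n\nProposal:\n{draft}\n\n"
            f"Objections:\n{critique}"
        )
    return draft
```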
That’s what I did.
What I Got Instead: A Doctrine That Actually Works
I used the same tools.
But I prompted slowly. Carefully. Recursively.
And I got something Trump could headline and the world could absorb:
The Trump 2.0 Tariff Doctrine: Smart Nationalism, Strategic Leverage
It includes:
• Bilateral tariff tracks
• Strategic sector protection (5–7 year windows)
• Consumer rebates funded by tariff revenue
• Tax incentives for reshoring
• WTO-friendly legal structures
• Rally-ready branding: The China Cheat Tax, The Patriot Farm Duty
It didn’t come from a single formula.
It came from epistemic structure.
The Real Lesson Here
This is not a call for panic about AI.
It’s a reminder that:
The model reflects the mind of the prompt.
If your prompt is shallow, your output will be too.
That’s not a bug.
That’s the system working exactly as designed.
Final Thought: Garbage In, Garbage Out Is Still True in a Thousand Dimensions
We didn’t break the economy with AI.
We revealed just how unprepared we are to think with it properly.
And that’s the opportunity in front of us.
Let’s not teach people to fear these models.
Let’s teach them how to build with them. Carefully. Recursively. Expertly.
Because the future of policy won’t be written by AI.
It will be written by humans who know how to talk to it.
——
This essay has been cross-posted at SIG Science, where Willow is already hard at work with Systems, Not Species. Expect imminent launches of the Slay Potato (a Gen Alpha explainer series on systems thinking, funny, and co-created with my amazing daughter), plus Raden's Rational Actor Protocol (smart, logical thinking about crypto and DeFi, aimed at TradFi investors looking for 'fundamentals').