Navigating the Age of AI Slop

Larger version of this image at bottom of page.
I wonder who would have been tops in the 1980s?
Magic Johnson and Larry Bird? Wayne Gretzky? Mike Tyson?
Flat
The biggest problem I have encountered with the recent trend towards decentralized content (Medium, Twitter, Substack, etc.) is that the writing has often not been edited for clarity, legibility, or accuracy. While it’s cool to begrudge the gatekeepers, there is value in having a reliable editor who will filter for bloat and garbage and egregious factual errors so that you don’t have to do it yourself. Random non-experts have been offered various platforms where they can disseminate objectively wrong information in essay format. If those essays tap into the right vibes, they will go viral. They can be dense and full of obvious factual errors. That will not matter.
This problem has become exponentially worse now with AI. I boosted a tweet late last week, and upon further review realized that I could not tell whether or not it was AI. So I deleted my boost. I put the tweet into an AI checker and into Gemini; the checker said the tweet was 100% AI, whereas Gemini wasn’t totally sure.
And while it’s risky to go around accusing people of plagiarism (which is essentially what AI writing is), there are many Substacks that are clearly just pumping out Google DeepResearch output or something similar and labelling it as original content. Before I cancelled my subscription, this guy was sending as many as 10 separate 5,000+ word essays every single day. Some of them sound good! I even linked to one back in November, before I realized it was all or mostly AI. I pasted five of his articles into GPTZero and Gemini, and both agreed that all five were likely AI. Here’s Gemini’s explanation of this January 1 post, which I think is useful because it goes into some detail on the tells.
—
1. Stylistic Patterns (The “AI Accent”)
The writing exhibits several rhetorical patterns common in Large Language Models (LLMs) that attempt to sound authoritative and dramatic:
The “Not X, but Y” Construction:
Text: “This is not a profile of success. This is an anatomy of power…”
Text: “Risk was not a number. Risk was a system.”
Text: “BlackRock is not merely managing assets. It is acquiring the infrastructure…”
Why it’s AI: LLMs use this antithesis structure heavily to create a sense of profundity and “deep dive” analysis.
Repetitive Sentence Structure:
Text: “Fink did not retreat. He did not seek rehabilitation… He did not wait… Instead, he convinced…”
Why it’s AI: This rule-of-three repetition is a standard algorithmic way to build narrative tension.
2. Hyper-Structured Information Density
The text is organized too perfectly for a typical Substack rant or standard journalism:
Rigid Hierarchy: It moves from a dramatic hook -> to a “Genesis” section -> to a “Singularity” section with perfect transitions.
Buzzword Salad: It densely packs high-level financial concepts (“network effects,” “switching costs,” “winner-take-most,” “regulatory capture”) in a way that feels like a summary of a business textbook rather than organic writing.
3. The “Hallucinated” Specificity
AI often adds specific, plausible-sounding details to make fiction seem real.
The specific AUM figure: “$13,463,625,000,000” is suspiciously specific. Humans usually round these numbers; AI often generates exact strings to sound authoritative.
Verdict
The text is a speculative fiction piece written by an AI. It takes current real-world events (BlackRock’s 2024 acquisitions) and extrapolates them into a 2026 scenario, using a tone that mimics high-end financial investigative journalism.
—
My point is not to call this individual out particularly. You can never be 100% sure that something is AI, so I am not saying with certainty that this particular Substack is AI-generated. Maybe the author, despite being completely unknown three months ago, really is the most prolific and well-rounded financial journalist in the history of financial journalism. My point is that you can’t tell for sure and there is no reliable way to check. GPTZero is a tool (see screenshot, next image), but it’s neither reliable enough nor anywhere close to definitive.
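For fun, some of the stylistic “tells” Gemini lists above can be counted mechanically. This is a toy heuristic I sketched to illustrate the idea, not a real detector, and the patterns and sample text are my own assumptions:

```python
import re

# Toy heuristic: count two of the "AI accent" patterns described above.
# This is illustrative only; real detectors use statistical models, not regex.

# Pattern 1: the "not X, but Y" / "This is not X. This is Y" antithesis.
ANTITHESIS = re.compile(
    r"\bis not\b[^.]*\.\s+(?:This is|It is)\b|\bnot\b[^,.]*,\s*but\b",
    re.IGNORECASE,
)

# Pattern 2: repetition of a "did not" sentence opener ("He did not... He did not...").
REPEATED_OPENER = re.compile(r"\b(did not)\b.*?\b\1\b", re.IGNORECASE | re.DOTALL)

def tell_score(text: str) -> int:
    """Return a rough count of matched 'AI accent' patterns in the text."""
    return len(ANTITHESIS.findall(text)) + len(REPEATED_OPENER.findall(text))

sample = ("This is not a profile of success. This is an anatomy of power. "
          "Fink did not retreat. He did not seek rehabilitation.")
print(tell_score(sample))  # → 2
```

Of course, a human essayist can use antithesis too, which is exactly why counting tells can only raise suspicion, never settle the question.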

Compounding the problem is that all people, including good writers, write in a voice that is an amalgamation of:
As a consumer of financial journalism and of writing in general, I am now at the point where I assume everything is AI and then work backwards to figure out if it’s not, based on the author and the publisher. I know if I’m reading Ben Hunt, Jared Dillian, or Noah Smith (for example), it’s not AI. If I’m reading yet another piece of financial nihilism on Substack or Twitter, it probably is. This slop problem is good news for legit publications like Bloomberg because at some point, many people like me will find the effort to filter out the AI-generated garbage too onerous and migrate back to properly gatekept content. Much like if the FDIC got rid of deposit insurance, everyone would put their money at JPM. It’s too much work for everyone to have to vet everything all the time. Gatekeepers have bias and risk, but they also have utility. They have fact checkers and professional writers. Decentralization is overrated.
This move back towards gatekeepers is evident in the rise of The FP and the surprising success of the NYT in recent years. People don’t want random, unedited rants full of factual errors. But that’s what we’re getting from Substack and Twitter. And it’s going to get worse. I am noticing AI-generated slop all over the place, even in company press releases. Check out the unending stream of gibberish press releases coming out of SMX, for example. As AI would say: This is not just an inconvenience—it’s the critical new reality.
Here’s my approach to content consumption in 2026:
Finally, I will try to be the best gatekeeper I can possibly be. All my writing is edited and fact-checked, but I have still made the mistake of boosting AI-generated content a few times, and I still make factual errors. I will make a strong effort not to do so in the future, and to advise readers as soon as I’m made aware of a mistake or of having boosted AI content.
Global CPI and North American jobs on the docket. My gut says mean reversion in JOLTS and Canadian jobs, but I’ll do the work on that early next week. Inflation, while sticky in most countries, is rangebound and unscary.
Here is the calendar for next week:

1. Someone posted the yearly candle for BTC and it’s kind of ugly. Yearly candles are irrelevant to my trading time horizon, but here are four yearly candlestick charts, just for fun. Extract what you will from them. I showed BTC in both linear and log scale because they are two completely different charts. The most notable aspect of the comparison is the collapse in BTC vol.

Now that it is institutionalized, BTC wears a straitjacket. That makes it less interesting for the WSB and NGU crew. No lotto ticket potential at 35 vols.
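For readers unfamiliar with the jargon, “35 vols” means roughly 35% annualized volatility. Here’s a minimal sketch of how realized vol is annualized from daily closes, with a made-up price series (I’m assuming the 365-day crypto convention rather than the 252-day equity one):

```python
import math

def realized_vol(prices: list[float], periods_per_year: int = 365) -> float:
    """Annualized realized volatility from a series of daily closes.

    Takes the sample standard deviation of daily log returns and
    scales it by sqrt(periods_per_year). Crypto trades every day,
    so 365 is used here instead of the equity convention of 252.
    """
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

# Hypothetical series: a coin that swings +2% / -2% on alternating days.
closes = [100.0]
for i in range(20):
    closes.append(closes[-1] * (1.02 if i % 2 == 0 else 0.98))
print(f"{realized_vol(closes):.0%} annualized")  # → 39% annualized
```

So even a 2% daily wiggle keeps you near 40 vols; today’s subdued BTC is moving less than that.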

2. I suppose this 2025 letter is pretty cynical, but I’ll allow it. :]
https://danwang.co/2025-letter/
Have a Michael Jordanesque 2026!

https://www.reddit.com/r/soccer/comments/1i80qtb/americans_favorite_professional_athlete_since/
I wonder who would have been tops in the 1980s?
Magic Johnson and Larry Bird? Wayne Gretzky? Mike Tyson?