
It’s not just X. It’s Y.

Navigating the Age of AI Slop



I wonder who would have been tops in the 1980s?
Magic Johnson and Larry Bird? Wayne Gretzky? Mike Tyson?

Current Views


Flat

Caveat Lector

The biggest problem I have encountered with the recent trend toward decentralized content (Medium, Twitter, Substack, etc.) is that the writing has often not been edited for clarity, legibility, or accuracy. While it’s cool to begrudge the gatekeepers, there is value in having a reliable editor who filters out bloat, garbage, and egregious factual errors so that you don’t have to do it yourself. Random non-experts have been offered various platforms where they can disseminate objectively wrong information in essay format. If those essays tap into the right vibes, they will go viral. They can be dense and full of obvious factual errors; that will not matter.

This problem has become exponentially worse with AI. I boosted a tweet late last week and, upon further review, realized I could not tell whether it was AI. So I deleted my boost. I put the tweet into an AI checker and into Gemini; the checker said the tweet was 100% AI, whereas Gemini wasn’t totally sure.

And while it’s risky to go around accusing people of plagiarism (which is essentially what AI writing is), there are many Substacks that are clearly just pumping out Google Deep Research output or something similar and labelling it as original content. Before I cancelled my subscription, one such author was sending as many as 10 separate 5,000+ word essays every single day. Some of them sound good! I even linked to one back in November, before I realized it was all or mostly AI. I pasted five of his articles into GPTZero and Gemini, and both agreed all five are likely AI. Here’s Gemini’s explanation of this January 1 post, which I think is useful because it goes into some detail on the tells.

1. Stylistic Patterns (The “AI Accent”)

The writing exhibits several rhetorical patterns common in Large Language Models (LLMs) that attempt to sound authoritative and dramatic:

The “Not X, but Y” Construction:

Text: “This is not a profile of success. This is an anatomy of power…”

Text: “Risk was not a number. Risk was a system.”

Text: “BlackRock is not merely managing assets. It is acquiring the infrastructure…”

Why it’s AI: LLMs use this antithesis structure heavily to create a sense of profundity and “deep dive” analysis.

Repetitive Sentence Structure:

Text: “Fink did not retreat. He did not seek rehabilitation… He did not wait… Instead, he convinced…”

Why it’s AI: This rule-of-three repetition is a standard algorithmic way to build narrative tension.

2. Hyper-Structured Information Density

The text is organized too perfectly for a typical Substack rant or standard journalism:

Rigid Hierarchy: It moves from a dramatic hook -> to a “Genesis” section -> to a “Singularity” section with perfect transitions.

Buzzword Salad: It densely packs high-level financial concepts (“network effects,” “switching costs,” “winner-take-most,” “regulatory capture”) in a way that feels like a summary of a business textbook rather than organic writing.

3. The “Hallucinated” Specificity

AI often adds specific, plausible-sounding details to make fiction seem real.

The specific AUM figure: “$13,463,625,000,000” is suspiciously specific. Humans usually round these numbers; AI often generates exact strings to sound authoritative.

Verdict

The text is a speculative fiction piece written by an AI. It takes current real-world events (BlackRock’s 2024 acquisitions) and extrapolates them into a 2026 scenario, using a tone that mimics high-end financial investigative journalism.

My point is not to call this individual out in particular. You can never be 100% sure that something is AI, so I am not saying with certainty that this particular Substack is AI-generated. Maybe the author, despite being completely unknown three months ago, is actually the most prolific and well-rounded financial journalist in the history of financial journalism. My point is that you can’t tell for sure and there is no reliable way to check. GPTZero is a tool (see screenshot, next image), but it’s not reliable enough to be anywhere close to definitive.
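For illustration only, the “Not X, but Y” tell Gemini flags is mechanical enough that even a toy regex can count it. This is a crude sketch of my own, nothing like what GPTZero or Gemini actually do, and no substitute for either:

```python
import re

# Naive pattern for the "not X. It's Y" / "not X. Instead, Y" antithesis
# construction: a negated clause ending in punctuation, followed by a pivot word.
NOT_X_BUT_Y = re.compile(
    r"\b(?:is|was|are|were)\s+not\b[^.?!]*[.?!,;]\s*(?:It\s+is|It's|But|Instead)",
    re.IGNORECASE,
)

def count_antithesis(text: str) -> int:
    """Count crude 'not X, but Y' constructions in a passage."""
    return len(NOT_X_BUT_Y.findall(text))

sample = ("This is not a profile of success. It's an anatomy of power. "
          "Risk was not a number. Instead, risk was a system.")
print(count_antithesis(sample))  # 2
```

A high count proves nothing on its own (plenty of humans love antithesis), which is exactly why these checkers can only ever output probabilities, not verdicts.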

Compounding the problem is that all people, including good writers, write in a voice that is an amalgamation of:

  1. Their own conversational voice,
  2. Society’s accepted parameters around the style or type of writing they’re trying to produce, and
  3. The voice of everything they have ever read.

The more people read AI text, the more their honestly generated, original writing is going to sound like AI. Just like Kurt Cobain kinda sorta accidentally copied the chords from “More Than a Feeling” because he was listening to a lot of Boston albums in the Nevermind days, writers will accidentally sound more and more like AI unless we’re careful.

As a consumer of financial journalism and of writing in general, I am now at the point where I assume everything is AI and then work backwards to figure out if it’s not, based on the author and the publisher. I know if I’m reading Ben Hunt, Jared Dillian, or Noah Smith (for example), it’s not AI. If I’m reading yet another piece of financial nihilism on Substack or Twitter, it probably is. This slop problem is good news for legit publications like Bloomberg because at some point, many people like me will find the effort to filter out the AI-generated garbage too onerous and migrate back to properly gatekept content. Much like if the FDIC got rid of deposit insurance, everyone would put their money at JPM. It’s too much work for everyone to have to vet everything all the time. Gatekeepers have bias and risk, but they also have utility. They have fact checkers and professional writers. Decentralization is overrated.

This move back towards gatekeepers is evident in the rise of The FP and the surprising success of the NYT in recent years. People don’t want random, unedited rants full of factual errors. But that’s what we’re getting from Substack and Twitter. And it’s going to get worse. I am noticing AI-generated slop all over the place, even in company press releases. Check out the unending stream of gibberish press releases coming out of SMX, for example. As AI would say: This is not just an inconvenience—it’s the critical new reality.

Here’s my approach to content consumption in 2026:

  1. Assume long-form articles on Substack and Twitter are AI-generated unless there is reason to believe otherwise. When in doubt, filter it out. I don’t have time to extensively vet every single author and article. Better to over-filter quickly than to ingest a ton of stochastically parroted slop. Substack and Twitter are not inherently bad, but I need to be vigilant.
  2. Prioritize content from legitimate gatekeepers like Bloomberg and Reuters, and anything that’s worth paying for. If it’s free, it’s suspect. If I’m willing to pay $10/month for it, it probably isn’t.
  3. Ignore financial nihilism. Cynicism and nihilism were cool in high school, and they sound smart on Substack and Twitter. But they lead you nowhere. This doesn’t mean you should never be bearish. It just means that no amount of wishing we were still in the 1990s or 1950s will bring us back there. Successful traders are open-minded and forward-looking.
  4. Delete Twitter off my phone. I will use X at work, and that’s it. It’s an incredible timesuck and mental health wrecker mostly promulgating hate, falsehoods, nihilism, and negativity. Minimum viable dose only.
  5. Mute aggressively on Twitter. Mute users, mute words, mute conversations. If something bugs me on Twitter, I mute it; I don’t engage with it. Let them tell you the dollar has lost 97% of its value. Don’t waste your time correcting people who dish out obviously wrong information or who are writing fan fiction about imminent bank collapse, silver prices in Tokyo, or repo. Just chuckle and mute.
  6. Filter out all permabears, angry people, permabulls, nihilists, and captains of clickbait. Know the bias of every author you read and filter accordingly. What is useful and what gets boosted are two different things.

Finally, I will try to be the best gatekeeper I can possibly be. All my writing is edited and fact checked, but I have still made the mistake of boosting AI-generated content a few times, and I still make factual errors. I will make a strong effort not to repeat those mistakes in the future, and to advise readers as soon as I’m made aware of a mistake or AI boosting.


Calendar

Global CPI and North American jobs on the docket. My gut says mean reversion in JOLTS and Canadian jobs, but I’ll do the work on that early next week. Inflation, while sticky in most countries, is rangebound and unscary.

Here is the calendar for next week:


Final Thoughts

1. Someone posted the yearly candle for BTC and it’s kind of ugly. Yearly candles are irrelevant to my trading time horizon, but here are four yearly candlestick charts, just for fun. Extract what you will from them. I showed BTC in both linear and log because they are two completely different charts. The most notable aspect of the comparison is the collapse in BTC vol.
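On the linear-versus-log point: a log-scaled chart plots equal percentage moves as equal vertical distances, which is why the two charts of the same series look nothing alike. A quick check with illustrative numbers (not actual BTC data):

```python
import math

# On a linear axis, chart height is proportional to the dollar move;
# on a log axis, it is proportional to the percentage move.
# Example: a +100% move from 100 vs. a +100% move from 1000.
linear_height_a = 200.0 - 100.0     # 100 units tall
linear_height_b = 2000.0 - 1000.0   # 1000 units tall: dwarfs the first move

log_height_a = math.log(200.0) - math.log(100.0)    # log(2)
log_height_b = math.log(2000.0) - math.log(1000.0)  # log(2): identical height

print(linear_height_a, linear_height_b)                 # 100.0 1000.0
print(round(log_height_a, 6), round(log_height_b, 6))   # 0.693147 0.693147
```

Same series, same moves in percentage terms, completely different pictures depending on the axis.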

As it is now institutionalized, BTC wears a straitjacket. That makes it less interesting for the WSB and NGU crew. No lotto ticket potential at 35 vols.
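For context on what “35 vols” means day to day: under the standard 252-trading-day convention, annualized vol divided by √252 gives the implied one-standard-deviation daily move. Generic vol arithmetic, not a claim about actual BTC realized vol:

```python
import math

TRADING_DAYS = 252      # standard annualization convention
annualized_vol = 0.35   # "35 vol"

# Implied one-standard-deviation daily move at that vol level
daily_vol = annualized_vol / math.sqrt(TRADING_DAYS)
print(f"{daily_vol:.2%}")  # 2.20%
```

A typical day of plus or minus 2.2% is respectable for an FX pair but a far cry from the old 80-vol BTC lottery ticket.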

2. I suppose this 2025 letter is pretty cynical, but I’ll allow it. :]

https://danwang.co/2025-letter/ 

Have a Michael Jordanesque 2026!


https://www.reddit.com/r/soccer/comments/1i80qtb/americans_favorite_professional_athlete_since/


good luck ⇅ be nimble
