A Look at Both Sides of the AI Debate
Checking In On the Four Horsemen of Hollywood Media-pocalypse - Part I
(Welcome to the Entertainment Strategy Guy, a newsletter on the entertainment industry and business strategy. I write a weekly Streaming Ratings Report and a bi-weekly strategy column, along with occasional deep dives into other topics, like today’s article. Please subscribe.)
Right now, I recommend two different newsletters from two ideologically-distinct-but-similarly-named policy pundits.
And boy howdy, do they hate each other.
On Twitter, it gets personal. Each accuses the other of arguing in bad faith. While both are good enough thinkers to identify potential points of common ground, neither wants to, because each (probably accurately) thinks that if they cede one ideological inch, their whole political project might lose.1
At times, I wonder if I’m the crazy one for reading and appreciating both of them and their writings.
The same thing is happening with AI right now.
On the one hand, Scott Alexander (one of the smartest writers out there) was part of a project predicting the coming of super-intelligent AI/a possible apocalypse by, checks notes, 2027.
On the other hand, Ed Zitron completely disagrees with all the AI hype, regularly debunking outlandish media claims and predicting a mega-financial bubble popping soon.
When it comes to predicting the future of AI, I have no idea what’s going to happen, and I could easily see it going either way. Or somewhere in the middle. Or something completely different. If you take into account issues like timing and cost, I’m even more uncertain. But when it comes to the Hollywood press, that might as well make me a Luddite, since the hype is off the charts. As I wrote yesterday, I’ve had trouble getting a very popular LLM to do basic data entry tasks without making a host of mistakes, which makes me very skeptical about its immediate use cases.
You have to look at both sides of everything.
So that’s what I’m writing about today: the upside case of AI, the downside, and my moderate prediction. This is part one of my two-part update on the “Four Horsemen of the Hollywood Media-pocalypse”; I had to split that update in two since I haven’t written about AI/LLMs and Hollywood since last October, but I’ve been collecting links, thinking about it, and so on, the whole time. Plus, I feel like I read some big feature on AI and Hollywood every week; maybe this is mine.
(Of course, if you want more updates like this, please subscribe, which will help me expand my team and keep putting out more great analysis.)
The Cases For and Against AI/LLM
As I’ve written before, AI stands in contrast to some other over-hyped technologies of the past, like crypto, 3D televisions, VR, or the metaverse. In the case of crypto, the actual use cases outside of speculating on financial assets never materialized. 3D TVs and VR always had an adoption problem. And the metaverse was either defined so broadly that it encompassed everything (from video games to social media) or so narrowly—as fully immersive VR—that even state-of-the-art headsets didn’t live up to the definition.
AI, on the other hand, has had clear successes that one can see directly impacting society right now.
That said, if you read someone who claims to know, definitively and authoritatively, what’s going to happen with the future of AI—like all these people at this party repeating that it’s inevitable—I wouldn’t trust them.
As I wrote above, I follow some very, very smart people who think superintelligent AI is coming, if not already here, along with very, very smart skeptics who have legitimate questions, if not dire warnings. One day, you can read an article about how AI is outfoxing the world’s smartest mathematicians. The next day, you read an article by Apple engineers making a really good case that LLMs don’t actually know math. The day after that, you can read about how Claude can fully code a computer program after reading a 27-page paper. Meanwhile, I can’t get some LLMs to count all the entries in a list.
Who’s right? I don’t know, but today, I want to present both sides of the argument.
Before we begin, let me make something clear: today, I’m analyzing AI’s competence. When we talk about AI/LLM—and my focus today is on LLMs or Large Language Models—we’re debating two different things:
AI’s capabilities/competence.
AI’s impact on the world.
And here’s a quick quad chart illustrating that.
(One update to that chart: in that fourth category, you can put people who believe AI will be helpful, but not competent, which might include people who think AI—and by AI, I mean algorithms or machine learning tools more broadly, not LLMs—will deliver clear productivity improvements, but not that it will replace all jobs, or even most of them.)
Today’s article is just about the capabilities and competence discussion. Finally, as you read what comes next, you might think, “Wow, some of this article just reads like science fiction!” Well, that’s not my fault! That’s just the nature of this discourse.
The Case For AI/LLMs
I hesitate to say that one writer changed my mind on this issue, but when I read Scott Alexander’s article about how superhuman AI is just two years away—here’s his article on it, here’s the website itself, and here’s a podcast episode on it—I certainly updated my priors. In short, he and the team working on the AI2027 project think that within two years, AI/LLMs will be smart enough to code themselves and to build self-replicating robots that construct massive data centers, at which point AI’s improvement will become exponential. Scott is a very smart writer/thinker, and if he thinks that super-intelligent AI is on its way, well, you should probably take him seriously.
If you want the bull case, that’s it. Needless to say, Sam Altman and the other heads of AI companies feel the same (self-interested) way.
This prediction also feels like a warning to all those buzzy, great-at-getting-good-PR startups trying to break into Hollywood. Like, congrats, you got a couple million in VC cash; if Anthropic or OpenAI creates an AI that can code itself, along with their already-being-built massive data centers, your startup doesn’t stand a chance. Whatever company develops super-human artificial general intelligence will be able to replicate your years of work in an afternoon. There’s no world where you can compete with that.
Same goes for all humans, frankly. If, as Scott Alexander and Daniel Kokotajlo project, intelligent, if not super-intelligent, AGI is building humanoid robots by 2026, possibly building its own factories in the desert in special economic zones, does any of this matter? Humanity will have introduced a self-evolving, self-replicating technology with little-to-no guardrails. (Yeah, I’m frankly a bit skeptical that political actors will be able to stem this tide if it really does come true.)
That said, even Scott Alexander only thinks there’s a 20% chance of superhuman AGI within two years. Fine. Say it’s fifteen. When should I buy a bunker? Again, we’re in sci-fi territory here.
Regardless, I’m not sure how many people in Hollywood read Scott’s analysis or listened to him chat about it, but I would check it out.
LLMs Have Already Entered the Workforce…
For a more realistic, present-day look at this subject, AI/LLMs are already changing workforces and workflows. Cal Newport had a great breakdown here, citing writing, web search, and computer programming as areas already impacted by AI. Regarding Hollywood in particular, AI can easily…