[PREV - DOCTOROWS_ACCOUNTS] [TOP]
AI_SNAKE_OIL
March 10, 2026
"AI Snake Oil" (2024) by
Arvind Narayanan & Sayash Kapoor
First paperback edition: 2025
Reading the preface to the paperback
edition I note a few red flags--
the authors are posing as Reasonable and
Open-Minded. This could be genuine,
but in recent years we've seen a lot
of jamming by people copping that pose  But maybe the Moderate
while looking a lot like stealth        Extremists deserve to be
advocates for the right.                thought of as a faction
                                        of their own.
I see they identify as programmers
and I wonder what kind of programmers
they are-- and how old they are, which
actually does matter: there have been
generational shifts among programmers
and one suspects they're definitely
post-Brogrammer era.
From the preface:
"On the other hand, we are cautiously
optimistic about generative AI
applications such as chatbots. ... "
But what follows isn't "cautiously
optimistic", it's a glowing endorsement.
"Turning to actual generative AI, we
say in the book that it will be
useful in some fashion to most
people whose work is cognitive in
nature. We are heavy users of
generative AI ourselves, especially
for computer programming, an area
where capabilities are rapidly
advancing. AI can generate snippets
of code, and even entire
applications, albeit simple ones,
based on a text description of what     This endorsement gets less impressive
the code should do. ... For many        when you're aware of how programmers
programmers, including us, it's hard    can fool themselves about the
to imagine going back to an era         effectiveness of the latest fads.
before the availability of AI
assistance for writing code."           Consider this result from
                                        Joel Becker et al:
"Before starting tasks, developers
forecast that allowing AI will
reduce completion time by
24%. After completing the study,
developers estimate that allowing
AI reduced completion time by
20%. Surprisingly, we find that
allowing AI actually increases
completion time by 19%--AI tooling
slowed developers down."
https://arxiv.org/abs/2507.09089
"On the other hand the difficulty of
avoiding pitfalls has also been
increasing over time as generative AI
has become much more capable and        So it's become "more capable"
products have proliferated. For         and yet the difficulties have
example, there can be subtle bugs in    increased.
AI-generated code. For now, it is
*caveat emptor*."                       I don't doubt that there
                                        can be "subtle bugs".
                                        It wouldn't surprise the
                                        hell out of me if there
                                        can be major bugs.
                                        This can also happen with
                                        human developers, of course,
                                        so the question is frequency
                                        and severity, and whether
                                        you have people on staff who
                                        are competent enough to fix
                                        them, and whether they'll
                                        still be able to after years
                                        of "AI" addiction.
                                        (Though really, that's just
                                        one set of questions. There
                                        are many others.)
"... it takes time and experimentation
to develop a working understanding
that is tailored to the specific ways   The subtitle for this book claims
you might want to use AI in your        to be able to *answer* that
workflows. Given that there can be a    question-- they're supposed to
significant learning curve, is          be able to tell you "how to
generative AI still worth it from a     tell the difference" between
productivity perspective?"              what AI can do and what it can't.
                                        In this preface, they're backing
                                        off and managing expectations.
Other issues that might be raised-- (they
aren't touched on in this preface, at least):
Will widespread adoption of "generative AI"
pollute the source data with "AI slop" and
reduce the quality of output?
If the massive data centers and their   (~p. 273)
current tremendous energy usage are     They claim new regs
actually necessary, should they be      aren't needed, just
allowed to exist?                       enforcement of old.
                                        Okay.
The big players are still running at a
big loss, giving away services for
free in hopes of attracting more        In other words, we're
interest. What happens when they        in stage one of the
switch modes and play for money-- will  Enshittification process.
Narayanan and Kapoor suddenly be able
to imagine "going back"?                ENSHITTIFICATION
--------
[NEXT - FAILED_FUTURES]