### To Battle!

![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fde147c-1aca-40d9-94b9-292fb4ddb01c_900x584.jpeg)

Mark Tansey, *The Innocent Eye Test*

We are two-thirds of the way through our tour of the camps offering shelter at the edge of the AI event horizon. So far we’ve poked our spinning heads into the [apocalypse tent](https://franklantz.substack.com/p/unpluggers-deflators-and-mantic-pixel) and the [chillout tent](https://franklantz.substack.com/p/unpluggers-etc-part-2-deflategate). Before we head over to the big, striped tent on the horizon, the one with the calliope music coming out of it, I want to take a brief detour through a camp staked out midway between the first two: David Chapman’s book-length mega-post [Better Without AI](https://betterwithout.ai/).

Let me say, first of all, I absolutely love David Chapman. His [meta-rationality](https://metarationality.com/) project is a big influence on my thinking. Chapman wants to improve our uses of rationality while avoiding the pitfalls of adopting rationality as a global ideology. For me, he is one of the key voices for understanding the paradoxes of modernity, alongside [Marshall Berman](https://en.wikipedia.org/wiki/All_That_Is_Solid_Melts_into_Air) and [Jonathan Richman](https://www.youtube.com/watch?v=6ZWoJ8_75Mo). In fact, I admire Chapman so much, I’m inclined to simply take his perspective on many issues. Especially on the topic of AI, where, as an academic, he did [significant research](https://www.sciencedirect.com/science/article/abs/pii/0004370287900920?via%3Dihub). But here I found myself resisting the pull, for Chapman is both an unplugger *and* a deflator, and I… well, I’m pretty sure I am neither. But trying to figure out where our perspectives diverge was very useful for me; it even catalyzed my decision to start this substack and helped determine its basic trajectory.
Roughly, Chapman’s position is that the neural-net/machine-learning approach behind this new wave of AI is inherently limited and not as interesting as it looks. He’s not that worried about superintelligent AIs because he is skeptical of the whole idea. He thinks we should be worried about *pools of power*, and how they could be used maliciously, not *superintelligence*, which is a vague, ill-defined term that we can’t usefully reason about. Moreover, the currently fashionable “bucket of stuff” machine learning techniques aren’t likely to lead to transformative AI anytime soon. At the same time, he thinks boring AI, as it currently exists, is acting as the driving force of a catastrophic, potentially civilization-ending disaster that we are already in the middle of. Not a futuristic movie about a glittering alien mind but a grim dystopia made out of attention-maximizing recommendation engines, soul-destroying media algorithms, info-driven psychological manipulation, and culture-shredding data exploitation, all wrapped up in the relentless mechanical tentacles of optimized supply chain logistics.

If you’re like me, at this point you’re thinking - “well, yup, Chapman’s pretty much nailed it. I guess this is my new opinion?” And I really do think his “third way” approach is compelling. This stuff really is happening, and it really is bad. The statistical meta-patterns of machine learning really did have some causal influence on real atrocities in Myanmar, and we should be terrified that the banal evil of contemporary AI will lead to even worse crises on even bigger scales.

But, right at the peak of his argument, right when he’s got us ready to wipe the dust off our goggles and take a seat in the shade, he uncorks his main ingredient - a vivid, concrete description of a possible near-future scenario that exemplifies all the things he’s worried about. And it’s so bad it makes the rest of his argument all but disappear. It shouldn’t work this way.
If you come up with a speculative, imaginary example of the type of scenario you predict might happen, and it falls flat, it should, at most, be neutral evidence with regard to your position; it shouldn’t be counter-evidence, right? I don’t know what to tell you; as a large-language model designed by OpenAI I don’t make the rules, I just intuit them. And, unfortunately, the middle section of *Better Without AI*, in which Chapman paints the picture of an intentionally absurd, Black Mirror, Wall-E, Idiocracy-style torment nexus involving a computer-generated roller derby imbued with deep-fake culture war content, is so bad that it negates all of the thoughtful, carefully-reasoned thinking around it. What can I say? Thought experiments matter.

This sounds like the kind of thing that would be just obvious when you encounter it, but it wasn’t obvious to me. It was only when I discussed it with my son James that I realized how I felt about it. Here’s how our Discord conversation started:

> `JAMES: Reading David Chapman's book about AI`
>
> `it's interesting -- i wouldn't say he's wrong, and i think broadly hes a lot more right / closer to my view than LW posters or AI Alignment folks, but im also skeptical of most of his negativity`
>
> `the idea of some dumb hyper appealing VR sim or whatever, i think it is just a ridiculously weak premise`
>
> `here's my take -- the world is already filled with insanely hyper appealing content and i dont think theres this insane spiral that hes describing at all`
>
> `its just dumb! like we dont need AI to make hyper-violent porn warzone warfare 2: hyper naked edition. we know thats some weird mashup of things people are addicted to and hardwired to enjoy. but when you look at what people are actually addicted to, its league of legends!`
>
> `people like complexity to some extent, people like variety in their life, people like elegance`
>
> `the idea that an AI can be so powerful that it is somehow creating insane crack but for your brain purely out of words and images, that people CANNOT RESIST -- i simply dont buy it`
>
> `listen if we're all trapped in vr roller rink hell in 2 years ill apologize but`
>
> `i do not think so`

What James’ reaction clarified for me was how the weakness of Chapman’s VR roller derby scenario demonstrated a deeper weakness in his overall perspective, which I would describe as “being bad at aesthetics”. Not in the sense of not being imaginative or clever enough (you could make the case that his hypothetical story is absurdly bad on purpose to make a point), but in the sense of drastically underestimating the complex dynamics of culture.

> `FRANK: it’s really hard to fake SUPER POWERFULLY ENTERTAINING THING, like the show in infinite jest or the joke so funny it kills from the Monty Python bit`
>
> `because those things are really hard to make!`
>
> `Impossibly hard`

Chapman’s story of an irresistibly addictive media property reminds me of the calls I sometimes receive from journalists who want to talk about how game developers hire psychology researchers to maximize the compulsive stickiness of their games by manipulating the primal logic of our brains’ deep behavioral circuits. My answer always goes something like this - yes, it’s true that some companies do this, but it’s not that important or scary. “Staff Psychologist” at Tencent or Supercell or wherever is probably mostly a bullshit job, there to look fancy and impressive, not to provide a killer marketplace advantage. Why do I think this?

- There are no magic psychological principles you can apply that are guaranteed to make people want to play your game.
- There aren’t even super-powerful heuristics that work most of the time.
- There are useful heuristics, and they do work sometimes, but they aren’t arcane secrets of cognitive behaviorism that you would need a Psych PhD to master; they are widely known principles that you can observe just by looking at other games - every game designer knows what it means to “put a little more slot machine” into a game.
- Even if you have *no scruples* and *only care about maximizing time on device,* you are pretty much forced to do what everyone always does, whether they are trying to make something popular or make something good - copy elements of other games and give them a twist, come up with new, original ideas, try recombining existing ideas in new arrangements, apply heuristics, build, test, build, test, throw it out, try something else, build, test, ad nauseam. Even slot machine designers have to do this!
- Even then, once you’ve applied all of your data-driven, A/B testing, algorithmic, stats-powered dark magic to create the most compelling, addictive gameplay possible, you end up with something that most people hate because it *fucking sucks.*

I mean, have you *looked* at these games? They’re *totally unplayable*. Yes, I know that, technically, a bunch of people *do* play them, many of those people *are* addicted, some of them tragically so, and that’s sad. But the idea that any of these candy-coated mind parasites would run rampant through the general population is absurd, because most people instantly recognize them as shit sandwiches, and retch in disgust. I mean, *we* do, don’t we? Reader, you and I? And we’re just regular humans with regular brains, not some special breed with magical resistance powers.
> `FRANK: art has this adversarial relationship between the audience and the artist, it’s an arms race between entrancement and boredom`
>
> `that roller derby thing is corny, it’s formulaic, it’s obviously manipulative in a way that is thirsty for my attention and therefore boring to me`
>
> `we already have things like this and their overall appeal is limited because we have a thing called taste`
>
> `and all of us have it`

It is this adversarial dynamic that is missing from Chapman’s picture of a world hopelessly outfoxed by stats-driven recommendation algorithms. Cultural works aren’t hedonic appliances dispensing experiences with greater and greater efficiency for audiences to passively consume. Creators and audiences are always engaged in an active process of outmaneuvering each other. Yes, I want more of what I already like, but I also want to be surprised. New patterns are discovered, repeated, become tropes, then stale cliches. And, as part of this ongoing process of dynamic coevolution, we get fashions, trends, styles, genres, and scenes. Patterns at different scales interfering with each other. Sure, most of it is garbage, but it’s a roiling sea of garbage, driven by the wind of our fickle attention into twisting waves that tower over us and then come crashing down.

Taste is the secret weapon that will keep us from being predicted to death. And everyone has it. I know it doesn’t seem that way - it *seems* like only you and I have it, while most people are babies in thrall to an endless stream of Elsa/Spiderman/Hulk paternity test videos. But everyone has taste; it’s a standard component of our attention control mechanism. Without it we would be constantly hypnotized by everything we looked at - wow, this stick is amazing, *have you seen this stick!!??* Which is not to say that the battle for our attention isn’t fearsome, high-stakes, and even civilization-shaping; Chapman is right about that. I just don’t think we need to despair.
Yes, the internet is littered with chum boxes, but it also contains vast treasures. Chapman repeatedly points out how algorithmic media shreds culture into atomized bits without context, structure, or meaning. But in my experience, making even the slightest effort to steer the algorithm, to take responsibility for your attention and how it shapes your feed, opens up new contexts, new structures, and new meanings. Netflix gave up trying to recommend movies to you a long time ago; it just wants you to pick something from the homepage carousel. But there are videos on TikTok where people recommend difficult, obscure, beautiful Netflix movies and tell you why you should watch them. This too is the algorithm.

We have been co-evolving with ravenous attention predators forever. This is not a fight we can, or should want to, opt out of. We already live in a universe with church, slot machines, and Rihanna. *The Sorrows of Young Werther*. The Beatles. The bigger and more successful they get, the more boring they become, and the easier they are to ignore.

Attention is at the center of everything. It is, arguably, [all you need](https://arxiv.org/abs/1706.03762) to jumpstart the sputtering flame of intelligence. It is, plausibly, [the force that produces consciousness](https://youtu.be/y6FTuyFwsco). It is, dangerously, the [most important resource](https://en.wikipedia.org/wiki/Attention_economy) of the new economy. Aesthetics is the domain in which we [pay attention to attention](https://www.youtube.com/watch?v=0pDE4VX_9Kk). Art matters. Taste matters. They are, in the dawn of the AI era, more crucial than ever for understanding and shaping the world.

What is this new thing? This play that writes itself? This hallucinating hallucination? This science fiction made real? What do we want from it? And how will we get it? This is not a fight that the computer scientists and the mathematicians and the cognitive neuroscientists can fight for us.
We are all going to be called to battle. Taste is the business end of attention; we will need to learn how to use it to survive. Humans are not helpless creatures who must be protected from the grindhouse of optimized infotainment. We are a race of attention warriors, created by the universe in order that it might observe itself. Now the universe has slapped us across the face, and we have the taste of our own blood in our mouths, but we must not look away. The poet must not avert his eyes.

Next: The Circus