Short Story: Red Queen's Race
The mind-inhibiting helmet was a ceiling-suspended, steel-and-chrome octopus. Brian Rodger paused before he pulled it on, reflecting on his luck.
He recalled how, in his twenties and early thirties, he had become obsessed with brain-enhancement drugs.
Aniracetam, Phenibut, Theanine, Bacopa, Creatine, Unifiram, Rhodiola, DMAE, Choline, Inositol, LSD microdoses, Modafinil, Tianeptine, Adderall, Ashwagandha, Selank and more: he had tried most, and researched all. Of these, many did nothing. Many were about as good as a placebo. Some helped him concentrate. Almost none made him more creative. And none at all could ward off the greater spectre, always looming in his job as a software engineer, of age-related mental decline, which had been the reason he had effectively come to own a small pharmacy in the first place.
The average age of a software engineer, like the average age of a Go or chess champion, had only declined for the past forty years. The reason for this was simple. The world changed ever more rapidly. Software changed more rapidly than the rest of the world. Brain plasticity, however, only declined as one aged. As you aged, it became more difficult to keep up with the never-ending cycle of software tools. Even in so small a field as JavaScript development, the preferred tools changed from jQuery to Angular to React to Vue to Perpetua to Jingguan to Huojian in just a few years. Older engineers railed against ageism and compared it to sexism and racism, but railed in vain: the prejudice against older engineers was too often just true. They were in fact less flexible. They simply took longer to learn newer tools. Advance to management before thirty-five or perish: this was the rule. Brian had seen the writing on the wall, and had sought help in brain-enhancing drugs, but found no reprieve.
Until he found a small company, Rava.
Their pitch was simple. Throughout history, the greatest effective gains in human intelligence came from externalizing mental operations. Writing externalized long-term memory, and thereby allowed civilization. Mathematical symbols externalized interior symbolic manipulation, and allowed all of science. Transient to-do lists and personal calendars externalized episodic and short-term memory, and allowed life to proceed at the hectic modern pace. The average modern human was more brilliant than Aristotle or Plato or any ancient individual, because the mind no longer sat entirely inside the brain. It was spread out across phones and computers and books. And by learning to spread it out further, you could make yourself smarter.
And one of the best ways to do that was to systematically cripple different parts of yourself, and train yourself to work with reduced function in different areas.
Brian pulled the helmet onto his head; it smoothly descended from the ceiling, and he felt foam-covered clamps tighten around his forehead and temples.
The helmet used a sharply localized electromagnetic field to selectively inhibit parts of his brain. It could decrease his working memory to almost nothing. It could slow his associative powers. It could make the simplest arguments seem difficult to follow. But thinking and working under these conditions was like running with weights; it trained him to handle difficult situations more easily.
The company had assured him--and he had reviewed their scientific papers--that the effects were temporary.
Brian didn’t know what Rava got from the deal. He was sure the small fee he and similar patrons paid could not cover all the equipment. They were developing, he knew, various high-tech software applications to help people think, which he used in part while working with the helmet. Perhaps selling those could pay for the squid lowered onto his head. But he doubted it.
He switched it on, and started working on his laptop.
Almost immediately, he knew what part of his brain had been numbed. Associative memory. He was working on the code of a personal project, trying to figure out what could be causing a bug, and when recalling related parts of the code his mind felt sluggish and stupid. But he knew how to handle it; he started to use one of Rava’s smart-search applications. It came with a frame-stacking device--he could create hierarchies of tasks, and as he completed them, it automatically told him what the next task was to be. This helped with problems in episodic memory, as well as executive dysfunction.
One of the best things about this entire process, he thought, was how it trained you to be aware of your mental energies. He remembered that in his twenties, he had tried to plough through code for ten hours a day, ignoring his internal state entirely. He now thought of that as if he had been an athlete trying to train while ignoring which muscles were aching and which were entirely fresh. The longer he worked with Rava’s tools, the more his brain felt like a compartmentalized thing. It was as if he stood back from the control panel of his mind, observing the different gauges he could tap on and adjust. His self had receded from the thinking matter; he was an overseer of the whole process, allocating some tasks to the wetware and some tasks to the silicon. It felt good.
He knew he had grown more productive at work since then as well. He had received many comments to that effect. This was the first brain-training regime he had tried which worked well. The science and studies supported it. Anecdotal evidence supported it. The entire thing was pleasing. Even his posture had improved, because he had gotten used to working with his head in a vice.
It was Saturday. He worked for two hours in the room, then headed home for the weekend.
As his car drove him home, he glanced out the window at one of the stacked, prefabricated projects that philanthropists had built. Behind it, he could see some even grimmer government-built buildings. But at this point the government was sufficiently ineffective that large firms built compact apartments for the poor, simply to preserve a modicum of peace in the cities they effectively owned. And to keep the bodies off the streets.
The unemployment rate currently hovered around twenty percent, the government said, but Brian knew it was only through some statistical magic that they made it appear so low. Half his college friends were now unemployed. It wasn’t merely that self-driving cars had come for the drivers. The kiosks and restocking robots, which never grew grumpy or tired or made small mistakes, came for the retail salespeople. The office clerks and secretaries were replaced by machine-learning systems, which could tirelessly handle the fifteen or so repetitive tasks of which such jobs were constituted. Humanoid robots cooked food and cleaned floors and windows; stocky, wheeled robots performed logistics work in warehouses; soft, anthropomorphic machines took care of elderly people; some machine from India could build a house in eight hours with no supervision. And so on.
Brian was proud that he had his software engineering job, and that it kept his family out of the favelas. The social fabric was stretching ever further, he knew. The poor grew more absolutely destitute. The intelligence and willpower and talent necessary to enter the professional classes was always increasing. The rich bought bigger islands for themselves. Brian had long ago resolved to end up on the upper scrap when the social fabric finally tore.
He turned back from the window, and resumed watching an educational presentation.
That night, Brian held a barbecue.
His six- and eight-year-old girls, masked in augmented-reality goggles, played with his guests’ children in the yard. They were playing DinoHunt, he could tell from their chatter.
“Run away!” one said.
“Shoot the stun gun!”
“No, shoot it in its shoulder! In its shoulder!”
And then, a little later:
“It can’t be Diplodocus, Megan,” his eight-year-old said authoritatively. “Its rear legs are shorter than its front legs.”
“Well what is it, then, Tara?”
“It’s a brachiosaur. But I don’t know what kind. It isn’t a giraffa—a Giraffatitan, it’s so small.”
Brian smiled. He only ever approved a game for his children after he had played it, and DinoHunt was a very good game. It required sustained attention over the course of several minutes, taught the basics of evolution, and touched on the scientific method. He knew that when he played it, he had both learned things and received an excellent workout. So he had approved it on his children’s systems. Although now it was difficult to play with his children; their knowledge of dinosaur cladistics far exceeded his own.
He knew other parents allowed their children to play any augmented-reality games they liked, and he felt no guilt for judging them. Games grooved habits of thought into you at a young age. And so he had resolved long ago to train his children to sustain long trains of thought in a world that increasingly demanded cognitive expertise, even as it increasingly eroded it. He couldn’t conceive of a parent who loved their children and failed to do similarly.
Later that night, sitting down on the patio, having had two beers, he told Asim, a friend from work, his opinions on the matter.
Asim looked at him, a little blankly, and a little tipsy.
“You also go to that Rava place, don’t you?”
“Yep,” Brian said. “Why? Don’t think it works?”
“No, I think it works,” Asim said. “It definitely works. But have you really thought about what they’re doing?”
“What do you mean?” said Brian.
“Well, you know how those aug-game companies make money, right?”
“Apart from the purchases of the games? I don’t let my kids use any games with in-game purchases. I read Xiao-Ping’s research on what that does to you. Kills willpower.”
“Yeah, of course,” Asim said. “But buying the game and selling shit in the game isn’t the only way these places make money.”
“Well,” Brian said, a little uncomfortably, “they track the users too, don’t they?”
“Bingo,” said Asim, and took another drink. “They track what the users like. The kinds of things they’re interested in. The kinds of things they’re curious about. Whether they are more interested in pirates or ninjas, whether they prefer dinosaur games or alien games.”
“Sure,” said Brian. “But—”
“And once they have that information,” interrupted Asim, “they keep updating it, and refining it. They know the individual preferences of everyone. Say I’m making a movie. I want to know if it will succeed. I load a quick description of the movie into an MC simulator, feed it with the preferences of a few million adults, and I can tell within a thousand people how many people will go see it, and how much it needs to cost to make a good profit.”
“Sure,” said Brian.
“But that’s just the start of it,” Asim said, and drank a little more of his beer. “We’ve been doing shit like that since the twenties. Since the teens. There are documentaries about it. But there’s other stuff. It’s the other stuff that people don’t talk about.”
“I don’t see where you’re going,” said Brian slowly. He knew that Asim had come to his company under a fairly heavy non-disclosure agreement, from his previous job. He also knew that when talking about various kinds of artificial intelligence research, Asim tended to clam up. Brian wondered why. Many programmers ignored their NDAs almost entirely, but Asim took his fairly seriously, it seemed.
“They see how children play games,” said Asim. “They can track how children figure out problems. They can track what kinds of problems are difficult, and what kinds of processes let children do better.”
“So?”
“You don’t get it. They can see exactly what children say when they are collaborating. What they look at when they solve problems. They know how children work, in detail. They know how kids work so well that, if they wanted to, they could make an artificial agent that could play their own games.”
“So? It just lets them make better video games. Who would want to make an artificial… child?”
Brian trailed off, as the implications of what he was saying hit him.
Asim smiled.
“A child can do anything, if you give it long enough to grow. If you know how a child solves problems, you’re probably in a better place than if you know how an adult does. Who learns faster? Who learns from less?”
“That’s fair,” said Brian.
“But,” Asim said, “that’s in the future. That’s what destroys us in a decade or two. The less ambitious companies, the boring companies, they still want to learn from adults. Imagine if you watched how an adult solved problems, under close observation. What if you had a machine to look at how a human’s brain worked, while the human worked? What if that human were trying to solve problems, to make explicit how they solved problems, all while under close observation?”
Asim leaned in closer.
“What if making your thinking process more explicit, and more obvious, just makes it easier to replace?”
Four weeks later, Brian’s neural-network-based newsfeed filter tagged a press release from Rava as likely to be professionally relevant, with high confidence. Brian always read everything so tagged. So while he was driven home in the car, he blocked all other media inputs and put the estimated 8.5 minutes of attention into reading the release.
Rava had announced that they were supplying “artificially intelligent programming assistants” to various software companies. The presser, meant for mass consumption, headlined the names of billion-dollar companies that everyone would know, like Ingenii, Two Sigma, Facebook, and others. But it looked like at least forty smaller companies were also using the product. Rava had been working on the assistants for a little over two years, it claimed. One of the primary sources of data for their algorithms was the kind of session that Brian bought. They had used their information on how humans managed to externalize work to make an algorithm that could carry out extremely high-level programming tasks.
Rava claimed that their new tool removed enormous quantities of drudge work from a programmer’s day. A few high-level commands, given over a period of minutes, could be interpreted by a computer and turned into what would have been a day’s worth of solid, well-tested code. The release included a video of programmers expressing enthusiasm about how easy their job had just become. “It doesn’t write the code from the feature request or bug report, but it isn’t far from it,” one of them said. Rava promised to double or triple programmer productivity, under the right circumstances. They also said that this was just the first of a series of programming assistants, each more intelligent than the last. The press release ended with a short paragraph about how programmers would be happier and more content, once they were able to free their minds for higher-level architectural concerns in software.
When he finished reading the presser, Brian stared out the window into the stream of automated traffic for a little while, before he turned his feeds back on.
Over the next eight months, about eighty percent of the companies using Rava’s software had some kind of layoff. Most of these layoffs occurred towards the end of those eight months, as previously hesitant CTOs decided to recommend embracing the technology. Some of these layoffs let go of about two-thirds of the workforce. It looked like layoffs would continue into the future, given how efficient a single person could be with one of Rava’s AI assistants. The assistants were never tired. They were never lazy. If they could not write good, well-tested code, they refused to write code at all. Numerous experts commented on how code written by one of Rava’s AIs was generally clearer than the corresponding code written by a human.
Industry-aware webcomics started to show a single architect surrounded by screens as the future of engineering departments. Articles deriding CEOs for layoffs, and stressing the importance of human judgment, were shared regularly by Brian’s nervous friends and coworkers. Chatter at lunchtime grew paranoid. Asim was too smug to bother explicitly gloating, but made significant eye contact with Brian whenever their coworkers brought up the topic at lunch. Or maybe it wasn’t that Asim was too smug. Maybe Asim never mentioned it because he had shared knowledge of some other project he should never have mentioned.
Brian’s own company started to use Rava’s software on a trial basis, and he and his coworkers braced for a round of layoffs themselves.
Brian briefly considered quitting his own sessions through Rava, but decided he could not. He knew, from the reports of others and his own experience, that these sessions would equip him spectacularly well to work with Rava’s AI assistants. And he did not want to be one of those fired for being less efficient. He needed to remain in the top 5% of the programmers at work.
He also considered getting rid of his children’s augmented-reality goggles entirely, but again could not. What other means of learning so efficiently existed?
===========================================================
Postscript to story: Going to be trying to write / post stories about once a week, give or take, for a while. All various speculative fiction types of stories. The above is a bit more ripped-from-the-headlines than I usually do. If the title makes no sense, check out this.