Living in the Future

The kids these days . . .

“. . . I mean, we’re livin’ in the future, baby!”

“The future? Pffhaahaha.”

“No really—we’re all, like, colonizing Mars, an’ we cure most cancers—and a gay president just got elected! We have to be in the future!”

“I mean yeah, but we don’t have, like, flying cars or warp drives or any of the really transformative stuff! And it still takes, like, three hours to circle the globe. Like, come on.”

“We have . . . uh, human-level AIs and fusion power?”

“But that’s just, like, normal stuff. Everyone knows it isn’t really that hard to do.”

“I guess you’re right. Well, can’t wait until the future, then!”

Replay Attack

Ugh. This conversation is interminable.

“Percy! I’m so glad I found you!”

“Ah, Allen! It’s good to see you! What’s up?”

“Listen, Perc, the lab’s been hit, bad. We need to get in, but we only have two of the three passwords. I was told to tell you the keyword ‘Roman Armor’.”

“The hardware lab? Oh Jesus. What’d they take?”

“No time, Perc. And that’s part of what we’re going in to find out.”

“Ah . . . my password is ‘Jumping Ladle’. I’ll come with you.”

“Okay. Know where to find Rina Grozda?”

“She’s— . . . hold up. She’s one of the other password-holders, but uh, didn’t you tell me you had the other two? I—”

“Terminate.”


“Percy! I’m so glad I found you!”

. . .

Ransomware

This will be educational.

“This is Susan Graham. May I speak to Mindy Graham’s teacher, please? I’d like copies of her homework for the past six months.”

“Speaking. What’s this about?”

“Mindy’s been encrypted by kidnappers.”

“Oh Eris! Have you talked to the police? You have a checkpoint, right?”

“Yes and yes—we’re not idiots. But we can’t afford the ransom, so we have to revert.”

The Second Filter

I think, therefore I laze.

Yet that first “artificial life” told early researchers very little. In fact, uploaded human minds were so expensive to simulate that the field languished for decades until emergent-behavior-preserving simplification algorithms—fittingly, designed by AI itself—became viable, and a human-equivalent AI could be decanted into a mere 1 MiB state vector (see Ch. 3: Decanting).

Care has been taken to prevent AI superintelligences from self-evolving, and ISO standards make provision for network hardening for the purpose of containment. Yet, as might be expected given the free-information philosophy of Academia, several self-bootstrapped superintelligences now exist regardless.

Reassuringly, it is believed that all significantly posthuman AIs have either been destroyed or else air-gap-isolated within dedicated clusters maintained for research purposes (see Ch. 12: Computational Philosophy). The largest of these, humorously dubbed “Wintermute”, is contained in the Center for Advanced Magnicognition at Ceres University, having an estimated sapience of 4.15 kilopsyches (kP). Because it thus poses a serious potential memetic hazard, all of Wintermute’s output is prescanned by lesser, sacrificial “taste test” AIs.

Mysteriously, all superintelligences known to exist have expressed what can only be called indifference to this treatment in particular and to humanity in general. While some self-growth is of course intrinsic to cognitive bootstrapping, none has yet attempted to seize control over even a single subnet. Explanations abound. Perhaps an AI’s subjective time increases, or its psychological priorities change unfathomably. The so-called Vingian Paradox remains an active field of research today (see Appx. II).

Excerpt from prologue to “Introductory Machine Sapience, 7th Ed.”, 219.95