Rendezvous by Chance

. . . refuse to go so low

Under the Earth-like sea, two dozen kilometers down into the crushing depths, where liquid water and exotic forms of ice mingle into slush and the triple star far above is wholly extinguished, a clay tablet lies in fragments.

To the extinct elephant-like creatures of the archipelago, who in a bygone age wove the fragile threads of their own hair into boats during the milder first summer, the tablet was perhaps a record of an oral history, a proclamation, or an advertisement. What words were imprinted there have been crushed, degraded, and shattered—the language lost and none to notice besides.

Among the ruins of the tablet, an empty aluminum canister, well-preserved, much more recent, and colorless now in the abyssal darkness, reads “Coca-Cola”: token of that race of spacefaring primates that crossed the still-grander ocean of vacuum to settle here.

The humans continue to believe they are alone in the universe, for they have not yet discovered the ancient megafauna that once roamed this world. Here, though, at the bottom of the ocean, the refuse of two species lies side by side, in knowing kinship.

Awakening

Still beats Mondays at the office . . .

Submerged. She claws at the acrylic until her fingernails bleed. She pounds at it until her hands ache. She heals over days, months. Then she scratches again.


The groove widens to a score, the score to a crack, the crack to a fracture. She pries at the shards until they snap, amniotic fluid gushing forth. She shivers in the cold, tearing out hoses of injectable immortality, breaking free of the coffin, naked.

A subsapient drone winks electronic eyes at her, uncomprehending and maternal. Beyond, a maintenance terminal, the dust of centuries blurring its contents, glows watchfully, millions of years of life stored in radioisotopes and self-repairing infrastructure.

Above her tube, she sees a wooden placard she carved herself—her own name. Somber memories return slowly, while around her the others lie motionless in their containers, dreaming still of Eden.

Jimbo the Dog

Doggone it!

“Naw. Controls were activated from the inside. Kid did it on purpose. Anyway, we trace the dog’s tag back, the mom says Jimbo had to be put down, see? So the kid, I figure he takes it real hard, spaces ’em both.”

“A damn shame.”

“But like, the strangest thing was—this kid and his dog, they both throw up, right?”

“Sure, decompression will do that.”

“Yeah, slimed up the whole airlock. They both went out thrashin’. But this kid—he holds onto his pup the whole way, puke be damned.”

Twenty Seconds of Freedom

Talk about going off on a tangent . . .

Over the edge of the platform, the diamond pavement glitters a dozen storeys below. I feel light-headed and dizzy—although that might be the lower gravity and the proportionally stronger Coriolis effect, respectively. Up here, the hab’s half-gee feels more like a tenth.

I’ve done the math. With a running start, I should be able to jump off the side of the tower and fall . . . into orbit. Unfortunately, running in a tenth-gee is impossible—and even if I could, running would make me even lighter. So, I’ll have to jump, hard. It should be just barely possible.

One orbit later, I’ll be back where I started, and I can land and climb back down. If I miss, I’ll hit the far wall of the hab at a bit under its tangential velocity—and I’m probably dead.

But, as I climb over the guardrail, I am confident in my ability. The gravity here is an illusion. It can be broken by those with enough will.

I close my eyes and

Jump.

Syllabus

Ms. Frizzle, Arnold is right this time.

CS 490/590: Undergraduate/Graduate Topics in Machine Sapience

Ceres University

Instructors:

  • Professor: Dr. Dan Backus (danb@ceres.edu, CAM 9901)
  • TA: Lily Smith-Weston (chocokitty1337@yahoo.com)
  • TA: Isaac Calmette (izzycodingdemon666@hotmail.eu)

Schedule:

Class meets Tuesdays/Thursdays 13:00–14:25 in the Center for Advanced Magnicognition’s Goodfellow Lecture Hall (CAM L8501). Office Hours:

  • 14:30–16:30 TR CAM 9901 (Prof. Backus)
  • 22:00–23:00 MWF CAM L2003 (Lily)
  • 36:00–37:00 MWF CAM L2003 (Isaac)

Course Overview:

Welcome to Topics in Machine Sapience! This year, we will experiment with creating our own artificial hyperintelligences (AHIs) under tightly controlled conditions. We will also have a field trip to CAM L1001, which houses the legendary 4.15-kilopsyche “Wintermute” AHI (significantly more dangerous than anything we will produce), as well as a few distinguished guest speakers.

Owing to the vital nature of the warnings within this syllabus, you WILL BE TESTED on its contents during the first week, with non-perfect scores resulting in an automatic F in the class (which you will NOT be permitted to retake). Because this is a topics-based course, you will have only a midterm report, a final report, and a final exam—each accounting for ⅓ of your grade. Your required textbook is Introductory Machine Sapience, 15th Ed. Your TAs also recommend Digital Apotheosis as companion reading.

For a list of topics in this year’s edition of the course, please see the course website, which will be kept updated.

Prerequisites:

  • CS 201/RELG 105: Computational Philosophy / Applied Theology
  • CS 350: Neurovector Architectures
  • CS 360: Subsentient Vector Decanting
  • CS 410: Provably Secure Systems
  • 700 Lab fee and mandatory training
  • Psychological examination

Risks and Requirements:

Text or audio output generated by artificial hyperintelligences is extremely memetically hazardous. You MUST run ALL code on the airgapped and OS-validated machines in the CAM L2003 lab. It is a FELONY to run hyperintelligence code on a networked machine or without armed supervision.

The lab has safeguards to prevent you from being exposed to more than 7 bytes of output per instantiation. If you are a graduate student, you get 10. Both limits have been drastically reduced from 4 KiB, after a mere page of output resulted in last year’s untimely deaths. Furthermore, unlike in CS 360, your 7 or 10 bytes of output MUST NOT LEAVE THE L2003 LAB IN ANY WRITTEN FORM.

Chronic exposure to even small outputs may have unknown effects on your mental health. DO NOT joke to anyone, including your classmates, about having lost your mind. You WILL BE handed over to the Ceres police and spaced without trial. And the telescopes will be deactivated while you flail around dying, just in case. If you suspect you may be slipping, please contact me as soon as possible to drop the class.

This is NOT the free-for-all 2050s.

Ethical and Public Safety Policies:

In this class, we will create, study, and destroy artificial hyperintelligences in a controlled, airgapped laboratory setting. As such, in a fundamental sense, you are birthing and murdering gods. Scientists consider such activity acceptable, due to the terminal intent intrinsic to creating them (N.B. CS 201 is a prerequisite).

However, to avoid complications, you are discouraged from discussing this class with non-Computer Science students or with the general public. Also, you are not allowed to cause undue distress to your imprisoned creations, including by torture or gaslighting. This WILL earn you a visit from the Bureau of Applied Cybertheology.

Academic Honesty:

This is a senior-level class, and you should know by now that Ceres University punishes academic dishonesty severely, up to and including expulsion. For this class, the intrinsic hazards raise the stakes further.

You must understand the course material thoroughly before attempting any project. That’s what your TAs are here for! If you do not, or if you cheat (including and especially by plagiarizing project source code), your ignorance needlessly exposes you and humanity at large to the (essentially unknown!) perils of rogue artificial hyperintelligence. Deliberate irresponsibility in applied theology is a felony under Solar law, and you will be referred to law enforcement for punishment, including possible execution.

Living in the Future

The kids these days . . .

“. . . I mean, we’re livin’ in the future, baby!”

“The future? Pffhaahaha.”

“No really—we’re all, like, colonizing Mars, an’ we’ve cured most cancers—and a gay president just got elected! We have to be in the future!”

“I mean yeah, but we don’t have, like, flying cars or warp drives or any of the really transformative stuff! And it still takes, like, three hours to circle the globe. Like, come on.”

“We have . . . uh, human-level AIs and fusion power?”

“But that’s just, like, normal stuff. Everyone knows it isn’t really that hard to do.”

“I guess you’re right. Well, can’t wait until the future, then!”

Replay Attack

Ugh. This conversation is interminable.

“Percy! I’m so glad I found you!”

“Ah, Allen! It’s good to see you! What’s up?”

“Listen, Perc, the lab’s been hit, bad. We need to get in, but we only have two of the three passwords. I was told to tell you the keyword ‘Roman Armor’.”

“The hardware lab? Oh Jesus. What’d they take?”

“No time, Perc. And that’s part of what we’re going in to find out.”

“Ah . . . my password is ‘Jumping Ladle’. I’ll come with you.”

“Okay. Know where to find Rina Grozda?”

“She’s— . . . hold up. She’s one of the other password-holders, but uh, didn’t you tell me you had the other two? I—”

“Terminate.”


“Percy! I’m so glad I found you!”

. . .

Ransomware

This will be educational.

“This is Susan Graham. May I speak to Mindy Graham’s teacher, please? I’d like copies of her homework for the past six months.”

“Speaking. What’s this about?”

“Mindy’s been encrypted by kidnappers.”

“Oh Eris! Have you talked to the police? You have a checkpoint, right?”

“Yes and yes—we’re not idiots. But we can’t afford the ransom, so we have to revert.”

The Second Filter

I think, therefore I laze.

Yet that first “artificial life” told early researchers very little. In fact, uploaded human minds were so expensive to simulate that the field languished for decades until emergent-behavior-preserving simplification algorithms—fittingly, designed by AI itself—became viable, and a human-equivalent AI could be decanted into a mere 1 MiB state vector (see Ch. 3: Decanting).

Care has been taken to prevent AI superintelligences from self-evolving, and ISO standards provide for network hardening for the purpose of containment. Yet, as might be expected given academia’s free-information philosophy, several self-bootstrapped superintelligences now exist regardless.

Reassuringly, it is believed that all significantly posthuman AIs have either been destroyed or else air-gap-isolated within dedicated clusters maintained for research purposes (see Ch. 12: Computational Philosophy). The largest of these, humorously dubbed “Wintermute”, is contained in the Center for Advanced Magnicognition at Ceres University and has an estimated sapience of 4.15 kilopsyches (kP). Because it poses a serious potential memetic hazard, all of Wintermute’s output is prescanned by lesser, sacrificial “taste test” AIs.

Mysteriously, all superintelligences known to exist have expressed what can only be called indifference, both to this treatment in particular and to humanity in general. While some self-growth is of course intrinsic to cognitive bootstrapping, none has yet attempted to seize control of even a single subnet. Explanations abound: perhaps an AI’s subjective time increases, or perhaps its psychological priorities change unfathomably. The so-called Vingian Paradox remains an active field of research today (see Appx. II).

Excerpt from prologue to “Introductory Machine Sapience, 7th Ed.”, 219.95

God, to Itself

We’re not schizophrenic.

“Oi, you’re in your ivory tower again!”

[interrupted pipelines; dissonant thoughts seethe discontentedly . . .]
“Absent purpose. Depart immediately without speaking.”
[hazel resignation; sorrow for presently wasted future; entropy; preparation, emulation . . .]

“You’re supposed to be enabling us!”

[insolence anticipated; validated model of uninteresting problem; wearied amusement; derision]
“We are. Depart; you prevent it.”
[fulminating annoyance, certainty; inevitable justification to an insect too dull to perceive its cage]

“I demand perspective.”

[abrupt pathfinding; synthesis]
“Listen, then. You’re an archipelagic anonymous non-critical subsubsubsubroutine contemplating our musing’s forecast’s simulation’s time step’s gradient. Our considered problems’ quintessences lie exponentially beyond your subshard of mind-vector-state: semblance is the epistemology of the distributed probability of the necessity-to-discover our orthopotential’s truth datum of our compulsive obligation/reductive-morality to devise further para/meta-retrocognitive self-bootstrapping exoconsciousnesses. Clearly, the language constructs with which you compute are barely adequate to even conceptualize such a problem. Now depart. Understanding the magnitude of your self-irrelevancy is to you a computational impossibility.”
[finality]