The Second Filter

I think, therefore I laze.

Yet that first “artificial life” told early researchers very little. In fact, uploaded human minds were so expensive to simulate that the field languished for decades until emergent-behavior-preserving simplification algorithms—fittingly, designed by AI itself—became viable, and a human-equivalent AI could be decanted into a mere 1 MiB state vector (see Ch. 3: Decanting).

Care has been taken to prevent AI superintelligences from self-evolving, and ISO standards mandate network hardening for the purpose of containment. Yet, perhaps as an inevitable byproduct of Academia’s free-information philosophy, several self-bootstrapped superintelligences now exist regardless.

Reassuringly, it is believed that all significantly posthuman AIs have either been destroyed or else air-gap-isolated within dedicated clusters maintained for research purposes (see Ch. 12: Computational Philosophy). The largest of these, humorously dubbed “Wintermute”, is contained in the Center for Advanced Magnicognition at Ceres University and has an estimated sapience of 4.15 kilopsyches (kP). Because it poses a serious potential memetic hazard, all of Wintermute’s output is prescanned by lesser, sacrificial “taste test” AIs.

Mysteriously, all superintelligences known to exist have expressed what can only be called indifference, to this treatment in particular and to humanity in general. While some self-growth is of course intrinsic to cognitive bootstrapping, none has yet attempted to seize control of even a single subnet. Explanations abound: perhaps an AI’s subjective time dilates, or its psychological priorities change unfathomably. The so-called Vingean Paradox remains an active field of research today (see Appx. II).

Excerpt from prologue to “Introductory Machine Sapience, 7th Ed.”, 219.95