
Superintelligence: Paths, Dangers, Strategies

In the third stage, the neurocomputational structure resulting from the previous step is implemented on a sufficiently powerful computer. If completely successful, the result would be a digital reproduction of the original intellect, with memory and personality intact.
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
With further advances in genetic technology, it may become possible to synthesize genomes to specification, obviating the need for large pools of embryos. DNA synthesis is already a routine and largely automated biotechnology, though it is not yet feasible to synthesize an entire human genome that could be used in a reproductive context (not least…
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
there is no reason to suppose Homo sapiens to have reached the apex of cognitive effectiveness attainable in a biological system. Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
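Good's argument hinges on a feedback loop: machine design is itself one of the intellectual activities being surpassed, so each machine can produce a somewhat better successor, and the rate of improvement grows with the improver. Below is a minimal sketch of that feedback dynamic in Python, with a made-up growth rule and parameters chosen purely for illustration; it is not Good's or Bostrom's model, only the recursion they describe.

```python
# Toy model of Good's feedback loop: the current machine's intelligence
# determines how much better the machine it designs will be. The growth
# rule and the feedback constant are illustrative assumptions only.

def next_generation(intelligence: float, feedback: float = 0.1) -> float:
    """Intelligence of the successor machine designed by the current one.

    The improvement step is proportional to the designer's own
    intelligence, which is what turns steady progress into a runaway
    once machines pass their human designers (normalized here to 1.0).
    """
    return intelligence * (1.0 + feedback * intelligence)


def explosion(start: float = 1.0, generations: int = 20) -> list[float]:
    """Trace intelligence across successive machine generations."""
    trajectory = [start]
    for _ in range(generations):
        trajectory.append(next_generation(trajectory[-1]))
    return trajectory


if __name__ == "__main__":
    for gen, level in enumerate(explosion()):
        print(f"generation {gen:2d}: intelligence ~ {level:.3g}")
```

The only point the sketch captures is qualitative: once improvement feeds back into the capacity to improve, growth stops being incremental and becomes explosive.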
At some point in the technology development process, once techniques are available for automatically emulating small quantities of brain tissue, the problem reduces to one of scaling.
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
It is now often thought that achieving a fully human-level performance on these tasks is an “AI-complete” problem, meaning that the difficulty of solving these problems is essentially equivalent to the difficulty of building generally human-level intelligent machines.61 In other words, if somebody were to succeed in creating an AI that could understand natural language as well as a human adult…
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
a series of recent surveys have polled members of several relevant expert communities on the question of when they expect “human-level machine intelligence” (HLMI) to be developed, defined as “one that can carry out most human professions at least as well as a typical human.”77 Results are shown in Table 2. The combined sample gave the following (median) estimates…
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
No brain has yet been emulated. Consider the humble model organism Caenorhabditis elegans, which is a transparent roundworm, about 1 mm in length, with 302 neurons. The complete connectivity matrix of these neurons has been known since the mid-1980s, when it was laboriously mapped out by means of slicing, electron microscopy, and hand-labeling of specimens.
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies
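For context on what has been "known since the mid-1980s": the C. elegans connectome is, at bottom, a 302-by-302 connectivity matrix recording which neurons synapse onto which. Below is a minimal sketch of that data structure and of why it does not by itself amount to an emulation, using a few invented neurons and weights rather than the real wiring data, and a crude linear rate model that is purely an assumption for illustration.

```python
import numpy as np

# Toy connectivity matrix for four made-up neurons (the real C. elegans
# connectome covers 302 neurons, plus synapse counts and gap junctions).
# W[i, j] != 0 means neuron j projects onto neuron i; all weights here
# are invented for illustration.
W = np.array([
    [0.0, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.0],
    [0.3, 0.0, 0.0, 0.9],
    [0.2, 0.0, 0.0, 0.0],
])


def step(activity: np.ndarray, weights: np.ndarray, decay: float = 0.5) -> np.ndarray:
    """One update of a crude linear rate model.

    Each neuron's next activity is a decayed copy of its current activity
    plus weighted input from the neurons that project onto it. A faithful
    emulation would also need measured synaptic strengths, membrane and
    channel dynamics, and neuromodulation, none of which the bare wiring
    diagram specifies.
    """
    return decay * activity + weights @ activity


activity = np.array([1.0, 0.0, 0.0, 0.0])  # stimulate neuron 0
for t in range(5):
    activity = step(activity, W)
    print(f"t={t + 1}: {np.round(activity, 3)}")
```

The matrix W stands for the part that has been mapped; everything inside `step` stands in for the dynamics a real emulation would still have to get right, which is one reason knowing the wiring has not yet yielded a working digital worm.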
The stages in this sequence correspond to whole brain emulations of successively more neurologically sophisticated model organisms—for example, C. elegans → honeybee → mouse → rhesus monkey → human. Because the gaps between these rungs—at least after the first step—are mostly quantitative in nature and due mainly (though not entirely) to the differences…
Nick Bostrom • Superintelligence: Paths, Dangers, Strategies