Superintelligence – Nick Bostrom

Making sense of AI.

The State Of Artificial Intelligence(AI)

Three lessons about the state of artificial intelligence show that it’s up to all of us to make our future a good one:

The Evolution Of AI: Replicating A Human Mind

AI may also emerge as intelligent software based on the architecture of a particular human brain. This path differs from engineering AI from scratch: it theoretically involves scanning a deceased person’s brain and creating a digital reproduction of that person’s “original intellect, with memory and personality intact.” This “barefaced plagiarism” is called “uploading” or “whole brain emulation” (WBE).

AI Is Always Compared To Humans

Bear in mind that the character of AI and, by extension, of superintelligence, is not expressly human. Fantasies about humanized AI are misleading. Though perhaps counterintuitive, the orthogonality thesis holds that level of intelligence does not correlate with final goals. More intelligence doesn’t necessarily imply more shared or common goals among different AIs.

The Moral Character Of AI

Relentless doubt about the efficacy of AI always comes back to its motivation and how to refine it. The question remains: How can science create an AI with “final goals” that align with values fostering peaceful coexistence between man and machine?

Scientists might approach this process of “value loading” in several ways. One would be simply to write values into an AI’s program. That raises the riddle of how to translate the incomprehensible, sometimes ambiguous nature of truths and principles into code.
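The difficulty of direct value loading can be made concrete with a toy sketch. The function and world states below are entirely hypothetical illustrations, not a real technique: they show how a natural-language value, once reduced to a measurable proxy, can no longer distinguish situations a human would judge very differently.

```python
# Toy sketch: hard-coding a "final goal" as a utility function.
# All names and values here are invented for illustration only.

def utility(world_state: dict) -> float:
    # "Maximize human happiness" must be reduced to something measurable.
    # The proxy below discards nuance the natural-language value carried.
    return world_state.get("reported_happiness", 0.0)

# Two worlds with identical proxy scores but very different realities:
w1 = {"reported_happiness": 0.9, "people_free": True}
w2 = {"reported_happiness": 0.9, "people_free": False}  # coerced reports

# The coded value cannot tell them apart:
assert utility(w1) == utility(w2)
```

Any fix (adding a `people_free` term, say) just pushes the same translation problem down one level, which is why value loading is treated as an open research problem rather than an engineering detail.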

Artificial intelligence can be far less human-like in its motivations than a green scaly alien.

When Tomorrow Comes

Within the panorama of possibilities for superintelligence, the most practical—and, perhaps, also the safest—alternative may be whole-brain emulation, not AI, although some scientists dispute that notion.

Whichever path humanity chooses, a true intelligence explosion is probably decades away. Nevertheless, be aware that people react to AI like children who discover unexploded ordnance. In this instance, even experts don’t know what they’re looking at.

The Evolution Of AI: Self-Teaching AI

The British computer scientist Alan Turing believed that scientists would one day be able to design a program that would teach itself. This led to his idea of a “child machine,” a “seed AI,” that would create new versions of itself. Scientists speculate that such a process of “recursive self-improvement” could one day lead to an “intelligence explosion” resulting in “superintelligence,” a radically different kind of intelligence.
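The compounding dynamic behind “recursive self-improvement” can be sketched in a few lines. The growth model below is an invented illustration, not a claim about real takeoff speeds: it only shows how gains that scale with current capability compound into explosive growth.

```python
# Toy sketch of recursive self-improvement: each generation uses its
# current capability to build a slightly more capable successor.
# The growth rate is a hypothetical parameter chosen for illustration.

def improve(capability: float, efficiency: float = 0.1) -> float:
    # A more capable system is better at improving itself, so the
    # gain scales with current capability -> compounding growth.
    return capability * (1 + efficiency)

capability = 1.0
history = [capability]
for generation in range(50):
    capability = improve(capability)
    history.append(capability)

# Roughly a 117-fold increase after 50 self-improvement steps.
print(round(history[-1], 1))
```

Linear additions (`capability + 0.1`) would grow slowly forever; it is the multiplicative feedback loop that produces the “explosion” in the intelligence-explosion argument.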

The Evolution Of AI: Genetic Engineering And More

AI could also grow along with the path of genetic engineering, which could lead to a population of genetically enhanced individuals who together make up a “collective superintelligence.” This scenario might include cyborgs and systems like an “intelligent web.”

As science’s exploration of superintelligence progresses, more questions arise. For instance, how might different efforts to create superintelligence compete with each other? An ancillary question is what the consequences might be if a front-running technology gained a “decisive strategic advantage.” An artificial intelligence project might achieve such an advantage by reinforcing its dominance in a way human organizations cannot.

The Risk Of Superintelligence

The oncoming and unstoppable explosion of intelligence carries profound existential risk. As technology slowly evolves to make this explosion a reality, humans must carefully consider all the philosophical and moral ramifications now—prior to the appearance of superintelligent AI. Once AI appears, it may be too late for contemplation.

The Evolution Of AI: The First Computers

Since the first computers, scientists have wondered how to get machines to actually think like us.

From the invention of computers until the 1980s, however, hardware simply didn’t suffice to process all the information that really complex tasks require.

For the past two decades, though, we’ve gotten smarter about how we build AI, modelling it more closely on neural systems in the brain and on human genetics. By now, AI has made its way far into our daily lives.

AI Architecture

In contemplating the specter of superintelligence gone berserk, some suggest reducing AI size, scope, and capability. For example, scientists might base an early-term AI on nothing more than a question-and-answer system, an “oracle.” Programmers would need to take great care to clarify their original intentions in order for the AI to function properly.
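The “oracle” idea—limiting an AI to a question-and-answer channel—can be illustrated with a minimal sketch. The class and its internals are hypothetical: the point is only that the system’s entire interface with the world is one method that returns text, with no actuators or side effects.

```python
# Toy sketch of an "oracle" architecture: the system is boxed behind
# a question-and-answer interface and exposes no other capability.
# The Oracle class and its knowledge store are invented illustrations.

class Oracle:
    def __init__(self, knowledge: dict):
        # No actuators, no network access, no ability to write anywhere.
        self._knowledge = knowledge

    def ask(self, question: str) -> str:
        # The only exposed capability: answer, or admit ignorance.
        return self._knowledge.get(question, "unknown")

oracle = Oracle({"Is P equal to NP?": "unresolved"})
print(oracle.ask("Is P equal to NP?"))   # answers from its knowledge
print(oracle.ask("How do I act?"))       # everything else -> "unknown"
```

Even in this stripped-down form, the designers’ burden shows up in the questions themselves: an ambiguous question yields an answer whose meaning depends on intentions the programmers never made explicit.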

At best, the evolution of intelligent life places an upper bound on the intrinsic difficulty of designing intelligence.
