
Superintelligence

From Wikipedia, the free encyclopedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants), whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".[1] The chess program Fritz falls short of this conception of superintelligence, even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks.[2] Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence.[3][4] Several futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to hold an immediate and enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may allow them, either as a single being or as a new species, to become much more powerful than humans and to displace them.[1]

Several scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.[5]

Feasibility of artificial superintelligence

Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to artificial superintelligence (ASI). Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.[6]

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system and therefore ought to be emulatable by synthetic materials.[7] He also notes that human intelligence was able to evolve biologically, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms, in particular, should be able to produce human-level AI.[8] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved upon, and that this is particularly likely when the invention can assist in designing new technologies.[9]
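
The "evolutionary algorithms" Chalmers appeals to are the general technique of improving candidate solutions through repeated variation and selection. A minimal illustrative sketch follows; the bit-string target, fitness function, and parameters are arbitrary toy choices, not anything drawn from Chalmers's argument:

```python
import random

# Minimal evolutionary algorithm: evolve a bit string toward an arbitrary
# target, as a toy illustration of variation plus selection.
TARGET = [1] * 20                      # toy "ideal" genome
POP_SIZE, MUTATION_RATE, GENERATIONS = 50, 0.05, 100

def fitness(genome):
    # Count positions matching the target (higher is fitter).
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]          # selection: keep the fittest half
    offspring = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best)}/{len(TARGET)}")
```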

An AI system capable of self-improvement could enhance its own intelligence, thereby becoming more efficient at improving itself. This cycle of "recursive self-improvement" might cause an intelligence explosion, resulting in the creation of a superintelligence.[10]
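
The feedback loop can be made concrete with a toy model. The sketch below assumes, purely for illustration, that each improvement cycle multiplies the system's capability by a fixed factor; the growth constant and the threshold are arbitrary and carry no empirical claim:

```python
# Toy model of "recursive self-improvement" (illustrative only).
def recursive_self_improvement(capability=1.0, growth=0.5,
                               superhuman_level=100.0, max_steps=1000):
    """Count improvement cycles until capability exceeds a chosen threshold."""
    steps = 0
    while capability < superhuman_level and steps < max_steps:
        # Each cycle the system applies its current capability to improving
        # itself, so the increment scales with capability (positive feedback).
        capability += growth * capability
        steps += 1
    return capability, steps

final, n = recursive_self_improvement()
print(f"~{final:.0f}x starting capability after {n} improvement cycles")
```

Because the increment grows with capability, growth is exponential rather than linear, which is the intuition behind the "explosion" metaphor.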

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)."[11] Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind running on much faster hardware than the brain. A human-like reasoner who could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
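
Restated as worked ratios, using only the figures quoted above (and taking the speed of light as roughly 3 × 10⁸ m/s):

$$\frac{2\times10^{9}\ \text{Hz (microprocessor)}}{2\times10^{2}\ \text{Hz (neuron)}} = 10^{7},
\qquad
\frac{3\times10^{8}\ \text{m/s (optical signalling)}}{1.2\times10^{2}\ \text{m/s (axonal conduction)}} \approx 2.5\times10^{6}.$$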

Another advantage of computers is modularity: their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve human reasoning and decision-making.[12] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it more likely that an agent can be built that outperforms humans in the same fashion that humans outperform chimpanzees.[13]

The above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.[14]

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[15] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process is instead likely to continue. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1,000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly.[16] This notion of iterated embryo selection has received wide treatment from other authors.[17] A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.[18]
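
The quoted selection figures can be read as an order-statistics effect: the expected value of the best of n draws grows slowly with n. The Monte Carlo sketch below assumes, as an illustrative simplification not taken from Bostrom's text, that the selectable genetic component of IQ is normally distributed with a standard deviation of about 7.5 points; under that assumption it roughly reproduces the 1-in-2 and 1-in-1,000 gains quoted above:

```python
import random
import statistics

# Monte Carlo estimate of the expected gain from keeping the best of n embryos.
# ASSUMPTION (illustrative only): the selectable genetic component of IQ is
# normally distributed with mean 0 and standard deviation ~7.5 points.
GENETIC_SD = 7.5
BATCHES = 5_000

def expected_gain(n_embryos):
    """Average, over many simulated batches, of the best draw among n embryos."""
    return statistics.mean(
        max(random.gauss(0, GENETIC_SD) for _ in range(n_embryos))
        for _ in range(BATCHES)
    )

for n in (2, 1000):
    print(f"best of {n:>4} embryos: ~{expected_gain(n):.1f} IQ points")
# Typical output: about 4 points for n=2 and about 24 points for n=1000,
# in line with the figures quoted in the paragraph above.
```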

Alternatively, collective intelligence might be achievable by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.[19] A prediction market is sometimes considered an example of a working collective intelligence system consisting only of humans (assuming algorithms are not used to inform decisions).[20]
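
One intuition behind prediction markets and "global brain" arguments is that aggregating many independent judgments can beat any typical individual judgment. A toy sketch of this effect, with arbitrary illustrative numbers:

```python
import random
import statistics

# Toy "wisdom of crowds": many independent, noisy estimates of a true value.
TRUE_VALUE = 100.0
INDIVIDUAL_ERROR_SD = 20.0   # arbitrary illustrative noise level
CROWD_SIZE = 400

estimates = [random.gauss(TRUE_VALUE, INDIVIDUAL_ERROR_SD) for _ in range(CROWD_SIZE)]
crowd_estimate = statistics.mean(estimates)

typical_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
print(f"typical individual error: {typical_individual_error:.1f}")
print(f"crowd-average error:      {abs(crowd_estimate - TRUE_VALUE):.1f}")
# With independent errors, the aggregate's error shrinks roughly as 1/sqrt(N).
```

Real prediction markets add incentives and price discovery on top of this aggregation; the sketch only illustrates the statistical core of the idea.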

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem.[21]

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this is likely to happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[22]

In a survey of the 100 most-cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.[23]

In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.[24]

In 2023, OpenAI leaders Sam Altman, Greg Brockman, and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[25] In 2024, Sutskever left OpenAI to co-found the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles".[26]

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals, including coherent extrapolated volition (CEV), moral rightness (MR), and moral permissibility (MP):[27]

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR)... MR would also appear to have some disadvantages. It relies on the notion of "morally right," a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by "morally right." If the AI could grasp the meaning, it could search for actions that fit...[27]

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways.[27]

Potential threat to humanity

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity.[28]Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.[29]

Writing on human extinction scenarios in 2002, Bostrom identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.[30]

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[31] Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[32]

This presents the AI control problem: how to build an intelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm them. The danger of not getting the design right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down in order to accomplish its goals.[33] Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).[34]


References

  1. ^ a b Bostrom 2014, Chapter 2.
  2. ^ Bostrom 2014, p. 22.
  3. ^ Pearce, David (2012), Eden, Amnon H.; Moor, James H.; Søraker, Johnny H.; Steinhart, Eric (eds.), "The Biointelligence Explosion: How Recursively Self-Improving Organic Robots will Modify their Own Source Code and Bootstrap Our Way to Full-Spectrum Superintelligence", Singularity Hypotheses, The Frontiers Collection, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 199–238, doi:10.1007/978-3-642-32560-1_11, ISBN 978-3-642-32559-5, retrieved 2022-01-16.
  4. ^ Gouveia, Steven S., ed. (2020). "ch. 4, 'Humans and Intelligent Machines: Co-evolution, Fusion or Replacement?', David Pearce". The Age of Artificial Intelligence: An Exploration. Vernon Press. ISBN 978-1-62273-872-4.
  5. ^ Legg 2008, pp. 135–137.
  6. ^ Chalmers 2010, p. 7.
  7. ^ Chalmers 2010, pp. 7–9.
  8. ^ Chalmers 2010, pp. 10–11.
  9. ^ Chalmers 2010, pp. 11–13.
  10. ^ "Clever cogs". The Economist. ISSN 0013-0613. Retrieved 2023-08-10.
  11. ^ Bostrom 2014, p. 59.
  12. ^ Yudkowsky, Eliezer (2013). Intelligence Explosion Microeconomics (PDF) (Technical report). Machine Intelligence Research Institute. p. 35. 2013-1.
  13. ^ Bostrom 2014, pp. 56–57.
  14. ^ Bostrom 2014, pp. 52, 59–61.
  15. ^ Sagan, Carl (1977). The Dragons of Eden. Random House.
  16. ^ Bostrom 2014, pp. 37–39.
  17. ^ Anomaly, Jonathan; Jones, Garett (2020). "Cognitive Enhancement and Network Effects: How Individual Prosperity Depends on Group Traits". Philosophia. 48 (5): 1753–1768. doi:10.1007/s11406-020-00189-3. S2CID 255167542.
  18. ^ Bostrom 2014, p. 39.
  19. ^ Bostrom 2014, pp. 48–49.
  20. ^ Watkins, Jennifer H. (2007), Prediction Markets as an Aggregation Mechanism for Collective Intelligence.
  21. ^ Bostrom 2014, pp. 36–37, 42, 47.
  22. ^ Maker, Meg Houston (July 13, 2006). "AI@50: First Poll". Archived from the original on 2014-05-13.
  23. ^ Müller & Bostrom 2016, pp. 3–4, 6, 9–12.
  24. ^ "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 2023-08-09.
  25. ^ "Governance of superintelligence". openai.com. Retrieved 2023-05-30.
  26. ^ Vance, Ashlee (June 19, 2024). "Ilya Sutskever Has a New Plan for Safe Superintelligence". Bloomberg. Retrieved 2024-06-19.
  27. ^ a b c Bostrom 2014, pp. 209–221.
  28. ^ Joy, Bill (April 1, 2000). "Why the future doesn't need us". Wired. See also technological singularity. Nick Bostrom, 2002, "Ethical Issues in Advanced Artificial Intelligence".
  29. ^ Muehlhauser, Luke; Helm, Louie (2012). "Intelligence Explosion and Machine Ethics". In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin, Germany: Springer.
  30. ^ Bostrom 2002.
  31. ^ Bostrom, Nick (2003). "Ethical Issues in Advanced Artificial Intelligence". In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, pp. 12–17. Vol. 2. Windsor, Ontario, Canada: International Institute for Advanced Studies in Systems Research / Cybernetics.
  32. ^ Eliezer Yudkowsky (2008) in Artificial Intelligence as a Positive and Negative Factor in Global Risk.
  33. ^ Russell, Stuart (2016-05-17). "Should We Fear Supersmart Robots?". Scientific American. 314 (6): 58–59. Bibcode:2016SciAm.314f..58R. doi:10.1038/scientificamerican0616-58. ISSN 0036-8733. PMID 27196844.
  34. ^ Bostrom 2014, pp. 129–143.


