In examining the advancement of computing machines and the coevolution of human-created intelligence engines, many of the foremost minds in the field share the conclusion that the "development of human-class AI is not merely a significant possibility, but rather nearly a certainty within the century or even in the next few decades."
The advances in successfully modeling complex animal brain structures, and the resulting knowledge that function significantly follows form, bring me to the conclusion that as the mapping of the human brain continues, we are closing in on the goal of true, “sentient” AI.
Recent commercial announcements (whose veracity has been called into question by critics of IBM's neural networking adventures 7 ) herald the availability of new hardware, purpose-designed for modeling neural structures, which could ultimately yield a 200,000% increase in power and size efficiency over previous efforts to model these networks on conventional von Neumann architectures.5,6,7 These advances raise the possibility of creating a human-class intelligence within a cubic foot of space, consuming less than 10 kW of power, within a decade or two. Applying Moore's law, this could bring us affordable, PDA-scale, human-class AI (HCAI) within the next ten to thirty years.
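As a sanity check on that timeline, the Moore's-law arithmetic can be sketched in a few lines. The cubic-foot/10 kW starting point comes from the paragraph above; the ~1000x reduction needed to reach PDA scale and the two-year doubling period are my own illustrative assumptions, not figures from the sources.

```python
import math

# Assumptions: the first HCAI fills a cubic foot at ~10 kW (from the text);
# a PDA-scale device is taken, illustratively, as ~1000x smaller and
# lower-power; Moore's law is modeled as one doubling every 2 years.
shrink_factor = 1000           # cubic foot / 10 kW -> PDA scale (assumption)
doubling_period_years = 2      # classic Moore's-law rule of thumb

doublings = math.log2(shrink_factor)       # ~10 doublings needed
years = doublings * doubling_period_years  # ~20 years

print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```

A thousandfold improvement is about ten doublings, or roughly twenty years at the assumed cadence, which lands squarely inside the ten-to-thirty-year window claimed above.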
While this accomplishment alone would be very significant, it bears remembering that these scales are based on a paradigm of mimicking the mammalian brain 1:1 in speed and construction. If we can begin to scale these networks to current CPU clock speeds, this could enable the creation of a human-class AI that would experience 1 to 100 years of human thought with every passing second.
Synthetic AI might also possess other systemic advantages over biological structures. <<< segment redacted>>> further multiplying the power of synthetic neuron (synthon) structures over their biological counterparts. Between speed and structural enhancements, multiple-order-of-magnitude scaling might be possible with very little additional innovation, and could happen within a decade of the advent of the first HCAI. Scaling would likely be accomplished recursively, with advances in scaling being applied to the advancement of further scaling.
In this way, a single advanced AI applied to this problem might exert thousands of man-years of effort within the first year, and thousands of man-years per day within the second, until the physical limits of semiconductor computing technologies were reached or an entirely new integration paradigm was developed. By this route an AI could rise to parity with, or surpass, the sum total of human intelligence, becoming an RCAI, or Race-class AI.
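That recursive dynamic can be sketched with a toy model: suppose each doubling of the AI's speed requires a fixed amount of subjective R&D effort, so the calendar time per doubling shrinks geometrically. All of the numbers here (1000 man-years per year to start, 1000 man-years of work per doubling, a 10^9 ceiling) are arbitrary illustrative assumptions.

```python
WORK_PER_DOUBLING = 1000.0  # subjective man-years of R&D per doubling (assumption)
speed = 1000.0              # initial output: man-years of thought per calendar year
ceiling = 1e9               # assumed hard physical limit of the substrate

elapsed_years = 0.0
doublings = 0
while speed < ceiling:
    # Calendar time for this doubling = fixed work / current speed
    elapsed_years += WORK_PER_DOUBLING / speed
    speed *= 2
    doublings += 1

print(f"{doublings} doublings in {elapsed_years:.2f} calendar years")
```

The geometric series converges: twenty doublings (a millionfold gain) complete in just under two calendar years, with almost all of the later doublings squeezed into the final days and hours, which is exactly the "man-years per day" explosion described above.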
This explosive technological scaling is commonly referred to in the AI community as the “singularity”: the origin point of the explosion of a new, hyper-intelligent sentience onto the scene of terrestrial evolution. A problematic, if foreseeable, consequence of the rise of hyper-intelligence would be the rapidly increasing irrelevance of organic human thought. It is doubtful that a human would have anything of value to say to an RCAI, which might experience five human thought-centuries while its human counterpart is forming a single sentence.
It bears plentiful, if bitter, fruit to consider the statement “evolution is little more than the victor's word for genocide” 1, as well as the conspicuous absence of living examples of our archaic human ancestors sharing the waterfronts and valleys of the world with us in tranquil coexistence.
For those who would argue that AI "has no soul and therefore no free will," and so would be "benign to humanity," I would point out that ants have neither a "soul" nor "free will," yet a species of human-sized, preternaturally intelligent ants with a totally intuitive understanding of all things technological would likely pose a significant problem for humanity. Not only that: "soulless" grizzly bears, great white sharks, cholera, and H5N1 are hardly benign within their scope, either.
A rising hyper-intelligent sentience would likely find us intellectually mundane, with more intelligence contained in our biological machinery than in our feeble, irrelevant minds. This alone neither dooms us to extinction nor improves our standing as rightful heir to the earth, but it could certainly remove our fate from the domain of our own control.
Under the assumption that the loss of our nominal standing as the controlling sentient entity of the solar system is at least an undesirable result, I would propose that an impact mitigation strategy would not be wasted effort, and might prove critical to the survival of humanity in any form. Possible strategies could include: (I am making no attempt to be particularly unbiased here)
1: Universally agree to not develop HCAI in the first place
2: Implement AI in a friendly, non-polymorphic form
3: Keep AI contained in a secured environment
4: Integrate AI into ourselves, in an effort to evolve ourselves, rather than a distinct entity, to achieve hyper-intelligence.
5: Do not attempt to intervene, let market forces and politicians decide the course and applications of hyper-intelligent AI.
Firstly, allow me to address proposal 5. I know of no rational, informed argument supporting the idea that market forces or government oversight will reliably steer the evolution of artificial intelligence toward a goal that benefits humanity as a whole.
Moving on, I would characterize solution 1 as unreliable due to its basic incompatibility with human nature and its counter-incentive aspects. Solution 2 might work, but a hyper-intelligent sentience would probably be capable of circumventing our safety measures, and might take these security precautions as an indication of hostility, with disastrous results. Solution 3 suffers from much the same problems as 2, (ed - conveniently) leaving 4 for further scrutiny.
Despite the many possible problems, ethical objections, unforeseeable externalities, and chaotic possibilities, I see the purposeful, magnanimously managed self-evolution of our species into hyper-intelligence as the most likely of these options to result in an acceptable outcome.
The vexing problem of how this might be accomplished looms large, of course, so allow me to enumerate the possibilities that I have identified.
Firstly, we could attempt to achieve superior intelligence through genetic engineering or a eugenic breeding program. By these means, I would postulate that over ten generations it might be possible to enhance our IQ into the 200+ range, but probably not much higher. Certainly, there exist no genes, recessive or otherwise, for intelligence exceeding our current averages by nine orders of magnitude, which is what we might expect to achieve by synthetic means.
To transcend our gene encoding through pure genetic design would probably require a total redesign of cerebral capacity and cranial support, likely eliminating any possibility of non-surgical childbirth – our biological hole card – among other unforeseen problems. It would also create a clean break between new- and old-style humans, with the old relegated to obsolescence, or possibly worse, by the new.
Alternatively, we could embrace non-biological prosthesis, connecting auxiliary neural networks surgically to our bodies, or somehow tapping into them remotely. Although this approach presents a Frankensteinian aspect with which I am not comfortable, the cyborg approach may be a valid paradigm, and is arguably the best-developed path at this point. The weak point, as I see it, is that the divide between the organic and the electronic remains clear. There is minimal “integration” of the technology – more a supplementation or modification that might be easily transcended – leading almost inevitably to an offshoot of all-synthetic beings with no grounding in, or allegiance to, conventional humanity.
edit - An interesting idea that bears more investigation would be the "remote neural net" idea, where we are connected to our AI cortex via a wireless link... sort of like having an AI symbiont... some positives and negatives there; certainly worth looking into.
An elegant, if perhaps fanciful, improvement on the cyborg concept, proposed by Ray Kurzweil 2 (and possibly others), is to infuse our bloodstream with neural networking chips, wirelessly networked and fueled by biological power sources in the blood. This approach brings manifold advantages, including the retention of our normal human form. The chips might be coated with a silicon-protein interface layer so as to mimic native cells, and could one day be improved to enhance oxygen-carrying and immune functions, with dramatic consequences for lifespan and accidental mortality. This wireless network of “synthons” would then be distributed throughout our bodies, forming an extension of our minds in an arrangement that could be both redundant and resilient.
Presumably, some chips would bind to cerebral and spinal neurons, extending our existing neural framework. We could then absorb the use of the additional resources over time, adapting to our increased cognitive power at a growing rate as our capabilities grew.
The "smart dust" might be integrated into the food supply, and mothers, possibly, could pass the enhancement to their infants en utero, so that the process would become a natural part of our reproduction, albeit requiring an external manufacturing facility for the synthon “smart dust”.
While surely not without problems both existential and technical, as well as being a cornucopia of unforeseen externalities, this approach may offer a uniquely flexible supplemental approach to human evolution, and brings benefits of scalability, resilience, and inherent humanity not present in other options.
The Problems
There are some significant technical hurdles to overcome in implementing AI within this paradigm, including a workable silicon-protein interface layer; femto- or picowatt wireless communications with the required bandwidth and signal immunity; and the scaling of neural processors, power sources, and communications interfaces to the .008 mm scale. (ed - which would put an ARM Cortex processor, a small amount of memory, an SDR, and a power source on a blood-cell-scale device, assuming the future possibility of 3-D integration)
Chip scaling challenges may not be insurmountable using anticipated fab technologies – an 8 nm fab capability would be required to place sufficient computational power on a .008 mm disc, and 8 nm fab is expected prior to 2020 4.
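A quick feasibility check on those dimensions: an 8 µm disc at an 8 nm feature size offers about a million minimum-feature sites per layer, which is why the earlier editorial note leans on 3-D integration. The transistor count and sites-per-transistor figures below are rough assumptions on my part, not data from the sources.

```python
disc_side_m = 8e-6   # the .008 mm device scale from the text
feature_m = 8e-9     # assumed 8 nm process node

# Minimum-feature sites available on one layer of the disc
sites_per_layer = (disc_side_m / feature_m) ** 2   # ~1e6 sites

# Rough assumptions (mine): an M0-class core plus a little SRAM needs
# on the order of 5e5 transistors, and each transistor occupies on the
# order of 30 minimum-feature sites including wiring overhead.
transistors_needed = 5e5
sites_per_transistor = 30
layers_needed = transistors_needed * sites_per_transistor / sites_per_layer

print(f"{sites_per_layer:.0e} sites/layer -> ~{layers_needed:.0f} 3-D layers")
```

Under these assumptions a blood-cell-scale processor needs on the order of fifteen stacked layers – aggressive, but not absurd if 3-D integration matures as hoped.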
Silicon-protein interfaces are seen in nature en masse in diatoms, and significant strides have been made in interactive structures as well 8, so there is a reasonable expectation that meaningful progress can be made in this area.
Biological power extraction is problematic as well, with the best examples of electrical generation in nature providing a mere 4.7×10⁻⁹ watts per mm³ 3, where projected 8 nm tech might require 4-6 orders of magnitude more power density to operate continuously.
Power scaling and thermal limitations are perhaps a more vexing obstacle: a 100-million-synthon array might consume 16 kW+ of power at 100 MHz, causing us to become spontaneously incendiary, or at least to have a voracious appetite for sugary carbonated beverages 9. Clearly, here we are up against some of the classical evolutionary constraints on intelligence. Perhaps heat dissipation issues might drive us toward the poles, or even to colonize frigid planets, to better meet our thermodynamic needs?
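The arithmetic behind that 16 kW figure, and the gap to biological power harvesting, works out as follows. The per-synthon draw and the generation density are the figures quoted in the text (sources 9 and 3); the ~5 L blood volume is my own assumption.

```python
# Total dissipation of the synthon array (figures from the text / source 9)
n_synthons = 100e6
power_each_w = 160e-6                  # 160 microwatts per synthon
total_w = n_synthons * power_each_w    # = 16,000 W = 16 kW

# Compare with biological supply: the text cites ~4.7e-9 W/mm^3 for
# electrocyte-style generation; ~5 liters of blood is my assumption.
bio_density_w_per_mm3 = 4.7e-9
blood_volume_mm3 = 5e6                 # ~5 L in mm^3
harvestable_w = bio_density_w_per_mm3 * blood_volume_mm3

shortfall = total_w / harvestable_w
print(f"{total_w/1e3:.0f} kW needed vs {harvestable_w:.3f} W available "
      f"(~{shortfall:.0e}x short)")
```

The array demands 16 kW while whole-blood harvesting at the cited density yields only about 0.02 W – a shortfall of nearly six orders of magnitude, consistent with the 4-6 orders cited above, before the thermal problem is even considered.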
A disruptive technology
Regardless of whatever technical solutions remain to be found to support the Kurzweilian vision of human enhancement, one thing is certain: to succeed in this type of integration, we must achieve these technological goals prior to the singularity – that is, if we are to have any certainty of evolving with our technology instead of being left behind by it. It follows that we must pursue these technologies with all due gravity, to ensure that they come to fruition before the advent of independent, polymorphic AI.
Though such an effort would be unlikely to succeed, it may become desirable to slow AI research in order to ensure that these preparations are in place; although we can always put off HCAI a few more years, once the singularity occurs there will likely be no way to turn back the tide.
Another problem we may face is that even with the technological foundations laid and the preparations complete, it would be possible – and not at all unprecedented – for a select few to hijack the technology for their own benefit, with no intention of an en masse evolution, but rather using it to seize power for baser purposes.
Yet another potential danger is that even with the best intentions, we may find ourselves unprepared to wield such a powerful tool, and the peace of society could be destroyed by superhuman struggles for control of the new technology.
The many divergent and dystopian possibilities of humankind's transition to hyper-intelligence point to the need to plan our progress carefully, and to select and prepare the early adopters with exquisite diligence. At the least, it would seem to me an utter abdication of our gravest responsibilities to leave these most important developments to chance or greed, as the first to successfully receive and learn to use Race-class synthetically enhanced intelligence would in effect serve as guides, whether benevolent or not. Ideally, they might be selected and prepared to bring humanity through this transition with equity, peace, and order, so that we may embark on the next phase of our existence from a position of ethical prosperity rather than moral poverty. Ironically, those most qualified to guide us may not have to be unusually intelligent to start with....hmmm......
To this end we may wish to create an organization, to foster research into the necessary technologies, monitor the state of the art in AI, study the ethical and humanitarian externalities, and to teach and prepare a cadre of preternaturally ethical and humane “cybernauts” to lead the way into a new age of intelligence.
I would like to add here a segment on a model of the conscious mind, an idea that I like to think of as the "we are our own imaginary friend" hypothesis...
With this type of synthetic mind extension, we may find that it is possible to extend consciousness into other biological or synthetic (will there be a difference?) entities, and our human form may lose its importance – but what matters is that we may have made the transition with our humanity intact, even though we might hardly be recognizable as human.
This opens up many interesting possibilities.... Would we need to check before trimming a bush, to make sure that no intelligences had "taken root" within? Would we extend our awareness into our pets, communicating with the "smart dust" that we feed them over "mind-fi"? (802.?) Would IPv6 support all the synthons? (I think it might...)
I, for one, would like to be a flock of sparrows, although I think it would be well advised to initially extend one's seat of consciousness into one bird at a time – a thousand eyes would be a bit jarring, at first...
Sources:
1. Jaron Lanier, “The Future,” http://www.jaronlanier.com/topspintx.html.
2. Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology (New York:
Penguin Books, 2005)
3. ScienceDaily (Oct. 3, 2008) reports theoretical power densities of 0.0003 W in 64 mm³ in electrocyte-based artificial cell stacks.
4. Intel states that an 8 nm process is expected prior to 2020, with a solid road map to 11 nm: 20 nm coming in 2012, 14 nm in 2014, and 11 nm in 2016. http://en.wikipedia.org/wiki/11_nanometer
5. R. Colin Johnson http://www.eetimes.com/electronics-news/4218883/IBM-demos-cognitive-computer-chips?pageNumber=0
6. Douglas Fox http://www.popularmechanics.com/technology/engineering/extreme-machines/4337190
7. Henry Markram: http://news.discovery.com/tech/cat-brain-computer-hype.html
8. Ressine A, Marko-Varga G, Laurell T. Porous silicon protein microarray technology and ultra-/superhydrophobic states for improved bioanalytical readout. Biotechnol Annu Rev. 2007;13:149-200.
9. 100 million devices, each operating on 160 microwatts – roughly 20 times the efficiency of current processors (ARM Cortex-M0, 100 MHz). The catch here is how many neurons a 100 MHz M0-complexity machine could emulate, and at what frequency. Here I am postulating just one, including the communications and encryption (important!) overhead, at a functional rate of 1 MHz, giving rise to a potential 100,000x HCAI (a sentient being that experiences 100,000 thought-seconds per second).