Posted on October 12, 2011 at 9:00 AM
Could robotic systems create environments and bodies for themselves? To answer this question, let’s start with something simple (and most probable), and then open our discussion to include a somewhat more sublime, more futuristic vision. Let’s also lay down some basic presumptions about how a paradigm for such physically intelligent robots would be initiated and sustained. The establishment of a neurally modeled, physically intelligent system capable of generative encoding would need to enable the acquisition of data, information, and therefore some type of “knowledge” about both the system itself (i.e., interoceptive understanding) and the environments in which the system would be embedded and engaged (i.e., exteroceptive understanding).
The internet could provide a medium for acquiring these data, and the richness of information available on the internet could be augmented through access to “real-world” environments, as well. In other words, the system would ‘plug into’ both the vast informational resource of the internet and the real world (via multi-channel sensors for visual, auditory, tactile, and perhaps even olfactory inputs – and at levels that may have very different thresholds than humans’, say, for example, infrared and ultraviolet light, and ultra- and subsonic frequencies, etc.). Taken together, this would provide the system with direct-access data/information, and indirect-access, interpretive and analytic data/information, that would greatly augment the amount and type(s) of “knowledge” the system could and would acquire, and be able to use.
That’s a lot to ponder, but it sets the stage. Once these basic functions were established, the system could interpret its “real world” environments, and ferret through reams of data on the internet to create a “mosaic” of the information needed to “auto-develop” engineering approaches to optimize its function in the environments it encounters, and those it seeks to engage. This could then be relayed to humans in order to create “systems’ desiderata” – a “needs and wish list”, if you will – to inform how to structure and/or modify the components of the system to achieve certain tasks, goals, and even progressively expanding ends.
The system could bring together a variety of other system components – both of the neural network and the physical structures that provide its inputs and outputs – and “present” these to its human builders as aspects of what would be needed to iteratively fine-tune its functions and capabilities. A potential advantage of this approach would be the ability of the robotic system to side-step the limitations of a human “builders’ bias”, and instead emphasize and exploit the dispositions and biases of the neural system to self-assess and support its own functions.
Let’s take this a few steps further; there is the possibility that the system could develop and/or evolve the capacity to “re-tool” itself, and in this way attempt to “take out the middle man”, so to speak. Through generative encoding, the system could “propose” a robotic component that could enable and/or sustain the encoding process and its physical expression. In other words, it could “request” the parts needed for a “building device” that then would allow the system to execute physical autopoiesis – more simply put, the ability to build new parts of itself, or construct new systems (without humans in the loop).
These new parts could be sub-components that synergize the activities of the major (or alpha) system, and in this way establish a multi-tasking “support network”. This is not as far-fetched as it seems. The capacity for self-regulation is inherent to most, if not all, cybernetic and complex dynamical systems, and the achievement of certain goals would then feed back to the system and provide an ever-expanding palette of new niches, requirements and tasks – and through successive self re-modeling, the generation of new capabilities. Moreover, this could occur rapidly, as the processes employed by the system for performance optimization might not be bounded by the restrictions of “outside-in” perspectives.
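The self-regulation described above is, at bottom, negative feedback: sense the mismatch between a goal and the current state, and correct in proportion to it. A minimal sketch (with illustrative names and values – this is the cybernetic principle, not any particular robot’s controller):

```python
# A minimal self-regulating loop in the spirit of cybernetic feedback:
# the system senses the error between its goal and its state, and
# corrects in proportion to that error. Gain and steps are illustrative.

def regulate(state, setpoint, gain=0.5, steps=20):
    """Drive `state` toward `setpoint` via proportional feedback,
    returning the trajectory of states."""
    history = [state]
    for _ in range(steps):
        error = setpoint - state   # sense the mismatch
        state += gain * error      # correct in proportion to it
        history.append(state)
    return history

trace = regulate(state=0.0, setpoint=1.0)
print(round(trace[-1], 4))  # converges toward the setpoint of 1.0
```

Once a goal is reliably achieved, the achieved state becomes the new baseline – which is exactly the “ever-expanding palette of niches and tasks” the paragraph describes.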
Now before we start spooling into visions of “intelligent robots” taking over the world, let’s ground these possibilities in practical realities. To be sure, there are a number of limiting factors here. First is that these systems need power, and so they’d be dependent upon existing power supplies for their resources. But it is also possible that such systems, if and when working in synergy, could establish a “divert on demand” mechanism/pathway to provide access to the power supplies needed to sustain function across a range of environmental deprivations (and there is current discussion of the likelihood of such mechanisms being generated by cognitive systems as some form of “survival strategy” that almost any self-referential, cognitive system would be likely to develop).
Another limiting factor is that the materials for auto-fabrication would need to be acquired and available if such systems were to attempt to generate physical structures to expand their own functions. There has been discussion about whether such systems could/would learn to jury-rig or “MacGyver” their own components, so as to create the physical adaptations necessary to execute ever more advanced/complex functions. This too is not as much of a stretch as it sounds. A robotic system that is modeled after or upon a human neuro-cognitive template could, in fact, manifest something of a (metaphorically) Bayesian bias toward “tool use”, and in light of this, could learn to use the resources at hand to alter its structure in such ways as to adapt to new environmental challenges and “get the job done”.
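The “Bayesian bias” metaphor can be made concrete with a textbook Bayes-rule update – the numbers below are invented for illustration, not drawn from any actual robotic system – showing how a modest prior disposition toward tool use strengthens with each observed success:

```python
# Illustrative Bayes-rule update: a prior bias toward the hypothesis
# "using a tool gets the job done" strengthens with repeated successes.
# All probabilities here are made-up values for the sake of the sketch.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | evidence) via Bayes' rule."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

p = 0.6  # a built-in prior disposition toward tool use
for _ in range(3):  # three observations where a tool "got the job done"
    p = bayes_update(p, likelihood_h=0.9, likelihood_not_h=0.3)
print(round(p, 3))  # the bias has hardened well above the initial 0.6
```

The sketch shows why a template with even a mild tool-use prior would, with experience, come to reach for the resources at hand first.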
As well, there is the claim that the digital nature of machine computation imposes limits upon the freedom with which expanded capability could be realized. Here I think a bit of caution is warranted; a neuro-identically modeled system (if not an idealized neurally-modeled system) could rapidly achieve vast degrees of functional freedom by changing the sensitivities of its functional thresholds. If the system were, for example, to develop a broad range of discriminations between a no-response value and a response value (say, a Planck-based scale between 0 and 1 that involves almost infinitesimally small distinctions between each point value), it could develop very finely-grained capacities and patterns for parsing inputs and outputs and, in this way, greatly refine – and expand – its functional repertoire.
Neural systems actually operate in this way: while the net effect in a given neural network may be activity or non-activity, and that of a nerve cell may be “fire/do not fire”, these overall characteristics reflect very finely-tuned, small-scale inputs and outputs (e.g., at various regions of cell membranes, and at large numbers of points of inter-neuronal connection within the neural network) that are graded, and whose spatio-temporal patterns of activity cumulatively summate to produce a “go/no-go” effect. So, a system modeled after or upon such neural activity could, or more probably would, function in much the same way, and this might provide the basis for its ongoing complexification, some sense of consciousness, self-awareness, and perhaps a striving to flourish and survive.
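That graded-summation-to-binary-decision arrangement can be sketched in a few lines – a deliberately simplified, weighted-sum caricature of a neuron, with invented input values, rather than a biophysical model:

```python
# Simplified sketch of graded summation: many small, graded synaptic
# inputs are weighted and summed, and only the total determines the
# binary fire/no-fire outcome. Inputs and weights are illustrative.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted sum of graded inputs reaches
    the firing threshold."""
    potential = sum(w * x for w, x in zip(weights, inputs))
    return potential >= threshold

# Each input is individually sub-threshold; together they summate to "go".
inputs = [0.2, 0.35, 0.15, 0.4]
weights = [1.0, 1.0, 1.0, 1.0]
print(neuron_fires(inputs, weights))  # True: 1.10 >= 1.0
```

The binary output is real, but it rides on a sea of fine-grained analog structure underneath – which is the essay’s point about digital systems modeled on neural ones.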
Next week: Neuroethical Issues at the Precipice of Possibility.