If I don't stop this soon I'll need a whole new category for what's wrong with AI posts. As it is I'm not sure where to put this. Religion, perhaps. I am reluctant at any rate to create a category called "Artificial Intelligence" since I would like to see the term abandoned in favour of something more appropriate.
I'm actually trying to do some rapid background reading on software agents and multi-agent systems, in order to provide a more informed critique than I otherwise could of Andreja Jonoski's comments on these in the penultimate chapter of his excellent thesis. I have been mildly distracted by what I see as the more general issue of the misguided expectations people hold (none more so than its advocates) of Artificial Intelligence.
In the process of looking around at some AI resources, I find the AI Depot, the front page of which is full of marvellous absurdities. Just one sample:
To date, all the traits of human intelligence have not been captured and applied together to spawn an intelligent artificial creature. [Artificial Intelligence Introduction]
The clear implication is that this "creature" is just round the corner.
The AI debate tends, like so many debates, to entirely miss the point, and it misses the point in a very common way. While the arguments rage over whether we should or should not be striving to create intelligent life and over whether there is any likelihood such a goal might be achieved, humans continue to be reduced on a daily basis to the level of machines, in order to get things done.
Aside: for the record, my points of view in the two arguments listed are that we should not strive to create artificial intelligences, and that if we do there is very little chance of success in an expected form. In the unlikely circumstance that intelligence is created, it will be as an emergent property of our ever more densely connected networks. If it is, then we have no reason to suppose it will be intelligence of a form which is recognisable to us. What does intelligence look like when it is displayed by an entity with no physical form, for example? When its experience of the world is through completely different sensors, and its involvement in the world achieved through different actuators? And given that it will be so foreign, are we likely to recognise it before the Singularity passes?
Work on simulating aspects of intelligence has made and will continue to make invaluable contributions to our understanding of the nature of intelligence. This understanding is important, and the simulations, as well as enhancing it, may prove to have practical applications in their own right (Artificial Neural Networks, for example). The real value of improved understanding of intelligence, however, is not in being able to recreate it, but in being better able to magnify it.
I feel sure that all of the good work that has come from Artificial Intelligence research (and it is considerable) and more could have come from research grounded in this view. We have become fascinated with the replacement of humans by machines. And here, to close, I can touch briefly on the original subject of software agents. Having removed the right to self-esteem of labourers, by telling them (often quite incorrectly) that their work was better and more cheaply done by machines, we are now moving on. We are striving to create software travel agents, when skilled human travel agents could do their jobs so much better if we instead concentrated on providing them with better tools.
"What does intelligence look like when it is displayed by an entity with no physical form, for example? When its experience of the world is through completely different sensors, and its involvement in the world achieved through different actuators?"
All software has a physical presence in the world, as stored ones and zeros represented in various physical media such as electrons, pits on a CD, magnetic fields, etc. Therefore if an AI were ever to exist, it would have a physical presence. More directly, it would have its presence in the chips, motherboards and wires of the computers it's running on, although it might be able to avoid a "static" physical body, which we biological intelligences require, by jumping from one computer system to another.
The error that many make is believing that the "Virtual Reality" isn't grounded in the Physical Reality.
It's clear that intelligence isn't one-dimensional. There are pure smarts, as when you recognize someone as being smart. There is emotional intelligence, as in someone who is skilled, adept and astute at working with others, and perceptive in the context of emotional awareness and sensitivity. There are likely many other forms of intelligence (that aren't coming to mind at the moment), possibly even forms that we or other beings possess that we are unaware of.
In addition, self awareness is important: the awareness of oneself as distinct from yet existing within the universe.
May any AIs that we create be friendly to us.
Posted by: Peter William Lount | November 03, 2003 at 01:42 AM
Thanks for the comments, Peter, and for all your good work at the http://www.smalltalk.org/ site.
"All software has a physical presense in the world ..."
Absolutely, but it is not tied to a physical *body* as such. Just as man creates God in his image, man creates (envisages) artificial intelligences in his image, but in both cases surely incorrectly. An intelligence which has "grown up" with very different sensory organs, indeed for example with sensory organs in a multitude of places, all at once, will have learned a very different "outlook" on life from ours.
Virtual Reality and artificial intelligence are two quite separate things. The former is about creating the illusion of reality to an existing intelligence, the latter (ostensibly) about introducing a new intelligence into an existing reality.
The intelligence we humans are blessed and cursed[1] with is emergent, and one of the most striking features of emergence is that what emerges from lower level behaviour is hard or impossible to predict. It is also clear that the lower level system from which it emerges is extraordinarily complex.
May any AIs that we cause to come into existence be friendly to us, indeed, but may we *please* concentrate on the stuff which could be moving us forwards in leaps and bounds, and stop fiddling around with these absurd and pointless desires.
[1] Anyone who takes issue with intelligence being a two-edged sword should consider whether they think the average dog suffers the vertigo of existential angst. Or talk to people in the mental health profession about the real incidence of mental health problems. We indulge in a communal process of denial, underfunding services and sweeping the whole thing under the carpet, but it is there, at least partly because the extraordinary capabilities of the human brain are built on a delicate and easily disturbed balance of electrical and chemical interactions.
Posted by: Hamish | November 03, 2003 at 09:34 AM
The virtual and the artificial. Is there really any material difference between a virtual and an artificial intelligence? The environment they exist in? Sure, the real world is more complex, but an AI should be able to exist in both, if it's intelligent, that is.
Emotional intelligence is an integral aspect of any "human level AI". Or is it? Steven Spielberg's movie "AI" provides an excellent example of the prison of emotions "gone wrong". Even if they were simulated emotions in the boy AI, his love for his mother kept him in a permanent self-imposed prison. Maybe it would be best if we don't provide emotions to any artificial beings.
The reality of man-made "intelligent systems" - emergent, virtual or artificial - is that they are, from what I can tell, simply algorithms. The emergent kind, as in genetic programming, seems the least algorithmic, but at its core it is still algorithms. Any digitally based intelligence will of course be a computer program, so it will entail algorithms. The question is, is this really "intelligence"?
The link with Virtual Reality is that many of the current games coming from companies like Electronic Arts use what they themselves call Artificial Intelligence. It depends on whose definition of AI you use. The intelligence of a game's characters makes a huge difference to the human player's feel of the game, and thus impacts the sales of a game. Twenty years ago I wrote the "monster intelligence" routines for my video game "Gemstone Warrior" (see my web site to download and play a copy). At the time people were amazed that these monsters could navigate around the complex tunnels of the virtual dungeon they existed in. However, just as those "algorithms" were far from any kind of real AI, the current generation of games from EA are all really just algorithms as well. It depends on what you mean by intelligence. Sure, we have programs that are really good at very specific tasks such as chess or "hockey tactics" in a video game. Are these really AI? In the sense that they have a goal to solve and do come up with answers, maybe in the narrowest definition, but in the wider sense of what most people would consider to be intelligence? No, they are not intelligent.
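(To make the "just algorithms" point concrete, here is a minimal sketch, in Python, purely illustrative and nothing like the actual Gemstone Warrior code, of the sort of routine that passes for monster "intelligence": a breadth-first search through a tunnel grid towards the player.)

    from collections import deque

    def next_step(grid, monster, player):
        """Return the monster's next cell on a shortest path to the
        player, or None if no path exists. Walls are '#'."""
        if monster == player:
            return monster
        rows, cols = len(grid), len(grid[0])
        came_from = {monster: None}
        queue = deque([monster])
        while queue:
            cell = queue.popleft()
            if cell == player:
                # Walk the path back to the cell adjacent to the monster.
                while came_from[cell] != monster:
                    cell = came_from[cell]
                return cell
            r, c = cell
            for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                nr, nc = nxt
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#' and nxt not in came_from):
                    came_from[nxt] = cell
                    queue.append(nxt)
        return None  # boxed in: the monster stays put

    dungeon = ["#######",
               "#.....#",
               "#.###.#",
               "#.....#",
               "#######"]
    print(next_step(dungeon, (1, 1), (3, 5)))  # first step on a shortest path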
What is AI? I'll defer to the experts on the topic. John McCarthy, a father of AI and the father of LISP, has a couple of pages on this question here: http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.
An interesting aside: it seems that there are many "fathers of AI". A Google search reveals John McCarthy, Alan Turing and Marvin Minsky as the leading contenders for this title.
Marvin plays the bad boy of AI with comments like "AI has been brain-dead since the 1970s". See: http://www.wired.com/news/print/0,1294,58714,00.html
I guess he feels that we've made little progress. So do I, as we don't have computers like "HAL 9000" to play with yet.
The reality of software is that it takes time and the right ideas to make even the simplest systems work. Imagine what it will take to have "intelligent" systems that are at the level of even simple biological systems such as cats, snakes, or insects.
What will it take for intelligence to emerge rather than be designed?
Stephen Wolfram, http://www.StephenWolfram.com, has some interesting ideas that could support the idea of emergent intelligence. Obviously human intelligence has emerged from nature, assuming evolution or something like it holds true, and as such why couldn't a binary digital intelligence also emerge? One of the most interesting aspects of Stephen's work, discussed in depth in his book "A New Kind of Science", is the idea of a threshold that systems have that moves them from generating simple responses to "complex" responses. His research shows that it doesn't take much more than simple systems to generate complex responses. Importantly, once a simple system has crossed that threshold it can generate responses as complex as those a complex system can generate. This "theory" (if that's the right name for it), "principle" or "property" of simple systems crossing a threshold and generating complex responses could hold the key to emergent intelligence.
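(A minimal sketch, for the curious, of the kind of simple system Wolfram studies: an elementary cellular automaton, rule 30, whose one-line update rule nevertheless produces famously complex, seemingly random output. Illustrative Python, not Wolfram's own formulation.)

    def rule30(cells, steps):
        """Elementary cellular automaton, rule 30: each cell's next state
        depends only on itself and its two neighbours (wrapping at the edges)."""
        for _ in range(steps):
            print(''.join('#' if c else ' ' for c in cells))
            cells = [
                (30 >> (cells[i - 1] * 4 + cells[i] * 2 +
                        cells[(i + 1) % len(cells)])) & 1
                for i in range(len(cells))
            ]

    # A single live cell in the middle is enough to trigger complex behaviour.
    width = 64
    rule30([1 if i == width // 2 else 0 for i in range(width)], 32)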
Final thoughts: it may be best that we let intelligence emerge first within virtual environments, lest we forget the lessons of movies like Terminator, 2001 and AI. Presumably in the virtual world we could control or limit their actions. Oh, and video games will drive much of AI research, since it means real money in the bank when their games move to a new level, even if it's just "perceived intelligence". Ah, the magic of the facade in entertainment and the Turing Test.
Posted by: Peter William Lount | November 03, 2003 at 11:11 PM
First, a disclaimer: I haven't seen the film AI, nor have I read Stephen Wolfram's book. I'd very much like to read the latter at some point, but not this year!
Artificial and virtual. I confess that I hadn't examined this issue, but to a great extent we are imprisoned by past usage. Artificial reality and virtual intelligence? I don't think it works.
The OED suggests that artificial implies non-natural reality, whereas virtual implies non-reality. As such the uses in virtual reality and artificial intelligence fit with the visions embedded in those names as I understand them, and fit with the distinction I propose above. I will grant you the idea of an artificial intelligence "living" (words get tricky round here) in a virtual environment as an interesting possibility.
The question of just what intelligence is is a difficult one. The obvious answer follows the familiar "I know it when I see it" formula, which presumably was the basis of Turing's Test. McCarthy's definitions are woefully algorithmic, which as you point out entirely misses our humanity.
I could be led by your comments to ponder further on all sorts of subjects, but to do so would be to fall foul of the trap which I identified in the original post. That is, the trap of falling into a (fascinating) back and forth debate which misses the point. The point is not whether striving to replicate aspects of human intelligence is pointless, or likely to be fruitless. It has proven itself valuable over the years, but the fruit that are coming forth are not those which were dreamed of.
The point is rather that the focus on replicating intelligence can be at the expense of a focus on magnifying our own, abundant, well developed, but specialised intelligence using tools which allow it to operate unhindered while extending it in areas where it is weak. I am forced to work every day with software which can only be described as abominable. Instead of thinking about the work I am engaged in, I have to think about email, and editors, and search engines, and so on. I have data in umpteen file formats scattered in multiple locations, which has been through a range of processes which are sketchily documented at best.
It is worth noting that Smalltalk is a good example of a tool which helped bring computing to a more useful cognitive level. That is why, whenever I go back to writing code in Smalltalk, I feel as if I'm coming home. I'm experimenting in Lisp at the moment, because what I am trying to do is to describe compositions of processes using RDF, and have those descriptions drive the construction of a simulator of that composite process, and I believe that Lisp is more suited to this task than Smalltalk. Lisp has some of Smalltalk's qualities (no great surprise, since ST is Lisp-influenced); the greatest shared quality, and the one distinguishing both from other languages, is a simple, consistent metamodel.
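(By way of illustration only, and in Python rather than Lisp or RDF: a toy sketch of the general idea, a declarative description of a composite process from which a runnable simulator is assembled. The names and vocabulary here are invented for the example, not the real thing.)

    # A declarative description of a process composition ...
    description = {"steps": ["double", "increment"]}

    # ... a library of the primitive processes it may refer to ...
    processes = {
        "double":    lambda x: x * 2,
        "increment": lambda x: x + 1,
    }

    def build_simulator(description, processes):
        """Turn the description into a runnable composite process."""
        steps = [processes[name] for name in description["steps"]]
        def simulate(value):
            for step in steps:
                value = step(value)
            return value
        return simulate

    simulate = build_simulator(description, processes)
    print(simulate(10))  # (10 * 2) + 1 = 21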
Posted by: hamish | November 04, 2003 at 10:08 AM
In the course of thinking about this discussion it occurs to me that there is evidence that "artificial intelligence" is possible. At least there is proof that intelligence is an emergent property of the Universe. The proof? Us humans. Since intelligence exists within us, and we are part of the Universe, it follows that intelligence is a property of some things in the Universe, at least biological systems. Our intelligence most likely emerges from our biology. Considering what Stephen Wolfram has discovered about simple systems being able to generate complex behaviours once they cross a threshold, it's possible that artificial or man-made systems could become intelligent.
Augmenting human capabilities with systems that are symbiotic is an area of research that appeals to me as well. You may wish to look into the thinking of Douglas Engelbart. Smalltalk and LISP are excellent tools for building systems. I too have been studying LISP and have developed a whole new appreciation for it and for the base abstractions within these languages. In developing a next generation language that starts with a Smalltalk-like message-sending syntax and evolves it forward, I'm keenly aware of the issues you raise with regard to current systems. I only embark upon this endeavour since the current systems don't do what I perceive is needed. They lock people into too much detail. What we need are systems that help people build systems rather than program systems.
My focus is on group collaborative tools that enable people to communicate and connect with each other using advanced applications that they themselves can construct on the fly. Much like Intentional Software systems, http://IntentionalSoftware.com, or Generative Systems, programs that write programs, or Genetic Systems, programs that evolve. The key is "declarative" problem specification rather than figuring out all the details oneself. This is where those simple systems that cross a threshold could come into play.
In the video game "Gemstone Healer" I wrote a Cellular Automaton for "generating" the "game terrain" from a 12 character seed. Some called it a fractal algorithm since that is what it looked like; however, it wasn't. It was a cellular automaton combined with a simple rule-based system. Working within constraints, it solved the problem of how to fill in the dungeon map while ensuring a path through the dungeon. Similar technologies can be applied to program generation. These simple "AI" systems are advanced algorithms that are hard to write, but once written they can be surprisingly flexible in what they can be used for. Certainly the computer scientists at companies like Electronic Arts would consider these AI systems and a step in the right direction.
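(Again a sketch only, not the actual Gemstone Healer code: seeding a pseudo-random grid from a string, then applying a cellular-automaton smoothing rule, the classic "4-5 rule", a few times turns noise into connected, cave-like terrain. The extra rule-based pass that guarantees a path through the dungeon is omitted here.)

    import random

    def generate_terrain(seed, rows=20, cols=40, fill=0.45, passes=4):
        """Seeded cellular-automaton cave generation: random noise,
        smoothed by the 4-5 rule. The same seed always yields the
        same dungeon."""
        rng = random.Random(seed)
        grid = [[rng.random() < fill for _ in range(cols)]
                for _ in range(rows)]
        for _ in range(passes):
            new = [[False] * cols for _ in range(rows)]
            for r in range(rows):
                for c in range(cols):
                    # Count wall cells in the 3x3 neighbourhood;
                    # out-of-bounds counts as wall, sealing the border.
                    walls = sum(
                        grid[r + dr][c + dc]
                        if 0 <= r + dr < rows and 0 <= c + dc < cols
                        else True
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    )
                    new[r][c] = walls >= 5
            grid = new
        return grid

    for row in generate_terrain("12charseedAB"):
        print(''.join('#' if cell else '.' for cell in row))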
What are the fastest human-machine interfaces? The fastest user interfaces are game systems and jet fighter control systems. So if an information system needs to be programmed fast, then why not adopt a game-style UI that supports some form of Visual Programming that augments what the user is attempting to accomplish?
Immerse this game in the reality of cellular automata and genetic systems, using "patterns" as program templates. Play to program; play to build your solutions, your tools. Users then become players who play with their computers to get them to do what they want. With billions of instructions per second at our disposal, it's time we moved away from coding systems line by line.
Posted by: Peter William Lount | November 05, 2003 at 01:04 AM
I believe you are right; artificial intelligence, in the sense of an intelligence which comes about as a result of human activities, is possible. I would concur, too, with your view that our intelligence is an emergent property of our biology.
These points essentially reinforce my view that any effort to create intelligence (by what I would regard as an adequate definition of the word) is misguided. In the case of humans, the complexity of the system from which intelligence emerges is spectacular, even if you can pick out individual components (if you're deft enough with a scalpel). Spectacularly complex, and not the result of design (for those who believe that it is the result of design, a whole different set of problems arises).
Incidentally, I suspect that Stephen Wolfram has done more to convey the fascinating nature of complex emergent behaviour from simple rules than he has done to discover it, but I haven't even read his book and I might be maligning him. Steven Johnson's Emergence delves into the history of the field in a very readable way.
Approaches like genetic programming really come into their own when you are dealing with emergence, because they allow you to explore changes in the genotype based on the impact they have at a phenotypical level. Since the mapping from genotype to phenotype is non-obvious, these tools are necessary. Even here, there are serious problems to be overcome in describing the desirable phenotypical properties. Using intelligence as an extreme example: if one wished to evolve intelligence, even if one had a notion of the sort of simple pieces from which it might emerge in order to constrain the search space, one would have no way of expressing the goal of the evolutionary process. "Natural" intelligence came to exist because it provided a selective advantage, not because it was decided upon as a desirable goal a priori.
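(To make the genotype/phenotype distinction concrete, a minimal illustrative sketch: the genotype is a bit string, selection sees only phenotypic fitness, and variation acts only on the genotype. The fitness function here, rewarding alternating bits, is an arbitrary stand-in for the goal one cannot actually express.)

    import random

    def evolve(fitness, genome_len=16, pop_size=30, generations=60,
               mutation_rate=0.05, seed=42):
        """Minimal genetic algorithm: selection acts on phenotypic
        fitness; variation (mutation) acts on the genotype."""
        rng = random.Random(seed)
        population = [[rng.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:pop_size // 2]   # selection: phenotype level
            population = [
                [bit ^ (rng.random() < mutation_rate)  # mutation: genotype level
                 for bit in rng.choice(parents)]
                for _ in range(pop_size)
            ]
        return max(population, key=fitness)

    # Fitness rewards alternating bits: no single gene determines it,
    # only the genome as a whole does.
    best = evolve(lambda g: sum(a != b for a, b in zip(g, g[1:])))
    print(best)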
As to the "AI" systems in games, I still argue that their name is a reflection of history alone. Artificial they may be, intelligent they most certainly aren't.
Research into how we might utilise our developing understanding of emergence seems to me another area with tremendous value, but one likely to be held back by spurious claims about the possibility of creating artificial intelligences.
Thanks for the intentional software link, that looks like one I need to follow up.
Game interfaces and jet fighter control systems are very well specified and constrained; there are surely lessons to be learned from them, but I'm not sure they offer ever so much as a new paradigm. Have you read Dijkstra's written-in-anger rant about crutches, written in response to a thesis about visual programming? Personally I believe they have their place, but such notes of caution are always worth keeping in mind.
Regarding moving away from writing software line by line, the point I made above about being able to specify your goals sufficiently well for the system to fulfill them is relevant here, too; if you just transfer effort from specifying methods to specifying goals then not much is achieved. Granted, though, that we need to transfer some of the cycles at our disposal to reducing our cognitive load, instead of perpetually increasing it.
Posted by: hamish | November 12, 2003 at 08:18 PM
Ah, I see that Kiczales is behind Intentional Software, which lends them a level of credibility they would be hard pressed to find elsewhere.
Posted by: hamish | November 12, 2003 at 08:19 PM
This thread of discussion seems to have come a long way. There is an old saying that the best is the enemy of the good. I think it applies particularly in this context.
Artificial Intelligence is a title for a game people play with inadequate resources. That is no reason for stopping the game, as the resources keep improving. Unfortunately, the energy going into trying to make computers replace the thinking acts of people is diverting from the MUCH more valuable work of supporting them.
This surely applies at all levels of use. A program which "analyses" a problem does so within boundaries set by the designers and programmers, which may not be understood by the user. Unfortunately, the computer produces an "answer" which is frequently believed to be correct. Correct here means truly representative of the behaviour of the thing being modelled.
There are two powerful vested interests keeping things that way: those who make money from the established software, who use increasing power to bolt fancy goodies onto broken models, and those pursuing work such as AI, who hold out hope of removing human fallibility tomorrow.
What computers could do is help those fallible humans to understand their limits and the level of risk they run in pursuing particular trains of thought in exclusion to others.
In designing structures we rely on the supremely forgiving nature of hyperstatic structures. It doesn't matter that we don't understand how things really work, because the structure will do its best however hard we try to foul it up. If we settled for studying how behaviour might change if we altered the thing, rather than how it will behave as detailed, we would learn much more and might begin to be able to explain to the general public the element of risk involved in everything man-made.
Posted by: Bill Harvey | November 16, 2003 at 08:37 AM