Intelligence? Until humans agree on a definition and a neutral measure of it, it is unfair to judge computers. Computers may well already be showing signs of intelligence far greater than ours - we just move the goalposts on what intelligence is and invent a new measure we know computers can't pass. It is a piecemeal descent into a biased, able-bodied-human-only philosophy of intelligence.
So something like "Humans thought they were the most intelligent species on the planet because they had built cities. Dolphins thought they were the most intelligent for exactly the same reason."
An interesting point of view, not one I agree with. I have a PhD in Cybernetics, my topic being Artificial Intelligence. As in brains. In computers. I'm not a neuroscientist but my colleagues were...
The brain is not a perfectly universal computer. It's a very specific computer that performs a very limited set of tasks very slowly. It can process large amounts of data, but its ability to perform decisions and calculations is rubbish. No amount of self-programming will noticeably increase the speed at which you can do calculations - nor can you alter the manner in which you perceive the world beyond the genetic wiring you're born with. When damaged, the brain can reconfigure, but only to regain a lost ability - not to perceive the world in an entirely different way.
Computers can learn, cluster similar inputs, classify, and detect novelty. They can reason using Bayesian Belief Networks (which rest on Bayes' law of probability). I am not sure what "generalised pattern recognition" is, but they perform pattern recognition far better than humans do (faster, more accurately, with lower failure rates). They do retain individual experience - if mounted in a device that allows them to experience the world (like a mobile robot). Natural language is very close now: give it 10 years and your phone will be able to understand natural language. You won't want it to, though.
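For anyone unfamiliar, the Bayes' law that belief networks are built on is a one-liner. Here's a toy sketch - the sensor scenario and all the probabilities are made-up numbers for illustration, not from any real system:

```python
# Minimal illustration of Bayes' law, the rule underpinning
# Bayesian Belief Networks. All numbers below are invented.

def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """Return P(H|E): posterior belief in H after observing evidence E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Prior belief that there's an obstacle ahead: 10%.
# A (hypothetical) sensor fires 90% of the time an obstacle IS there,
# and 20% of the time when it isn't (false positives).
posterior = bayes(0.10, 0.90, 0.20)
print(round(posterior, 3))  # one positive reading raises belief to 0.333
```

Chain a few of these conditional updates together across linked variables and you have the skeleton of a belief network.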
Abstraction (free association) is where things get sticky, because we're not sure whether humans can actually do it - we're just assuming they can. With "that cloud looks like a dog" abstraction, we struggle to understand the mechanism behind why humans come out with it. It's not quite pattern recognition. It's not quite experiential. Nor is it random. When people perform abstract creativity (computers can already do abstract problem decomposition - after all, that's what PCB routing software is), other humans aren't sure whether the process is considered and intelligent or simply random.
Improvisation is also difficult to define. We loaded simple n-tuple neural networks onto 10 mobile robots to take into schools to demo learning and robotics to kids. It was a big hit. One robot had a crappy wheel and found it difficult to turn right. It learnt that to turn right, it needed to do a 270-degree left turn. All we told it was that going near objects was bad; it learnt the rest for itself. The 12-year-old kids (and the teacher) believed that the robot had improvised past its problem. Is it enough to display a novel solution to a problem it wasn't designed for? Furthermore, they attributed human qualities to it, and we sent an email to the school a week later to assure the class that we had fixed "Squeeky" and that he could now turn right.
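The "270-degree left to turn right" trick is easy to reproduce in miniature. This is NOT the original n-tuple network code - just a hedged toy sketch where the action names, the broken-wheel effect, and the greedy selection rule are all my own illustrative assumptions:

```python
# Toy model of a robot whose right wheel slips: "turn right" does
# nothing, so the only way to change heading is repeated left turns.

# Heading change (degrees) each primitive action actually achieves.
EFFECT = {"right_90": 0,      # broken wheel: no rotation at all
          "left_90": -90}     # left quarter-turn works fine

def learn_turn(target=90, max_steps=4):
    """Greedily pick actions until net heading change equals target."""
    heading, plan = 0, []
    for _ in range(max_steps):
        if heading % 360 == target:
            break
        # Prefer whichever action produces the larger heading change.
        action = max(EFFECT, key=lambda a: abs(EFFECT[a]))
        heading += EFFECT[action]
        plan.append(action)
    return plan, heading % 360

plan, final = learn_turn()
print(plan, final)  # three left quarter-turns net out to a right turn
```

The real robot discovered this through trial and feedback rather than a lookup table, of course - the point is only that "improvisation" here falls out of optimising with whatever actions still work.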
As for complexity, that's a ticking clock. My PhD work was done on a single-core 800MHz machine, and I'd run dynamic neural networks (more costly than normal ones) up to about 20,000 neurons. My latest C version of the network runs on my i5 750 at home, using just 2 of the 4 cores, and will spit out 2 million neurons. My JavaScript neural networks running in Chrome can go up to about 80,000 before the GUI grinds to a halt, but that's a tech demo only.
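To see why neuron count is the thing hardware keeps unlocking: a single synchronous update of a fully connected network costs on the order of n² multiply-adds, so doubling the neurons quadruples the work. A minimal sketch (the network shape, sizes, and ReLU choice are illustrative assumptions, not the networks described above):

```python
# One update step of a fully connected network: every neuron sums
# all weighted inputs, so each step is n*n multiply-adds.

import random

def step(weights, activations):
    """Synchronous update with a ReLU-style threshold at zero."""
    n = len(activations)
    return [
        max(0.0, sum(weights[i][j] * activations[j] for j in range(n)))
        for i in range(n)
    ]

n = 200
random.seed(0)
w = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]
a = [1.0] * n
a = step(w, a)  # 200 * 200 = 40,000 multiply-adds for one step
```

Going from 20,000 to 2,000,000 neurons is a 100x jump in n, hence roughly 10,000x the per-step work for a dense network - which is why those figures track hardware generations so closely.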
If you added natural language to the Asimo, you'd be a stone's throw from C-3PO. But we're not trying to build C-3PO, because there's no money in it - not like there is in search, face recognition, novelty detection, etc. Only when there is a business need do technologies take off, and there isn't a great business need for a robot to do a job that a human can do cheaper.
Intentionality is the other hurdle you didn't mention, but then we can't really define what that is. We each know we want to do something, but how much of that is genetic and social programming and how much of it is free will? Before we start trying to judge machines or other animals, we should know what it is that we're trying to measure.
Our problem is human bias. We assume that other humans are perceiving the world in a similar way to us. It's nice and easy to do that. As soon as something else demonstrates intelligence, we compare it to humanity. When a machine goes above and beyond what a human can do, we write it off as not being part of what we consider intelligence. When a human acts in a way that does not fit with our perceptions, it is difficult for us to understand because of our deep-seated bias.
I meant universal as in, you can teach a human pretty much anything. We have hardware limitations (or OS/Firmware, however you define it) but we're pretty flexible in what we do. The examples you mentioned are all great, and we certainly have come far, but they're all specialized solutions, no?
I am not saying that AI is impossible; I am just convinced it'll take us another 40 years or so to have a silicon equivalent of a human - when we have fusion power and Linux on the desktop.
Also just for the record, I do not subscribe to the absurd notion that humans are somehow "special". We're just biological machines. Complex, but not supernatural or anything. If we are "intelligent", it follows logically that machines can be intelligent as well.
But I am VERY glad to hear you have a professional background on this topic... do you mind playing scientific advisor on the subject for notoriously underfunded worldbuilders?