I was reading this article on Slashdot the other day and I couldn’t help but think about how many times I’ve seen this before. I had actually started writing a post on this back in July, but I didn’t have enough ammunition to make my point. This article provides ample evidence that a user’s experience with PCs is not uniform from box to box. This is not just a matter of operating systems. Time and time again, I have experienced this myself. For example, I could not get iTunes to install on one of my Vista boxes. On an XP machine, Twitter doesn’t work. On my iMac, I can’t get Firefox 3 to install. My experience as a hardware fanatic is one thing. But what I wonder at is the experience of the average user. A non-geek could buy a computer with Vista on it, try to install iTunes, and fail. Another user might be banned from ever experiencing Twitter, never the wiser that the problem stems from the vagaries of that one machine.
The latest example concerns my daughters’ computers. I bought identical parts for them to make my life easier. The free Nero program that came with the DVD drives works on one of the computers but not the other. I have reinstalled Windows twice trying to resolve this. It simply won’t run. Yet it runs fine on the other one. I have a workaround: Sonic works fine on the machine that hates Nero, so I use Sonic. It’s no big deal, but it bugs me. Again, these are supposedly identical computers behaving differently.
Digital isn’t supposed to be like this. This is analog behavior. The OP from Slashdot was about how computers and cores really aren’t the same from machine to machine. Each box develops its own idiosyncrasies. I’m frankly amazed that computers work at all given this divergence.
But from an AI perspective, and especially from a genetic-algorithm perspective, this is crazy. You’d have to develop on several different boxes simultaneously to allow for the divergence.
I was talking about this with Jason, an AI researcher and the creator of Underworld Hockey Club (and also a Friday Night Party Line panelist), and I thought his comments were insightful:
“[O]ne interesting thing with genetic algorithms is that they learn with the computer. If I train a checkers player on the cluster in the lab, that player will not be as good when I run it on my own computer. It’s not stupid, but it’s not as good. It’s because it’s tuned to the specific compiler & floating point operations of the cluster. This is why we need online learning. Genetic algorithms simulate evolution, but there aren’t any mature methods to simulate learning during the life of an individual. A common belief that I share is that evolution contains 99% of the knowledge we need to survive, but without the 1% from learning, it is completely useless. Look at deer, for example. Baby deer (and other quadrupeds) are able to walk within minutes of being born. Clearly, the basic constructs of coordination are tied to DNA (baby deer don’t flail stupidly until they figure it out), but the deer need learning to make sure that their programming works with their specific bodies. One student in the lab is looking at neuroplasticity, which is the study of neural networks that can adapt to their environment in realtime. It’s really interesting stuff.”
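Jason’s point about a player being “tuned to the specific compiler & floating point operations” is easy to demonstrate: floating-point addition isn’t associative, so if one compiler sums the terms of a fitness function in a different order than another, the same inputs can produce different scores. A minimal Python sketch (my own illustration, not his code):

```python
# Floating-point addition is not associative: regrouping the same
# four numbers changes the result, which is how a program "tuned"
# to one machine's arithmetic can behave differently on another.
vals = [1e16, 1.0, -1e16, 1.0]

# Sum strictly left to right: the first 1.0 is absorbed by 1e16
# (it is smaller than one unit in the last place) and is lost.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Same numbers, different grouping: the two 1.0s combine first,
# so both survive the cancellation against -1e16.
regrouped = (vals[0] + (vals[1] + vals[3])) + vals[2]

print(left_to_right)  # 1.0
print(regrouped)      # 2.0
```

Both results are “correct” by floating-point rules; they just depend on evaluation order, which is exactly the kind of thing that differs between a lab cluster’s compiler and a home machine’s.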