Thursday, December 15, 2005

AI is not brain-dead, but perhaps it's mentally handicapped


My hero Marvin Minsky slammed me the other day: "AI has been brain-dead since the 1970s," he said, and with that slammed everyone else in my field. [see http://www.wired.com/news/print/0,1294,58714,00.html]

The main thrust of his problem with AI is that as a field it does not seem to be trying to tackle the hard problems. Specifically, he means general human-level intelligence. The example he points out is commonsense reasoning, of which CYC is the one great current project.

I have, particularly in the last year or so, grown dissatisfied with the kind of work a lot of people in AI are doing, for this same reason. For shorthand, I will call the project of trying to replicate human-level intelligence on a machine "the Great Work" (see my blog posting called "Why Artificial Intelligence is the most important problem to be working on" for a justification of this grandiose name). The focus on small problems is so great that often the researchers do not even have a story of how their findings constrain or fit into a general theory of AI. I think we are lucky that our field has such a relatively clearly defined objective, and I believe it's every AI researcher's responsibility to at least spend some time thinking about how their research contributes to the Great Work.

At the same time, it's very hard to do the Great Work. Not only is the mind a bogglingly thorny system; it's hard to get the Great Work funded, and it's hard to come up with testable hypotheses. It's hard to do it without borrowing systems from everybody else, which is a problem because people want to know what your contribution is. Interfacing systems together is seen as an inferior intellectual enterprise, even if it might be exactly what the Great Work requires. So people work on, say, tweaking A* (a search algorithm) in a particular domain. To that extent, I see where Minsky is coming from.

However, I disagree with his statement for two reasons. First, it's inflammatory, overly simplified, and dismissive of some excellent research. It's overstated. The years and years of research it took to get speech recognition up to snuff, so your credit card company can understand your voice on the phone, can hardly be classified as "brain-dead."

The other problem I have with it is that it focuses on high-level reasoning and ignores the importance of low-level sensory-motor and perceptual reasoning, which, in spite of how difficult it has proven to be, has made some progress. It's unfair to call that work "brain-dead" either.

So in summary, I am sympathetic to the thoughts behind Minsky's statement, but it's overstated. We can't say "retarded" anymore, unless you are a rapper, so I guess I would say that AI is "mentally handicapped." So come on, people, let's get on that Great Work. Oh wait, I am a rapper.

Ok, it's retarded.

Tuesday, December 13, 2005

Apparently my twin is a German pop star.

http://www.toepel-online.de/

See link for my twin.

What's wrong with documentaries and journalism

Just saw a documentary last night: Rize. It's about the culture of clowning and krumping in LA, which is basically street kids dressing up like clowns and having these energetic dance competitions. And though I enjoyed the movie, it reaffirmed the view I walked in with: don't trust documentaries.

It's not just documentaries I don't trust-- it's also journalism and, most recently, many books.

The basis of this reasoning is that truth is difficult to get at. It requires careful study, and personally, I believe that science is the only trustworthy way to determine what it's rational to believe about the world. It's not that science always arrives at the truth; it doesn't. It's not that non-science can't get lucky once in a while; it can. It's just that looking at current scientific research gives you what is most rational to believe right now, given the world's state of knowledge and theory.

Key to this is peer review, and that's where journalism, documentaries, and many books get into trouble. Peer review is the process of getting published through the approval of your scientific peers. If you've made a mistake, it's the reviewers' job to catch it. You might not have controlled for a variable. You might have ignored some other force, already discovered, that renders your hypothesis faulty. When you read a paper in a scientific journal, or see it at a scientific conference, you can trust to some degree that it's been screened by about three scientists who can vouch for its quality. That means that if you want to walk away believing what the article concludes, you can feel fairly warm and fuzzy about your justification for doing so.

The problem with documentaries and journalism is not that they can't get it right, but that since there's no peer review, you can't tell whether it was done right or not. The fact that you can create polemical, one-sided documentaries means that you can't tell, as a naive observer, whether the documentary you're watching is one of those or not. So when I watch documentaries I have this uncomfortable conflict in my head: believe it or don't? I'd rather skip the struggle and just watch fiction.

Another problem with documentaries is that the visual medium is not conducive to reporting quantitative data, which is often so key to understanding anything. Here's an example from Rize: they showed a long sequence in which the krumping kids were dancing, interspersed with similar-looking dances from African tribes. What am I to make of this? Did these inner-city kids watch a lot of the Discovery Channel, tape these dances, and run with them? Is the filmmaker trying to say there is something preserved through black culture running all the way back to Africa? Or, even more ridiculously, that there is something innate about black people that makes them dance this way? Of course, this question is not answered in the film. All we have is a sequence that suggests a connection but doesn't come out and say what it is, let alone try to justify it. So what is the observer rationally supposed to take from this? The verbal nature of writing allows very precise statements, and when you have precise statements, the required evaluation of them becomes much clearer.

[From my friend Daniel Saunders after reading my essay (LaChapelle is the director), from a salon.com interview with David LaChapelle, http://www.salon.com/ent/feature/2005/01/27/rize/index3.html:

"They create family where there was none and they create art where there wasn't any. I was given every art program when I was a kid. They've had none. They've had no African studies. They've never seen African dance. They've never seen African face paint. It's in their blood."

Ouch.]

What's my problem with journalism? Well, journalism has one up on documentaries because at least they have fact-checkers. So I think it's fine to trust journalism to get the facts right. However, there is the problem of which facts are reported, and in what context. Like a documentary, the presentation of facts usually makes a subtle, implied point. It's this point, often left implicit, that is not a fact that can be "checked." It is, whether the journalists know it or not, a scientific theory that needs empirical evaluation. Even worse, sometimes the theory is not implicit. There's a respected magazine called The Economist. I was reading an article there about vehicle safety in which they wrote that Americans accept accidents as inevitable. The facts they drew on were that Americans buy big, bulky cars with poor maneuverability, in spite of the fact that you're statistically safer in a Jetta than in an SUV. But to say a culture accepts accidents as inevitable is a major claim that is underconstrained by those facts. Why on earth did they publish such a thing? It's a plausible explanation, but there are a hundred roads called "Plausible" on the way to truth. It could be that people buy SUVs purely for style reasons. It could be that they like sitting up higher, and their reasons for buying have nothing to do with safety. It could be that they don't really understand how smaller cars can avoid accidents. As you can see, the Economist's statement assumes a lot. Figuring this out is the job of a scientist.

Now we get to books. The problem with books, even "scientific" books, is that, believe it or not, they are not peer-reviewed either. Gladwell's awful "The Tipping Point" is a great case in point. He's a great writer, and the book is full of interesting information, but it's terribly unscientific, even though he's arguing for a scientific theory. Yesterday I bought a book for my father for Christmas (you can keep reading, Dad; I'm taking it back today) called "The Skeptical Environmentalist," by Lomborg. It looked great: full of charts, a quarter of the book given over to endnotes and a bibliography, written by a statistician, with quotes of praise from Matt Ridley (author of Genome, a great book), the Wall Street Journal, and the Times. It looked like a great, data-driven review of the state of the environment. Later I got on the web and found a huge outcry from scientists blasting the book. I could not find a single scientist who praised it (Matt Ridley is a science writer, but not a scientist). As with Diamond's "Collapse" (which is on the opposite side of the environmental agenda), the science in books is often either poorly reported (Collapse) or poorly done (Lomborg).

So now I'm even skeptical of books. For me to trust a non-fiction book now, it needs to be written by a scientist I trust, with a heavy publication record in the same field as the book. See what I mean about getting crotchety?

A caveat: Autobiographies and travel stories have their value. In this essay I am talking primarily about big truths worth knowing. Big things about the world we live in. Little details about the lives of individuals perhaps need not be scientific in their reporting.

Anyway, this is the long-winded reason that if you invite me over to watch a documentary, I might just stay home and watch "Spirited Away." If I'm watching untrustworthy material, it might as well have a good story.

Wednesday, December 07, 2005

A project-based desktop environment

In graduate school I used Unix, and one of the things I loved about it was the virtual desktop concept.

If you know what virtual desktops are, you can skip this paragraph. A virtual desktop system offers a grid of screens. Your computer displays one at a time, depending on which is selected. You can select a desktop by clicking its icon, or sometimes with Ctrl-arrow or the like. Each desktop has some number of windows in some arrangement, and each time you return to a desktop, those windows are right where you left them.
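For the programmers reading, here's a minimal sketch of that concept in Python. This is a toy model I made up to illustrate the idea, not the API of any real window manager:

    # Toy model of a virtual desktop grid: each cell remembers its own
    # windows, and switching desktops brings them back as you left them.

    class VirtualDesktops:
        def __init__(self, rows, cols):
            self.grid = [[[] for _ in range(cols)] for _ in range(rows)]
            self.row, self.col = 0, 0  # the currently selected desktop

        def current(self):
            return self.grid[self.row][self.col]

        def open_window(self, title):
            self.current().append(title)

        def switch(self, drow, dcol):
            # Ctrl-arrow style navigation around the grid.
            self.row = (self.row + drow) % len(self.grid)
            self.col = (self.col + dcol) % len(self.grid[0])
            return self.current()  # windows are right where you left them

    desks = VirtualDesktops(2, 2)
    desks.open_window("emacs: paper.tex")
    desks.switch(0, 1)            # move one desktop to the right
    desks.open_window("xterm")
    print(desks.switch(0, -1))    # back again: ['emacs: paper.tex']

The key property is that each desktop keeps its own window list; nothing is lost when you navigate away.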

When I arrived at Queen's I started using Windows and very much missed virtual desktops, and this is why: I always have many, many windows open, each associated with a different project. Often there would be so many windows on the taskbar that I couldn't tell what they were for by looking at them. To get to the window I was looking for I would have to use Alt-Tab, which is annoying because you pass through the windows in one direction and have to select each one to see what it's for. That's a lot of sequential reading.

I realized that with a few exceptions (email, music players) each window is associated with a project. So, for example, I have a project of burning my MP3s to CD-ROMs. For this I have the CD-burning software's window and an Explorer window with my music. When I burn a CD, I use this window to move the burned music to its own directory so I know not to burn it twice. Without virtual desktops, to work on this project I'd need to raise these two windows, after a search for them, and minimize the others, both because they're distracting and for aesthetic reasons. For a multitasker like me, this is unacceptable.

The function of virtual desktops (for me, anyway) is to separate projects. Now I have installed vern, and I have a desktop for each project. I go to the right desktop, and the things I need for the project are there.
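To make the project-based idea concrete, here's a minimal sketch in Python. All the project names and window titles are hypothetical, and no OS or desktop manager I know of exposes this exact interface:

    # Sketch of the project-based desktop idea: windows are grouped by
    # project, and focusing a project raises exactly its windows while
    # minimizing everything else.

    projects = {
        "burn-mp3s":  ["CD-burning software", "Explorer: My Music"],
        "compint":    ["emacs: paper.tex", "xterm (latex)"],
        "job-search": ["emacs: coverletter.tex", "Mozilla: postings"],
    }

    def focus(project):
        for name, windows in projects.items():
            for window in windows:
                action = "raise" if name == project else "minimize"
                print(action + ":", window)

    focus("compint")  # raises the paper and its xterm, hides the rest

The point of the sketch is that the unit of organization is the project, not the individual window.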

A couple of gripes about using vern on Windows: First, the taskbar still shows all windows from all desktops. And second, when I select a window from the taskbar, sometimes it takes me to that window on the right desktop, but sometimes it pulls the window into the current desktop. I haven't taken the time to investigate, but from casual use it appears unpredictable.

Mac OS does not have virtual desktops either, which I find kind of baffling. Are my projects idiosyncratic in the way each one uses multiple applications and windows?

In conclusion, I suggest that OS desktop interface designers think in terms of project organization. Virtual desktops are a first step, but Windows and Mac OS have yet to take even that step.

Postscript: My points are made above. The following is just to show how unwieldy all my windows can be; it's a case study of my current desktop. I have six virtual desktops...

One has the Palm Pilot desktop software I'm typing this into, my Mozilla browser, Trillian (my instant messenger), and one instant-messenger conversation. Thank goodness Trillian puts all conversations as tabs in one window. I love Mozilla because it has tabs for different webpages in a single window. The downside is that sometimes I want to go to another tab, but since I'm thinking of it as another window, I use Alt-Tab to try, unsuccessfully, to get to it. As far as I know there's no keyboard shortcut for navigating between Mozilla tabs.

Another desktop has an Explorer window open to my songs, and Windows Media Player, currently playing the instrumental version of Terror Squad's "Lean Back."

Another desktop has the window to an external hard drive with one of VisionQuest's plays on it, digitized. I need to edit it down to a manageable size, so soon the movie editor will be on this desktop too.

Another music desktop has information about the Beatnuts. There was a Launchcast Internet radio player on there, but it crashed.

There's also a Word document I was making into a PDF for someone who wanted a copy of the paper. Behind it is an Explorer window with the files related to the paper.

Two desktops are empty.

Finally, we have one desktop with a Unix emulator on it. This is where it gets interesting, because the emulator has its own virtual desktops too, all four of which are in use:

The first Unix desktop is called "jobs." Yes, in Unix you can name your virtual desktops, which is great. In "jobs" I have an emacs open with all the places I want to apply to. The emacs window is split into two panes so I can read my notes on the places while I'm looking at the list. There's also a Mozilla there for job searching, online applications, and emailing applications. Yet another emacs window has my cover letter in it, where I customize the cover letter for each place I'm applying to. An xterm is there so I can LaTeX the cover letters.

Bored yet?

The second Unix desktop is called "Compint," where I'm editing my recently accepted Computational Intelligence paper. In it I have the emacs window with the paper and an xterm for running LaTeX. I will probably bring up another emacs for the BibTeX file.

The third Unix desktop is called "Gaia" and contains only a window with an ssh connection to my Georgia Tech account, where I keep my webpages, which I'm constantly updating.

The last Unix desktop (I wish there were more than four) is called "bigpaper," and it's for another paper I'm working on: emacs and an xterm for the LaTeX.

So much for all that. Keep in mind that I had to restart my computer today; often it's even more complicated than this. Lately I often have a programming desktop and a desktop for working on my Cognitive Science paper. That would fill all six desktops.

It takes a long time to get all these windows in place, but I like to work for an hour on one thing, then an hour on another, switching between them when I like. One of the reasons I've never wanted a laptop is that this state is not preserved. In other words, when you reboot, which happens all the time with a laptop, you have to reconstruct where you were and what you were doing and bring up all the windows again. I love coming in every morning and having each desktop there, ready for me to start work.
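If desktop environments modeled projects explicitly, as in the sketches above, preserving that state across reboots would be simple: serialize the layout at shutdown and reload it at login. Here's a minimal, hypothetical sketch of just the bookkeeping (a real system would also have to relaunch the applications):

    import json

    # Sketch: save and restore the project layout across reboots.
    # This only remembers which windows belong to which project.

    def save_layout(projects, path="layout.json"):
        with open(path, "w") as f:
            json.dump(projects, f, indent=2)

    def load_layout(path="layout.json"):
        with open(path) as f:
            return json.load(f)

    projects = {"compint": ["emacs: paper.tex", "xterm (latex)"]}
    save_layout(projects)    # at shutdown
    print(load_layout())     # at login: every project's windows listed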

Tuesday, December 06, 2005

Life Expectancy


There is a fun website for determining how long you are expected to live at http://www.nmfn.com/tn/learnctr--lifeevents--longevity_game

I'm supposed to live to be 87.

Monday, December 05, 2005

I have a fine-looking spinal cord. Ladies, control yourselves.


I was a volunteer for an fMRI experiment. I was paid a little bit and was told I could have a copy of the scan. I got put in the spinal cord group rather than the brain group. They asked me if that was okay, and I told them that I'd rather be in the brain group but I appreciated the importance of random assignment. So spinal cord it is. Here's the picture I got. The experiment involved making my hand cold. I don't know if this is my spinal cord while my hand was cold or not. Feel free to give your opinions in the comments section.

I wanted to do this because I read and cite fMRI studies but had never experienced one. Also, it was a neat challenge I was up for: staying absolutely still for two hours. I am proud to say I was very good at holding still. When I got out of there I could barely move. Now I know why you toss and turn during sleep. I'd hate to wake up like that every morning.

Thursday, December 01, 2005

Knowing the definition of the word for something does not mean you know anything about that something

So I'm at this philosophy talk and the guy is talking about beliefs. Beliefs are psychological entities. They are mysterious psychological entities that people like me are trying to understand. What does this guy do? He starts talking about the root of the word "belief." Its Latin roots or whatever. This kind of talk has no place in a discussion of what beliefs are. Knowing the name of something doesn't mean you know anything about that something. Knowing the definition of the word for something does not mean you know anything about that something. That is, you can't appeal to a word's definition to understand the referent of that word.

Why is it that philosophers don't seem to get this? Or at least, they don't act like they get it. I'd be embarrassed to, say, talk about the root of the word "brain" or "mind" in a psychology talk.

A few weeks ago another philosopher was talking about the difference between anger and indignation. According to his definitions (which came, laughably, from ancient Greek texts), being angry with someone is different from being indignant with someone: anger is a feeling you get toward an agent when that agent has hurt (in your perception) you or yours ("yours" meaning someone personally connected to you), while being indignant means you're upset on behalf of someone who's not you or yours. Anyway, on this basis he said that the two were separate and that indignation was not even a kind of anger. And that these distinctions were important.

I'm thinking, let me get this straight-- anger is a complex biological and psychological phenomenon that can arise from many different stimuli. And this guy, based on ancient Greek texts, is going to tell me that the feeling you happen to get when someone hurts a person not personally connected to you is a completely different feeling? Docta, please!

One of the main goals of intellectual fields is to understand the best categories for the world, and it's a mistake to use the words your language happens to have for things as evidence for what the best category breakdown is. And, in case you still need convincing, this is why:

The words our languages use, and the categories those words refer to, were selected over time because they were useful to us for living in our world. So the words reflect a naive view of the world (naive physics, folk psychology, etc.), which has been shown again and again to be wrong in various ways. Science shows us a better way. With rigorous inquiry, we discover better categories, and make up new words or redefine old ones to accommodate those discoveries. A great example is the distinction between speed and velocity, and Einstein's redefinition of space.

It's a trap, being all into words and what people say they mean. Such knowledge can obscure what's really true, or different, fruitful ways of looking at the world. As Minsky says in "Society of Mind," words should be our servants, not our masters.