Monday, June 25, 2007

The Probability of Extra-Terrestrial Intelligence

Is there any good reason to think there is intelligent life in the
universe that is not from Earth?

There have been UFO sightings, and an organization dedicated to
looking for extra-terrestrial communications (SETI), but nothing very
convincing has come out of this line of evidence, scientifically
speaking. That is, it is not reasonable to believe we have been
visited or contacted in any way by an extra-terrestrial intelligence
(ETI). If you need convincing of this, please, please, read the
relevant chapter of Shermer's Why People Believe Weird Things. In
fact, read the whole book.


So maybe there are ETIs out there we just haven't heard from yet. What
is the probability of this? One attempt to estimate the number of ETIs
in our galaxy is the Drake equation:

number = N x f_p x n_e x f_l x f_i x f_c x f_L

(The underscored letters are my attempt to do subscripts in plain text.)

This means that the number of radio-communicating ETIs is the number
of stars in the Milky Way, times the fraction with orbiting planets,
times the average number of planets capable of supporting life, times
the fraction on which life actually evolves, times the fraction that
evolves intelligent life, times the fraction that develops radio
communication, times the fraction of its life during which the ETI is
communicating with radio waves.
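
Since the equation is just a product of factors, it's easy to sketch
a little calculator for it. Every parameter value below is a made-up
illustrative guess (not Sagan's or Drake's actual figures); swap in
your own and watch the answer swing by orders of magnitude:

```python
# Back-of-envelope Drake equation calculator.
# All inputs are invented guesses for illustration, not measurements.

def drake(n_stars, f_p, n_e, f_l, f_i, f_c, f_L):
    """Estimated number of radio-communicating ETIs in the galaxy."""
    return n_stars * f_p * n_e * f_l * f_i * f_c * f_L

estimate = drake(
    n_stars=4e11,  # stars in the Milky Way (rough)
    f_p=0.5,       # fraction of stars with orbiting planets
    n_e=2,         # planets per such star capable of supporting life
    f_l=0.5,       # fraction of those on which life actually evolves
    f_i=0.01,      # fraction that evolve intelligent life
    f_c=0.1,       # fraction that develop radio communication
    f_L=1e-5,      # fraction of the planet's life spent communicating
)
print(f"{estimate:,.0f} civilizations")  # these inputs give 2,000
```

Nudging f_l from 0.5 to 0.0005 drops the estimate from thousands to
about two, which is the whole problem with the equation in a nutshell.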

Depending on what numbers you put in these variables (have any good
idea of the fraction of planets on which life actually evolves?) you
can get widely differing numbers, but most people who try end up with
high numbers (obviously there could be a biased-sample effect
here). For example, Carl Sagan estimated a million; Drake estimated
ten thousand.

How encouraging! (i.e., if you don't think they would be out to destroy us.)

Then why has SETI failed to hear from them? Why haven't they shown up
on our doorsteps and made us welcome to the galactic community? There
are a few answers for this.

SETI's answer is that we haven't been looking at enough wavelengths in
enough portions of the sky to see the signal.

An answer from science fiction is that we are not ready. That is,
perhaps evolving worlds are left alone until they reach a certain
level of technological competence. This is the "prime directive" idea
from Star Trek. In the film Star Trek: First Contact, as soon
as the first warp drive is tested on Earth, the Vulcans just... show
up. It's exciting to think this might be the case. What tech are they
waiting for?

I don't find it very convincing, though, because you'd think that,
even if they'd come up with new forms of communication, they'd
still be communicating with light waves (radio, gamma, etc.) and we'd
"hear" it. Or if they stopped, it still takes a while for light to get
to us, and those communications would still be propagating.

And as Ray Kurzweil points out, it seems unlikely that all of
the civilizations would follow this rule to the letter.

There are so many variables in the Drake equation. And we don't have
very good estimates of, well, any of them. Other reasonable numbers
make N = 1, which would be... us.


Ray Kurzweil, in his book The Singularity Is Near, presents an
interesting take on this debate, which I will outline below.

Astronomer N. S. Kardashev introduced the idea of Type II and Type III
civilizations. Type II civilizations have harnessed the power of their
own star for use in communications. Type III have done so with their
own galaxy. According to current trends, our civilization will become
a Type II in the twenty-second century.

If there are billions of civilizations ahead of us in the galaxy,
there should be many Type IIs out there, and some Type IIIs as
well. But even one Type II would be sending out enough communications
to be picked up by SETI.
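
To get a feel for the scale (using round public figures, approximate
and for illustration only): a Sun-like star puts out about 3.8 x 10^26
watts, while humanity currently consumes on the order of 2 x 10^13
watts. So a Type II civilization commands roughly thirteen orders of
magnitude more power than we do, which is why even a tiny leakage of
it as radio waves should be hard to miss:

```python
import math

# Round order-of-magnitude figures (approximate, for illustration only).
SUN_OUTPUT_W = 3.8e26   # total power output of a Sun-like star
HUMANITY_W = 2e13       # rough current human power consumption
MILKY_WAY_STARS = 4e11  # rough star count, relevant for Type III

type_ii_gap = SUN_OUTPUT_W / HUMANITY_W
type_iii_gap = type_ii_gap * MILKY_WAY_STARS

print(f"Type II is ~10^{round(math.log10(type_ii_gap))} times our power use")
print(f"Type III is ~10^{round(math.log10(type_iii_gap))} times our power use")
```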

Therefore, it's unlikely that there are other ETIs in the galaxy.


This is not to say there is no other life. It could be that there is
other life out there but it's not that smart. We might find
single-celled organisms or something. Right now, though, Kurzweil's
argument seems pretty convincing to me. If there were a few planets
with life, why not many? And if many, why did no others become
intelligent? Sad as it makes me, we just might be all there is out there.

This back and forth is a great example of how our opinions on
mysterious things can switch around based on new arguments that relate
to things we never even thought relevant. I would not be the least bit
surprised if my mind changed again at some point, based on some new
argument that involves some information about something I can't
predict now.
Pictured is a view I took from an airplane with my cell phone.


Kurzweil, R. (2005). The Singularity Is Near. Penguin Books.

Shermer, M. (1997). Why People Believe Weird Things. Freeman.

Wednesday, June 20, 2007

Anybody Want To Move To Ottawa?

My goal has always been to move somewhere, and settle down, then talk all the people I care about most into coming to live there with me. I grew up in America, so most of my friends are American. Now I live in Canada, and, admittedly, asking people to change countries is kind of a tall order. So I was quite happy to see that Ottawa ranks very well in "quality of life" worldwide.

Ottawa ranks 18th (tied with Luxembourg)! See the survey results and the top 50 cities at

Personally, I've lived in several cities (Lake George, Oswego, Beijing, Shanghai, Los Alamos, Santa Fe, Atlanta, Kingston, and Ottawa, in that order) and I like Ottawa the best. The reasons? It has what I want out of a city (arts, night clubs, universities, etc.) and they tend to be close by-- I live in "centretown" and I can walk to almost everything cool. Also, it's very, very beautiful. The picture you see was taken recently on a bridge over the canal. On the downside, yes, it's colder than bathing in a tub of Slush Puppie every winter.

Canada did very well in general in the survey. Almost every major city in Canada is featured in the top 50.

Never underestimate the power of poutine.

Sunday, June 17, 2007

Falsifiability and the Importance of the Theory Creation Process

Karl Popper came up with this great idea: Good scientific theories should be falsifiable. If you are already familiar with this idea and what it means, go ahead and skip the next paragraph.

If a theory is falsifiable, it makes predictions about what cannot be
observed. That is, it sticks its neck out and says that there are some
particular things that cannot happen. For example, if I say that all
swans are white, this is falsifiable because all you would need to do
is observe a single non-white swan to falsify my theory. A theory that
says that people like to dance to songs with "energy," on the other
hand, could very well be non-falsifiable if your measure of a song's
"energy" is how likely people are to dance to it. In this case, you'd
need an independent measure of "energy," and then see if that measure
correlates with the probability of dancing.

When a theory is "falsifiable," what this means is that it's
potentially falsifiable, given certain possible observations. When a
theory makes strong falsifiable predictions, yet fails to get
falsified in spite of the efforts of scientists, it is very
impressive. For example, relativity predicted that starlight would
bend with the gravitational pull of the sun. We had to wait for an
eclipse (in 1919) to measure the displacement of the stars around the
sun (because the sun is too bright otherwise). It turns out the stars
did look out of place, just as Einstein predicted. If they had not,
his theory would have been "falsified," which, strictly speaking,
means it had been found to be not completely true, and in need of
revision or abandonment.
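
The "energy" example becomes testable once you have an independent
measure: collect energy ratings from listeners who never see anyone
dance, then check whether those ratings correlate with observed dance
rates. Here is a minimal sketch; all the data in it is fabricated
purely to illustrate the method:

```python
# Testing the "energetic songs make people dance" theory against an
# independent measure of "energy." The numbers below are invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Independent "energy" ratings (from listeners who never saw the dancing)
energy = [2.0, 5.5, 3.0, 8.0, 6.5, 9.0]
# Fraction of club-goers observed dancing to each song
dance_rate = [0.10, 0.40, 0.20, 0.75, 0.55, 0.80]

r = pearson_r(energy, dance_rate)
print(f"r = {r:.2f}")
```

A strongly positive r supports the theory; an r near zero, with enough
data, counts against it. That possibility of counting against it is
exactly what makes the claim falsifiable.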

The problem is that some scientists take this idea a little too
seriously and think that unfalsifiable theory has no place at all in
the scientific process. Such people seriously underestimate the
complexity and importance of the process of theory creation.

Creating a theory is, in essence, a highly creative act, and as such
can be affected by all kinds of things: random events in a person's
life, art, and yes, other people's ideas, be they falsifiable or not.

It's kind of understandable why people downplay scientific
creativity. Lay people think science is a straightforward, logical
progression from data to truth. This is wildly off-base.

But even people who study science don't pay enough attention to theory
creation. The field of "science studies" consists of the following
kinds of people:

  • philosophers of science

  • sociologists of science

  • historians of science

  • psychologists of science

Philosophers of science tend to focus on the normative aspects
of science. That is, how scientists ought to do things. Popper
was one of these. Those who do descriptive work, that is, who
describe how scientists actually do what they do, mostly limit
themselves to theory evaluation, or how scientists choose between
competing theories, rather than how the theories are created in the
first place.

Sociologists of science view science, for the most part, as a
social activity where power relations dominate. They are not
particularly interested in where theory comes from, or, sometimes,
what the theories even are.

Historians of science often do not speculate on the inner
mental workings of the scientists in question. They stick to their
field of expertise, which is telling a coherent story from historical
records.

Psychologists of science are few. They tend to study things
like theory preference and how hypotheses are tested. It's harder to generalize about this group, because their work is very diverse.

Of all of the scholars working in science studies, only a handful are
actually approaching theory creation as a creative endeavour. The ones
that do include myself, Kevin Dunbar, Paul Thagard, Ryan Tweney,
Ronald Giere, and my former co-advisor Nancy Nersessian.

What little we know about scientific creativity suggests, however,
that interesting theories often come out of analogies with analogues
from outside the discipline. James Clerk Maxwell, for example, used
an analogy with a gear system to help come up with his electromagnetic
equations (Nersessian 1984, 1990). I modeled a part of this theory creation
as a part of my dissertation work (Davies, Nersessian, & Goel, 2001).

Given that ideas for scientific theories can come from such diverse
sources as moving trains (Einstein), dreams (Kekule), and physical
machinery (Maxwell), it should not be surprising that good ideas can
come out of non-falsifiable theories.

Freud's ideas, the poster children for non-falsifiability, continue to
inspire people to create new, falsifiable theories (e.g., Minsky, 2006).

Another way to look at it is in terms of "research traditions." Laudan,
a philosopher of science, coined this term (Laudan, 1978). Behaviorism, for example, is not a falsifiable theory. It's a framework, an approach, that uses a particular way of looking at the phenomena it studies.

When learning about a theory that is unfalsifiable, try to think of it
as a way to look at something, an approach, a springboard of ideas
rather than something to dismiss out of hand. It just might help you
come up with something wonderful.
The pictured photo is one I took during the Ottawa Tulip Festival.


Davies, J. R., Nersessian, N. J. & Goel, A. K. (2001). The role of visual analogy in scientific discovery. Model-Based Reasoning: Scientific Discovery, Technological Innovation, Values. Pavia, Italy.

Laudan, L. (1978). Progress and Its Problems. University of California Press.

Minsky, M. (2006). The Emotion Machine. Simon & Schuster.

Nersessian, N. J. (1984, 1990). Faraday to Einstein: Constructing Meaning in Scientific Theories. Kluwer.

Popper, K. (2002, reprint). Conjectures and Refutations. Routledge.

Thursday, June 14, 2007

Other People's Pac-Man Art

Since I have a gallery of Pac-Man art, people like to send me links to other Pac-Man art out there. I think I'm the only one who really did a whole series, though. There's some cool stuff out there.

This image is from
select a work > 2003-2004 > pac-man (adv)

I'm not sure I like the teeth, but it would not be much of a sculpture without them.

I'm struck by how much this looks like a pug skull...

This one's called "You Promised No One Would Get Hurt," by Andrew Bell. I think the facial expressions in this one are great, and I like how Pac-Man's mouth is full of dots.

This one is by Idan Shani, an illustration made for the erotic visual arts magazine Forno "1st issue - Cunnilingus".

I can't remember where I found this one, but I think it's by someone named "Martin." I love how the characters sort of mesh smoothly with a flexible floor.
And finally, there's some tagger in Ottawa who likes to paint ghosts around centretown, my hood.

It's amazing how much Acrobat can zoom

I like to write once in a while about what kinds of scientific things I'm currently researching. Sometimes a snapshot of my screen says it all. As you can see, lately I've been very interested in the word "to."

Monday, June 11, 2007

Computer Games As Art

On several occasions I've seen a computer game review that stated that the arrival of this or that game has finally shown that computer games
can be works of art. To me it's clear that computer games are art, but what is unclear is in which ways they are art and in which ways they
are not.

Nobody argues that computer games have art, in the sense of visual art. Even simple games like Pac-Man have graphics that must be designed. Particularly in the early years of computer games, the constraints of the computer systems were enormous. Creating a cute or attractive game with so few pixels and so little processing was an enormous challenge.

But certainly there's more to it than that. Computer games are not
merely a platform for traditional visual art. Computer games
are, like film, multi-media experiences. Many of the criteria one
would use for film are applicable to computer games.

In terms of visual art, they include character design, set design,
costume, animation, color, lighting, composition, etc.

In terms of sound, there are sound effects, voices that are acted, and
music. The game "Gears of War" for Xbox 360 is soundtracked like an
action movie.

In terms of narrative, computer games have stories that can be
interesting or sucky. In general, the stories do not mesh well with
the gameplay. That is, the story is often delivered in "cut scenes"
between the action, and you can usually ignore them completely and
still play the game just fine. But at a finer-grain level the gameplay
itself forms a story.

But these are aspects of computer games that are shared with film; we
can evaluate them on similar criteria. What makes computer games
different?

Well, you can play a videogame. It is the nature of gameplay
that sets computer games apart from movies. Games are
interactive. Gameplay can be evaluated in these ways:

  • Fun. Simply put, is the game fun to play? This is
    independent, for the most part, of the graphics. To take some
    extreme examples, Solitaire and Tetris (and puzzle games in general)
    are loads of fun, and the graphics are trivial. Fun can be thought
    of in two related ways: Do you feel happy when playing, and do you
    want to play it more?

  • Innovation. Computer games get more respect when they
    pioneer a new kind of gameplay. Innovation can come through new
    computer interfaces (e.g. "Dance Dance Revolution," "Centipede,"
    and games that use the Nintendo Wii controller), or through new
    on-screen gameplay paradigms (e.g. "Dune," "Katamari Damacy,"
    "Dungeon Keeper," "Castle Wolfenstein.")

It's important to distinguish these. Great games are not always
innovative. The original "Warcraft" and "Total Annihilation" were
really well-done RTS (Real-Time Strategy) games, but they were not
the first. Likewise, the original first-person shooter was
"Wolfenstein 3D," but later games such as "Doom" and "Half-Life" became the
superb examples of the genre.

I do some consulting for video game design, and I often feel the
tension between being original and being good. Being good cannot be
planned in advance, for the most part. If you're doing a known genre,
such as RTS, first-person shooter, or platformer, what makes the game
fun or not depends on subtle tweaking of the interface, game elements,
difficulty, etc. It's not something you can perfect until you're
actually playtesting the game. Unfortunately, with ship date deadlines
and the difficulty of planning programming projects, these crucial
tweaks are often not done, and the game is shipped as soon as it's
playable and free of obvious bugs. This is why the second versions of
computer games are often so much better than the first. They are not
that different from the first, in some cases, but they have gotten it
just right the second time around.

So how can we look at gameplay as art? It depends on your view of
art. Since I'm writing the essay, we'll use mine: the compellingness
theory of art. The goal of a work of art is to be compelling. To be
compelling means to make the audience of the work want to
experience the art, usually through intellectual stimulation,
emotional response, or sensory pleasure.

Gameplay is one aspect of a computer game. According to compellingness
theory, the goal of a gameplay design is to compel the audience (in
this case, the player) to play the game. The innovation in the
gameplay makes the game more interesting, which is compelling, and the
fun makes the player want to play more, which is compelling by
definition.

Gameplay is designed, like other art forms, and has the goal of
compelling the audience, like other art forms. It's partially
engineering and craft, but so are many art forms. Gameplay design should,
therefore, be viewed as art.

But isn't there gameplay design in non-computer games, like chess and
Trivial Pursuit? Yes, I think that game designers, whether they work
with board games, card games, or computer games, should be thought of
as artists and the products of their creativity as works of art with
game-design-specific criteria for their evaluation. We can look at
visual art in terms of symbolism, composition, color choices,
etc. What are the ways we can look at gameplay experiences? As far as
I know, this is still unexplored territory.

Sunday, June 03, 2007

Be Skeptical of Good Writers

One time I was defending atheism to some friends over dinner. Some of
the more spiritual members of the conversation were complaining that
science was always changing, so you never really knew anything with
absolute certainty. I acknowledged this point, saying that most of
scientific knowledge today might be, strictly speaking, false, but
science provides what is most rational to believe at any given
point. Jen said "So you'd rather have a scientific falsehood than a
spiritual truth?"

Rachel said "Ooooo! Good point!"

Actually it's a terrible point, but damn, it sounded good. Looking at
that argument as a war, I lost some ground there, not because she was
right, but because I'd been bested, momentarily, by her rhetorical
skill.

It's wrong because we have even fewer reasons to believe spiritual
dogma than scientific facts. At least science has a built-in
correction mechanism.

It sounds good, though, for the following reasons:

  • It aligns science with falsehood and spirituality with
    truth. I'd just admitted that most science facts were probably
    false, and that was enough to make Rachel swallow the whole point.
  • It sounds like it's forcing the audience to make a choice
    between truth and falsehood, rather than between science and
    spirituality. I could have just as easily asked "So you'd rather
    have a spiritual falsehood than a scientific truth?", and, in the
    right conversational context, this would have sounded just fine, and
    Rachel would probably have said "Ooooo! Good point!"

If anyone can think of other reasons why it sounded so good I'd love
to hear them.

We all know of smooth talkers who can feed you a lot of crap and make
it sound like gold, but people don't use the same thinking as often
with writers. Well, I do.

I was talking with my friend Guen Davies (no relation) about author
Milan Kundera, who wrote The Unbearable Lightness Of Being. She
said something like "He's such a good writer that you're convinced of
what he's saying, and only on later reflection realize he's completely
wrong." Now, I'm not supporting this view of Milan Kundera, but I do
support this argument for Malcolm Gladwell.

Malcolm Gladwell is a New Yorker writer, and a damn good one. I
read all his articles, because they tend to be fascinating,
science-related, and, above all, extraordinarily well-written. I've
already complained about one of his articles in my blog entry about
fact checking, and this essay continues along similar lines.

When he's writing for The New Yorker he's kept in check, more
or less, by the fact checkers. It's when he writes his own books he
goes off the deep end. I'm mainly talking about The Tipping Point,
which is an excellent read and contains a lot of really
sweet-smelling crap.

When I notice that the writing is really good, I make sure I'm a
little more skeptical, because I know good writing can get your
defenses down.