Keith Douglas' Web Page

About me Find out who I am and what I do.
My resumé A copy of my resumé and other documentation about my education and work experience for employers and the curious.
Reviews, theses, articles, presentations A collection of papers from my work, categorized and annotated.
Current research projects What I am currently working on, including some non-research material.
Interesting people People professionally "connected" to me in some way.
Interesting organizations Organizations I am "connected" to. (Some rather loosely.)
Intellectual/professional influences Influences on my work, including an organization chart. Here you can also find brief reviews I have written of hundreds of good books on philosophy and other subjects.
Professional resources Research sources, associates programs, etc.
What is the philosophy of computing? A brief introduction to my primary professional interest.
My intellectual heroes A partial list of important people. Limited to the dead.
My educational philosophy As a sometime teacher I've developed one. Includes book resources.

Book Influences - Computing: Artificial Intelligence

Abductive Inference: Computation, Philosophy, Technology Josephson and Josephson (eds.) A small (<300 pages) introduction to an important area where computing meets philosophy. Abductive inference is reasoning of the form "B; but B would be a matter of course if A; therefore it is very likely that A." It is also called (by some) "inference to the best explanation". This book discusses increasingly sophisticated computer programs that perform this task and defends the notion as one used in science, technology (notably in medical diagnosis) and elsewhere. Weaknesses of the otherwise quite decent book include insufficient computational discussion (there is no source code in the book, only algorithm sketches) and almost no discussion of the problem of the generation of hypotheses. This latter issue is in my view the most philosophically interesting, psychologically baffling and computationally difficult to implement.
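The abductive pattern described above can be sketched computationally. Since the book itself gives only algorithm sketches, the following is a minimal toy illustration of "inference to the best explanation", under an assumed scoring rule (coverage of the observed findings, with a crude simplicity bias as a tiebreaker); the medical-diagnosis domain and hypothesis names are invented for the example.

```python
# A toy sketch of abductive inference ("inference to the best
# explanation"). Each hypothesis is scored by how many of the
# observed findings it would make "a matter of course", with a
# penalty for positing more. The scoring rule and the example
# hypotheses are illustrative assumptions, not from the book.

def best_explanation(observations, hypotheses):
    """Return the hypothesis that best explains the observations.

    `observations` is a set of findings; `hypotheses` maps each
    hypothesis name to the set of findings it would explain.
    """
    def score(name):
        explained = hypotheses[name]
        # Reward coverage of the observations; break ties in favour
        # of hypotheses that posit less (a crude simplicity bias).
        return (len(observations & explained), -len(explained))
    return max(hypotheses, key=score)

# Toy medical-diagnosis example in the spirit of the book's domain.
findings = {"fever", "cough"}
candidates = {
    "flu":     {"fever", "cough", "aches"},
    "allergy": {"cough", "sneezing"},
    "sunburn": {"fever"},
}
print(best_explanation(findings, candidates))  # flu
```

Note what the sketch deliberately leaves out: the candidate hypotheses are handed to the function ready-made, which is exactly the hypothesis-generation problem the review flags as the hard part.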
Artificial Intelligence: A New Synthesis Nilsson A whirlwind tour of many approaches in AI. Very little code (pseudo or otherwise) is actually presented, but the introduction to many different approaches is welcome. Robot vision, propositional and predicate logic, modal logic, Bayes networks, neural networks, probability theory, multiagent systems, searching on graphs (including alpha-beta pruning) and more are all discussed. In a way, the book is like a very detailed extended bibliography.
Defending AI Research: a collection of essays McCarthy As far as is possible with short essays, the reviews and other items in this book are to the point. I suspect, however, that his targets will largely misunderstand his criticism. (Particularly those critics of AI from a phenomenological/existentialist background, though McCarthy is right: they and other critics rarely cite any papers in AI beyond Turing's speculative piece from 1950.)
Fluid Concepts and Creative Analogies Hofstadter One of the books that critics of AI should actually read and study - and even it is 15 years old!
Gödel, Escher, Bach: An Eternal Golden Braid Hofstadter I could write for hours about this one. Instead, I will simply say: Read it. (The absence of an actual proof of the first incompleteness theorem is the only thing wrong with it, but the other aspects are so delightfully powerful that I reread it fairly often, and they certainly make up for the lack, however important it might seem at first sight.) Last time I reread it I found that Hofstadter thanks my teacher from CMU, Wilfried Sieg, whom I suppose he met when they were both at Stanford.
The Subtlety of Sameness: A Theory and Computer Model of Analogy-Making French Written by one of Hofstadter's students, this is a brief description (no code; mostly architecture on the computational side) of the famous "Tabletop" analogy program from the early 1990s. French does an admirable job of explaining why this seemingly trivial domain has lots of structure and many hidden twists. Some of it is built in, and some, as he says, emerges as the program runs and the subsystems interact with each other. However, I do wonder about the "downward" level, since most of what French concerns himself with is scalability. My concern is that it will be very hard to implement substructure for many of the "parts" of the system. For example, take the "temperature" global property. Temperature as it is understood in physics is an aggregate (or emergent: it doesn't matter which) property of many microproperties. How would one do that computationally? Here I am relying, of course, on an analogy - perhaps this aspect is not fruitfully understood as a mapping. But then we're owed a story of what is computational and what isn't. We both agree (well, Hofstadter and I do - and I'll assume French does as well) that eventually computation bottoms out, so to speak. But with these more abstract models, how does one know one has pursued the computational analysis far enough?