Keith Douglas' Web Page


Book Influences - Philosophy of Technology

Title / Author / Comment
Bridging the Digital Divide Servon Reports on public technology movements in the United States.
Computing, Philosophy and Cognition Magnani and Dossena (eds.) This is a collection of conference proceedings from ECAP 2004. As expected, the quality of the contributions varies. I would not say any particularly stand out as wonderful or horrible, so the collection is pretty middling. However, some contributions have very little to do with computing and philosophy, despite the provenance of the collection. Also mildly irritating is the rather large number of typos and unrefined uses of English - more or less excusable, given the large number of non-native speakers.
Creations of the Mind: Theories of Artifacts and their Representation Margolis and Laurence (eds.) A collection of 16 papers on the metaphysics, semantics, epistemology, psychology, neuroscience, biology and anthropology of artifacts. Despite consisting primarily of review works, this volume has the feel of kicking off a profitable new area of study. I note with personal interest that many of the investigations would benefit from attention to computational artifacts. I also note that although I have classified this work in the philosophy of technology, none of the authors makes much of the craft/technology distinction. (The terminology I don't expect, but the idea I would have hoped to encounter.)
Cyberphilosophy: The Intersection of Computing and Philosophy Moor and Bynum (eds.)

A collection of 16 papers plus an editor's introduction makes for an eclectic but wide-ranging survey of topics on the obvious subject. Since the volume first came out almost 10 years ago, one interesting exercise is to look back at the topics which were "new at the time" and see where they went. John Sullins' recent lecture on the future of robot ethics is a case in point: what was a tiny field is now a subject of its own - so much so that John seems to have forgotten some of the early history, at least in the lecture. On the other hand, I have heard Luciano Floridi try to "sell" the philosophy of information several times, as well as read his monograph on the subject, and the arguments for lumping all the notions of information together still don't impress me - it seems they haven't changed much. One area whose fate I wonder about is the mixed logic systems described by van den Hoven and Lokhorst and their computational implementation. My friend Audrey Yap has (near as I can tell) done work on the former aspect, but as far as I know she has not implemented the material computationally. A topic for the next ten years?

In addition to the "looking forward, looking backward" theme, there is also interest in picking out the most startling material for its surprise value. One example is Grim's work on partial truth (using fuzzy logic); his research group has discovered chaotic liar sentences. I wonder whether his results hold if a Bunge-style theory of partial truth is used? A second, and final for this review, startling finding to me (which I must have missed while at CMU, alas) was that extra variables beyond 2 make causal discovery easier, not harder - or at least no harder. Scheines' piece more generally is valuable to philosophers because it enumerates some of the hypotheses (up to the time of writing of the paper) which allow inferring causation from statistical data.
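The chaotic liar idea can be sketched in a few lines of Python. This is my own reconstruction of the usual presentation (a fuzzy-valued sentence along the lines of "this sentence is as true as it is estimated to be false", whose truth estimate is repeatedly revised), not Grim's code; the function names are mine.

```python
# Sketch of "chaotic liar" revision dynamics (my reconstruction).
# A sentence claims its own truth value equals 1 - x, where x is our
# current estimate; the revised estimate measures how close the old
# estimate came to that claim: x' = 1 - |x - (1 - x)| = 1 - |2x - 1|.
# This is the tent map, whose typical orbits never settle down.

def revise(x: float) -> float:
    """One revision step: closeness of estimate x to the claimed value 1 - x."""
    return 1.0 - abs(x - (1.0 - x))

def orbit(x0: float, steps: int) -> list:
    """Iterate the revision map from an initial estimate x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(revise(xs[-1]))
    return xs

# x = 2/3 is a fixed point: 1 - |2*(2/3) - 1| = 2/3 exactly.
# Some rational starts fall into short cycles, while typical starting
# values wander without repeating - the "chaotic" behavior in question.
print(orbit(0.25, 3))   # quickly hits 0.5, then 1.0, then 0.0
print(orbit(0.123, 8))  # a wandering orbit
```

The interest for partial truth is that a single self-referential sentence, evaluated with fuzzy semantics, generates a genuinely chaotic dynamical system rather than the simple true/false oscillation of the classical liar.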

Digital Democracy: Discourse and Decision Making in the Information Age Hague and Loader (eds.) A collection of papers about public engagement with the democratic process via information and communication technologies. Published in 1999, so very dated in some respects at the time of writing of this minireview (2012). Worse, some papers are rather uninformed technologically. For example, an otherwise thoughtful paper on USENET is vitiated by a lack of understanding of how it functions, including group charters and news server subscription policies. However, many of the papers have redeeming value as to the questions raised, if not the answers offered, so the volume is not without use.
Digital Divide: Civic Engagement, Information Poverty, and the Internet Worldwide Norris An investigation into the "civil state of the Internet" as of c. 2000. Includes analysis of e-democracy, political affiliations of users, the popularity of the Internet in various countries, etc.
How to Read a Paper: The Basics of Evidence-Based Medicine Greenhalgh A brief guide to understanding papers in various EBM styles (simple intervention reports, complex IRs, reports on diagnostic or screening techniques, etc.) as well as to conducting literature searches, understanding statistics, etc. All very clear and well written except for what can be regarded as the fundamental oversight, and the hardest part of any scientific or technological area to understand: the role of background knowledge. Some of this is discussed, of course, but not to any great extent. For example, clinicians (and anyone who has done secondary school level chemistry) should regard homeopathy as so ridiculous as not even to be worth investigating. Why do we say this? We have background knowledge which tells us (sometimes, in some areas) that there are just things we wouldn't bother doing. I think everyone would agree that a mixture of petroleum jelly, battery acid, cloves and cow blood is unlikely to be worth trying as a salve for colic, right? How do you rule that out? You have some idea of how the world works, and certain hypotheses just don't mesh - until further notice. (In the case of homeopathy, the background which would have to change is so substantial one could call it a "several Nobel prize" problem.) This dismissive attitude is justified, and yet it is poorly understood by the layman, and even by scientists and technologists themselves, because it involves tacit knowledge, judgement, etc., which have been poorly studied and perhaps are never fully articulable anyway. (This is not to say that there aren't grey areas where there might be legitimate disagreement over what counts as ruled out by the background, because, of course, people have different backgrounds.)
Human Values and the Design of Computer Technology Friedman (ed.) This volume is a little collection of papers on the ethics of computing.
Internet: a Philosophical Inquiry Graham A little book about the philosophical and cultural implications of the Internet. A little too banal in its conclusions, unfortunately. A deeper analysis of some of the themes (e.g. linking them with the social epistemology literature) might have helped.
Life on the Screen: Identity in the Age of the Internet Turkle If one overlooks (or reads past) all the psychoanalysis, postmodernism and minor errors concerning various technical matters (e.g. object oriented programming, the Church-Turing thesis, etc.), and remembers also that this book is 12 years old, then there's a fair bit in here worth considering, such as the relationship between real sex and various forms of netsex, etc.
Microelectronics and Society Friedrichs and Schaff I picked up this now 23-year-old book to see how people many years ago were viewing what is now the present. It is quite interesting to see what has and hasn't come to pass. An example of the first: increased use of "home" computers. An example of the second: widespread voice recognition systems.
Of Men and Machines Lewis Collection of essays and fiction on machine-human relations.
Moral Machines: Teaching Robots Right from Wrong Wallach and Allen

A well-written extended introduction to the question of "robot ethics". However, after reading it I feel the authors barely skimmed the surface of this important topic. For example, I would have liked to see more specifically computational questions addressed, as well as the question of the "total situation": humans can change subject easily; barring total AI, bots cannot. Moreover, global situations are often necessary for certain moral judgements. Warbots, for example, if they are to be ethical according to some, have to be able (like human soldiers) to refuse illegal orders. Since, at least according to the precedent at Nuremberg, the conflict's origin can determine whether orders are illegal, this would require making such information available to one's warbot. (Incidentally, there is also a mistake in attributing the idea of "universal moral grammar" to Rawls: Chomsky discussed this in the 1960s, before A Theory of Justice appeared.)

Interested parties may also profitably read Peter Danielson's review of this book.

Philosophy and Technology Fellows A volume of papers on the philosophy of technology, though at least one paper (by Nancy Cartwright) does not quite fit the subject, even if it is quite informative in its own right.
Psychiatry in the Scientific Image Murphy This book is about an important psychotechnology (though Murphy does not use that term; see my papers page for details). Murphy defends psychiatry from those who would remove it from its scientific roots, but at the same time suggests that its ubiquitous manual, the DSM, needs drastic conceptual revision, even on its own terms. The gist of the criticism centers around the idea that classification should be lawful (in Bunge's sense of laws) - causal, in his terminology. A few minor complaints mar this otherwise stimulating book. One, there are several jarring typos and related matters. Two, (utopianistically speaking) it would have been interesting to see whether treating psychiatry as a technology could help solve some of the author's worries about normativity, etc. Three, some of the discussion centers around the notion of "function", which has a substantial philosophical literature barely touched here. Finally, his only response to the antiscientific critics is that we talk as if they are wrong - a remarkably feeble defense.
Regulating Toxic Substances: A Philosophy of Science and the Law Cranor Cranor's thin book is an exploration of a topic which needs more attention from philosophers of science and technology, namely risk. His book has convinced me that risk management is indeed a technology, despite the insistence that it is a special sort of "normative science". I am sympathetic to a thorough investigation of how science informs risk, and carcinogens - the book's focus - are certainly of some concern. However, the book's thesis, from the science and technology perspective, is laudable at first glance but odd upon reflection. The book recommends that we (as a society) adopt methods which will decrease false negatives and increase the number of substances processable by regulatory agencies. Unfortunately, the author does not address the (to me) far more basic question of which substances to even bother testing. I would be sympathetic to the approach given if we had developed (and were willing to use) strong theories in the relevant basic areas so that not everything need be tested. Moreover, the question of "degree" of risk in terms of sorts of cancer, etc. is not addressed at all, and certainly would have to be if other potentially harmful effects are to be examined using the policies proposed. Finally, I think merely dismissing some approaches (as the author does) because they will not be understood by risk managers is a bad approach. Surely the idea would be to at least try to give such people the relevant training, or require the relevant background of them, not "hack around" these limitations. On the good side, however, the book is clearly written and introduces much of the necessary background to understand the debate. Most important of the book's strengths are its many references and its willingness to address a novel and interesting topic. I note in passing that the author, and hence the legal and regulatory scene, is American.
Robot Ethics: The Ethical and Social Implications of Robotics Lin, Abney and Bekey (eds.) 22 papers, plus introductory pieces for the various sections of the book, make for a good introductory reader on the state of the art (as of 2012) in robot ethics. This burgeoning field includes both the implementation of ethics in various sorts of 'bots and the ethics of the roboticists who design and build them. Given that the most controversial areas surround warbots and carebots, many of the papers focus on these particular uses of robotics, so there is some redundancy. Perhaps good, perhaps bad: there is very little wide disagreement, and the differences that exist are hard to adjudicate. One can hope that many more volumes on this important topic are to come.
The Digital Phoenix: How Computers are Changing Philosophy Bynum and Moor (eds.) I regard this as one of the starting points for anyone interested in how philosophy is affected by computing. Both the philosophy of computing and computational methods and approaches in philosophy are represented; included in the latter are papers on ethics, aesthetics, metaphysics, philosophy of mind, history of philosophy, etc. Some papers strike me as too brief, but they do provide many further references to follow up on. The highlight of the volume is Jim Fetzer's neat little piece on program verification. (For once I agree with him!) A warning: this edition does not include a working index ...
The Philosophy of Artificial Intelligence Boden (ed.) 14 classic papers plus 1 new one (at original publication time), with an editor's introduction and extended bibliography, make for an adequate overview of the field, as befits a volume in the "Oxford Readings in Philosophy" series. I do have a minor selection complaint, though: the piece by Paul Churchland ("Some Reductive Strategies in Cognitive Neurobiology"), while a classic, is not exactly about AI.
Thinking through Technology: The Path between Engineering and Philosophy Mitcham See my amazon.com review.
What Computers Can't Do: The Limits of Artificial Intelligence (Revised Edition) Dreyfus Dreyfus has been criticizing AI and the "computational theory of mind" (which, like many critics and partisans of such approaches, he conflates) for decades now. This is one of his intermediate books (from 1979) on the subject. Exasperatingly, the book can be read as having three distinct parts. One is a critique of the excesses of the early period of computing and AI specifically, especially that of the Simon school. Second is a brief discussion of where exactly they (supposedly) go wrong. Third, and smallest of all (because not as on topic), is Dreyfus' own view of mentality, etc. The first of these three is adequate as far as it goes, though Dreyfus never analyzes any programs much beyond what their (admittedly, in some cases, ridiculously pompous) creators say about them. The second is not adequate as it stands. Dreyfus' arguments on the "why impossible" are often very weak. He asserts again and again that the way to develop any computer program is simply to program lists of facts or the like. This is simply wrong, especially now. At the time it might have seemed plausible, to be fair, and I am, admittedly, reviewing with hindsight. He realizes this in part when he admits (and then takes back) that the ideas he presents concern only building a fully "adult" AI and not one of any other sort. He just says (perhaps rightly, perhaps not) that we have no knowledge of learning of the right sort to make anything else even conceivable. As for the third part, when asked "how do we do it?", he, like the phenomenologists he draws upon (including Wittgenstein, who is in a way one, in my view), simply describes the difficulties, such as our extensive sociality. He thinks there's our brain and then what we do, with a complete (perennial) mystery in the middle. But even there he is not quite consistent.
He makes a point of appealing to some very dubious (especially in hindsight) "holographic" models of brain functioning, claiming they have the right features to go along with his synthesis of the gestalt school and phenomenologists like Husserl, Heidegger and Merleau-Ponty. It is also interesting, finally, to see how his predictions fared. Chess, he claimed, despite not being in his "hardest" category, would never be successfully AI'ed. This, for all my disappointment as to how it was done, has in fact now been done (and, at the time I write this in 2011, for approximately 15 years). His description of chess as uncomputable (even if he later says practically uncomputable) perhaps misled him. To be fair, there has been substantially less progress on his type IV problems, but not zero either. Fault tolerance is one area which is substantially worked upon: we now have "self-healing" file systems, for example (yes, that's a metaphor, but not an entirely inappropriate one).


Finished with this section? Go back to the list of book subjects here.