December 8th, 2016
Much of the technical terminology of computer science betrays its logical heritage: ‘language’, ‘symbol’, ‘syntax’, ‘semantics’, ‘value’, ‘reference’, ‘identifier’, ‘data’, etc. Classically, such terms were used to name essential phenomena underlying logic, human thought and language — phenomena, it was widely believed, that would never succumb to scientific (causal, mechanical) explanation. Computer science, however, now uses all these terms in perfectly good scientific ways, to name respectable scientific (causally explicable, mathematically modellable) phenomena.
There are two possibilities. The first is that computer science has given us a scientific understanding of the fundamental mysteries of language, logic, and mind. The second is that computer science has redefined these words, so that, although they have been brought into the realm of the scientific, they no longer refer to what they used to refer to. Most people believe the former. I will argue for the latter: that, for reasons traceable back to Turing’s 1936-7 paper, computer science has redefined these terms in such a way as to “disappear” much of what is fundamental to the human condition: language’s long-distance reach, the “non-effectiveness” of truth and reference, thought’s normative deference to the world.
The result, I believe, not only challenges prospects for Artificial Intelligence and cognitive science, but also limits our ability to understand databases, knowledge representation, even programs. It also hinders communication, because overlapping technical vocabulary means different things in different communities. Most seriously, it undermines our ability to talk about the most fundamental aspects of semantic or symbolic systems.
Brian Cantwell Smith is Professor of Information, Philosophy, and Computer Science at the University of Toronto. His primary appointment is in the Faculty of Information, where he served as Dean from 2003 to 2008, and where he held a Canada Research Chair in the Foundations of Information. He also teaches in the University’s Cognitive Science Program and Philosophy Department, is a senior fellow at Massey College, and is a member of the Research Council of the Canadian Institute for Advanced Research.
Dr. Smith received his B.S., M.S., and Ph.D. from the Massachusetts Institute of Technology in Computer Science and Artificial Intelligence. In the 1980s and 1990s he held senior research and administrative positions at the Xerox Palo Alto Research Center (PARC) in California, was an adjunct professor in the Philosophy and Computer Science departments at Stanford University, was a founder and principal investigator of the Stanford-based Center for the Study of Language and Information (CSLI), and was a founder and first President of Computer Professionals for Social Responsibility (CPSR). In 1996 he moved to Indiana University Bloomington as professor of cognitive science, computer science, philosophy, and informatics, where he was also a fellow of the Center for Social Informatics in the School of Library and Information Sciences. From 2001 to 2003 he held the Kimberly J. Jenkins University Professorship of Philosophy and New Technologies at Duke University, with appointments in Philosophy and Computer Science.
In the 1980s Dr. Smith developed the world’s first reflective programming language (3-Lisp). His present research focuses on the conceptual foundations of computation and information, and on new forms of metaphysics, ontology, and epistemology. He is the author of *On the Origin of Objects* (MIT, 1996). Two volumes of papers, entitled *Indiscrete Affairs*, are forthcoming; a multi-volume series entitled “The Age of Significance: An Essay on the Origins of Computation and Intentionality” is also in preparation.