4.6: Analytic Philosophy


It is difficult to give a precise definition of analytic philosophy, since it is not so much a specific doctrine as an overlapping set of approaches to problems. Its 20th-century origin is often attributed to the work of the English philosopher G.E. Moore (1873–1958). In Principia Ethica (1903) Moore argued that the predicate good, which defines the sphere of ethics, is “simple, unanalyzable, and indefinable.” His contention was that many of the difficulties in ethics, and indeed in philosophy generally, arise from an “attempt to answer questions, without first discovering precisely what question it is which you desire to answer.” These questions thus require analysis for their clarification. Philosophers in this tradition generally have agreed with Moore that the purpose of analysis is the clarification of thought. Their varied methods have included the creation of symbolic languages as well as the close examination of ordinary speech, and the objects to be clarified have ranged from concepts to natural laws and from notions that belong to the physical sciences—such as mass, force, and testability—to ordinary terms such as responsibility and see. From its inception, analytic philosophy also has been highly problem-oriented. There is probably no major philosophical problem that its practitioners have failed to address.

The development of analytic philosophy was significantly influenced by the creation of symbolic (or mathematical) logic at the beginning of the century (see formal logic). Although there are anticipations of this kind of logic in the Stoics, its modern forms are without exact parallel in Western thought, a fact that is made apparent by its close affinities with mathematics and science. Many philosophers thus regarded the combination of logic and science as a model that philosophical inquiry should follow, though others rejected the model or minimized its usefulness for dealing with philosophical problems. The 20th century thus witnessed the development of two diverse streams of analysis, one of them emphasizing formal (logical) techniques and the other informal (ordinary-language) ones. There were, of course, many philosophers whose work was influenced by both approaches. Although analysis can in principle be applied to any subject matter, its central focus for most of the century was language, especially the notions of meaning and reference. Ethics, aesthetics, religion, and law also were fields of interest, though to a lesser degree.

In the last quarter of the century there was a profound shift in emphasis from the topics of meaning and reference to issues about the human mind, including the nature of mental processes such as thinking, judging, perceiving, believing, and intending as well as the products or objects of such processes, including representations, meanings, and visual images. At the same time, intensive work continued on the theory of reference, and the results obtained in that domain were transferred to the analysis of mind. Both formalist and informalist approaches exhibited this shift in interest.

 

The formalist tradition

 

Logical atomism

The first major development in the formalist tradition was a metaphysical theory known as logical atomism, which was derived from work in mathematical logic by the English philosopher Bertrand Russell (1872–1970). Russell’s work in turn was based in part on early notebooks written before World War I by his former pupil Ludwig Wittgenstein (1889–1953). In “The Philosophy of Logical Atomism,” a monograph published in 1918–19, Russell gave credit to Wittgenstein for supplying “many of the theories” contained in it. Wittgenstein had joined the Austrian army when the war broke out, and Russell had been out of contact with him ever since. Wittgenstein thus did not become aware of Russell’s version of logical atomism until after the war. Wittgenstein’s polished and very sophisticated version appeared in the Tractatus Logico-Philosophicus, which he wrote during the war but did not publish until 1922.

Both Russell and Wittgenstein believed that mathematical logic could reveal the basic structure of reality, a structure that is hidden beneath the cloak of ordinary language. In their view, the new logic showed that the world is made up of simple, or “atomic,” facts, which in turn are made up of particular objects. Atomic facts are complex mind-independent features of reality, such as the fact that a particular rock is white or the fact that the Moon is a satellite of Earth. As Wittgenstein says in the Tractatus, “The world is determined by the facts, and by their being all the facts.” Both Russell and Wittgenstein held that the basic propositions of logic, which Wittgenstein called “elementary propositions,” refer to atomic facts. There is thus an immediate connection between formal languages, such as the logical system of Russell’s Principia Mathematica (written with Alfred North Whitehead and published between 1910 and 1913), and the structure of the real world: elementary propositions represent atomic facts, which are constituted by particular objects, which are the meanings of logically proper names. Russell differed from Wittgenstein in that he held that the meanings of proper names are “sense data,” or immediate perceptual experiences, rather than particular objects. Further, for Wittgenstein but not for Russell, elementary propositions are connected to the world by being structurally isomorphic to atomic facts—i.e., by being a “picture” of them. Wittgenstein’s view thus came to be known as the “picture theory” of meaning.
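A rough schematic may help to illustrate the intended correspondence. The notation below is modern and merely illustrative; the predicate letters W and S and the names a, m, and e are placeholders introduced here, not symbols drawn from Russell or Wittgenstein.

\[
\underbrace{W(a)}_{\substack{\text{elementary proposition:}\\ \text{``$a$ is white''}}}
\;\longleftrightarrow\;
\underbrace{\text{the fact that $a$ is white}}_{\text{atomic fact}}
\]
\[
\lnot W(a), \qquad W(a) \wedge S(m,e), \qquad W(a) \vee S(m,e)
\qquad \text{(non-elementary propositions: truth-functions of elementary ones)}
\]

On this sketch of the picture theory, the elementary proposition on the left represents the atomic fact on the right by sharing its structure: the name stands for the object, and the arrangement of name and predicate mirrors the arrangement of object and property in the fact.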

Logical atomism rested upon a number of theses. It was realistic, as distinct from idealistic, in its contention that there are mind-independent facts. But it presupposed that language is mind-dependent—i.e., that language would not exist unless there were sentient beings who used sounds and marks to refer and to communicate. Logical atomism was thus a dualistic metaphysics that described both the structure of the world and the conditions that any particular language must satisfy in order to represent it. Although its career was brief, its guiding principle—that philosophy should be scientific and grounded in mathematical logic—was widely acknowledged throughout the century.

Logical positivism

Logical positivism was developed in the early 1920s by a group of Austrian intellectuals, mostly scientists and mathematicians, who named their association the Wiener Kreis (Vienna Circle). The logical positivists accepted the logical atomist conception of philosophy as properly scientific and grounded in mathematical logic. By “scientific,” however, they had in mind the classical empiricism handed down from Locke and Hume, in particular the view that all factual knowledge is based on experience. Unlike the logical atomists, the logical positivists held that only logic, mathematics, and the sciences can make statements that are meaningful, or cognitively significant. They thus regarded metaphysical, religious, ethical, literary, and aesthetic pronouncements as literally nonsense. Significantly, because logical atomism was a metaphysics purporting to convey true information about the structure of reality, it too was disavowed. The positivists also held that there is a fundamental distinction to be made between “analytic” statements (such as “All husbands are married”), which can be known to be true independently of any experience, and “synthetic” statements (such as “It is raining now”), which are knowable only through observation.

The main proponents of logical positivism—Rudolf Carnap, Herbert Feigl, Philipp Frank, and Gustav Bergmann—all emigrated from Germany and Austria to the United States to escape Nazism. Their influence on American philosophy was profound, and, with various modifications, logical positivism was still a vital force on the American scene at the beginning of the 21st century.

Naturalized epistemology

The philosophical psychology and philosophy of mind developed since the 1950s by the American philosopher Willard Van Orman Quine (1908–2000), known generally as naturalized epistemology, was influenced both by Russell’s work in logic and by logical positivism. Quine’s philosophy forms a comprehensive system that is scientistic, empiricist, and behaviourist (see behaviourism). Indeed, for Quine the basic task of an empiricist philosophy is simply to describe how our scientific theories about the world—as well as our prescientific, or intuitive, picture of it—are derived from experience. As he wrote:

 

The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?

Although Quine shared the logical positivists’ scientism and empiricism, he crucially differed from them in rejecting the traditional analytic-synthetic distinction. For Quine this distinction is ill-founded because it is not required by any adequate psychological account of how scientific (or prescientific) theories are formulated. Quine’s views had an enormous impact on analytic philosophy, and until his death at the end of the century, he was generally regarded as the dominant figure in the movement.

Identity theory, functionalism, and eliminative materialism

Logical positivism and naturalized epistemology were forms of materialism. Beginning about 1970, these approaches were applied to the human mind, giving rise to three general viewpoints: identity theory, functionalism, and eliminative materialism. Identity theory is the view that mental states are identical to physical states of the brain. According to functionalism, a particular mental state is any type of (physical) state that plays a certain causal role with respect to other mental and physical states. For example, pain can be functionally defined as any state that is an effect of events such as cuts and burns and that is a cause both of further mental states, such as fear, and of behaviour, such as saying “Ouch!” Eliminative materialism is the view that the familiar categories of “folk psychology”—such as belief, intention, and desire—do not refer to anything real. In other words, there are no such things as beliefs, intentions, or desires; instead, there is simply neural activity in the brain. According to the eliminative materialist, a modern scientific account of the mind no more requires the categories of folk psychology than modern chemistry requires the discarded notion of phlogiston. A complete account of human mental experience can be achieved simply by describing how the brain operates.
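A minimal sketch, offered only by way of analogy, may make the causal-role idea vivid: the Agent class, its attributes, and the simplified causal relations below are invented here for exposition and are not drawn from any particular functionalist account. The point is that the state is specified entirely by its typical causes and effects, not by its physical makeup.

# Illustrative toy rendering of a causal-role ("functional") definition of pain.
# The names and simplified causal relations are invented for exposition only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    in_pain: bool = False
    fearful: bool = False
    utterances: list = field(default_factory=list)

    def suffer(self, event: str) -> None:
        # Typical causes of the pain state: events such as cuts and burns.
        if event in ("cut", "burn"):
            self.in_pain = True
            self._pain_effects()

    def _pain_effects(self) -> None:
        # Typical effects of the pain state: a further mental state (fear)
        # and characteristic behaviour (saying "Ouch!").
        self.fearful = True
        self.utterances.append("Ouch!")

agent = Agent()
agent.suffer("burn")
print(agent.in_pain, agent.fearful, agent.utterances)  # True True ['Ouch!']

On a functionalist reading, whatever internal state occupies this causal role, regardless of how it is physically realized, thereby counts as pain.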

The informalist tradition

Generally speaking, philosophers in the informalist tradition viewed philosophy as an autonomous activity that should acknowledge the importance of logic and science but not treat either or both as models for dealing with conceptual problems. The 20th century witnessed the development of three such approaches, each of which had sustained influence: common-sense philosophy, ordinary-language philosophy, and speech-act theory.

 

Common-sense philosophy

Originating as a reaction against the forms of idealism and skepticism that were prevalent in England at about the turn of the 20th century, the first major work of common-sense philosophy was Moore’s paper “A Defense of Common Sense” (1925). Against skepticism, Moore argued that he and other human beings have known many propositions about the world to be true with certainty. Among these propositions are: “The Earth has existed for many years” and “Many human beings have existed in the past and some still exist.” Because skepticism maintains that nobody knows any proposition to be true, it can be dismissed. Furthermore, because these propositions entail the existence of material objects, idealism, according to which the world is wholly mental, can also be rejected. Moore called this outlook “the common sense view of the world,” and he insisted that any philosophical system whose propositions contravene it can be rejected out of hand without further analysis.

Ordinary-language philosophy

The two major proponents of ordinary-language philosophy were the English philosophers Gilbert Ryle (1900–76) and J.L. Austin (1911–60). Both held, though for different reasons, that philosophical problems frequently arise through a misuse or misunderstanding of ordinary speech. In The Concept of Mind (1949), Ryle argued that the traditional conception of the human mind—that it is an invisible ghostlike entity occupying a physical body—is based on what he called a “category mistake.” The mistake is to interpret the term mind as though it were analogous to the term body and thus to assume that both terms denote entities, one visible (body) and the other invisible (mind). His diagnosis of this error involved an elaborate description of how mental epithets actually work in ordinary speech. To speak of intelligence, for example, is to describe how human beings respond to certain kinds of problematic situations. Despite the behaviourist flavour of his analyses, Ryle insisted that he was not a behaviourist and that he was instead “charting the logical geography” of the mental concepts used in everyday life.

Austin’s emphasis was somewhat different. In a celebrated paper, “A Plea for Excuses” (1956), he explained that the appeal to ordinary language in philosophy should be regarded as the first word but not the last word. That is, one should be sensitive to the nuances of everyday speech in approaching conceptual problems, but in certain circumstances everyday speech can, and should, be augmented by technical concepts. According to the “first-word” principle, because certain distinctions have been drawn in ordinary language for eons—e.g., males from females, friends from enemies, and so forth—one can conclude not only that the drawing of such distinctions is essential to everyday life but also that such distinctions are more than merely verbal. They pick out, or discriminate, actual features of the world. Starting from this principle, Austin dealt with major philosophical difficulties, such as the problem of other minds, the nature of truth, and the nature of responsibility.

Speech-act theory

Austin was also the creator of one of the most-original philosophical theories of the 20th century: speech-act theory. A speech act is an utterance that is grammatically similar to a statement but is neither true nor false, though it is perfectly meaningful. For example, the utterance “I do,” performed in the normal circumstances of marrying, is neither true nor false. It is not a statement but an action—a speech act—the primary effect of which is to complete the marriage ceremony. Similar considerations apply to utterances such as “I christen thee the Queen Elizabeth,” performed in the normal circumstances of christening a ship. Austin called such utterances “performatives” in order to indicate that, in making them, one is not only saying something but also doing something.

The theory of speech acts was, in effect, a profound criticism of the positivist thesis that every meaningful sentence is either true or false. The positivist view, according to Austin, embodies a “descriptive fallacy,” in the sense that it treats the descriptive function of language as primary and more or less ignores other functions. Austin’s account of speech acts was thus a corrective to that tendency.

After Austin’s death in 1960, speech-act theory was deepened and refined by his American student John R. Searle. In The Construction of Social Reality (1995), Searle argued that many social and political institutions are created through speech acts. Money, for example, is created through a declaration by a government to the effect that pieces of paper or metal of a certain manufacture and design are to count as money. Many institutions—such as banks, universities, and police departments—are social entities created through similar speech acts. Searle’s development of speech-act theory was thus an unexpected extension of the philosophy of language into social and political theory.