Epistemological pluralism

Epistemological pluralism is a term used in philosophy, economics, and virtually any field of study to refer to different ways of knowing things, different epistemological methodologies for attaining a fuller description of a particular field. A particular form of epistemological pluralism is dualism, for example, the separation of methods for investigating mind from those appropriate to matter (see mind–body problem). By contrast, monism is the restriction to a single approach, for example, reductionism, which asserts that the study of all phenomena can be reduced to finding relations to a few basic entities.
Epistemological pluralism is to be distinguished from ontological pluralism, the study of different modes of being, for example, the contrast in the mode of existence exhibited by "numbers" with that of "people" or "cars".
In the philosophy of science, epistemological pluralism arose in opposition to reductionism to express the contrary view that at least some natural phenomena cannot be fully explained by a single theory or fully investigated using a single approach.
In mathematics, the variety of possible epistemological approaches includes platonism ("mathematics as an objective study of abstract reality, no more created by human thought than the galaxies"), radical constructivism (which restricts logic, banning proof by reductio ad absurdum, among other limitations), and many other schools of thought.
In economics, controversy exists between proponents of a single epistemological approach to economics and proponents of a variety of approaches. "At midcentury, the neoclassical approach achieved near-hegemonic status (at least in the United States), and its proponents sought to bring all kinds of social phenomena under its uniform explanatory umbrella. The resistance of some phenomena to neoclassical treatment has led a number of economists to think that alternative approaches are necessary for at least some phenomena and thus also to advocate pluralism." An extensive history of these attempts is provided by Sent.
See also
Epistemological anarchism
Interdisciplinarity
Methodological individualism
Multimethodology
Trichotomy (philosophy)
VPEC-T
Bayesian epistemology

Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics, but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved.
Relation to traditional epistemology
Traditional epistemology and Bayesian epistemology are both forms of epistemology, but they differ in various respects, for example, concerning their methodology, their interpretation of belief, the role justification or confirmation plays in them and some of their research interests. Traditional epistemology focuses on topics such as the analysis of the nature of knowledge, usually in terms of justified true beliefs, the sources of knowledge, like perception or testimony, the structure of a body of knowledge, for example in the form of foundationalism or coherentism, and the problem of philosophical skepticism or the question of whether knowledge is possible at all. These inquiries are usually based on epistemic intuitions and regard beliefs as either present or absent. Bayesian epistemology, on the other hand, works by formalizing concepts and problems, which are often vague in the traditional approach. It thereby focuses more on mathematical intuitions and promises a higher degree of precision. It sees belief as a continuous phenomenon that comes in various degrees, so-called credences. Some Bayesians have even suggested that the regular notion of belief should be abandoned. But there are also proposals to connect the two, for example, the Lockean thesis, which defines belief as credence above a certain threshold. Justification plays a central role in traditional epistemology while Bayesians have focused on the related notions of confirmation and disconfirmation through evidence. The notion of evidence is important for both approaches but only the traditional approach has been interested in studying the sources of evidence, like perception and memory. Bayesianism, on the other hand, has focused on the role of evidence for rationality: how someone's credence should be adjusted upon receiving new evidence. There is an analogy between the Bayesian norms of rationality in terms of probabilistic laws and the traditional norms of rationality in terms of deductive consistency. Certain traditional problems, like the topic of skepticism about our knowledge of the external world, are difficult to express in Bayesian terms.
Fundamentals
Bayesian epistemology is based only on a few fundamental principles, which can be used to define various other notions and can be applied to many topics in epistemology. At their core, these principles constitute constraints on how we should assign credences to propositions. They determine what an ideally rational agent would believe. The basic principles can be divided into synchronic or static principles, which govern how credences are to be assigned at any moment, and diachronic or dynamic principles, which determine how the agent should change their beliefs upon receiving new evidence. The axioms of probability and the principal principle belong to the static principles while the principle of conditionalization governs the dynamic aspects as a form of probabilistic inference. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. This test for determining irrationality has been referred to as the "pragmatic self-defeat test".
Beliefs, probability and bets
One important difference to traditional epistemology is that Bayesian epistemology focuses not on the notion of simple belief but on the notion of degrees of belief, so-called credences. This approach tries to capture the idea of certainty: we believe in all kinds of claims but we are more certain about some, like that the earth is round, than about others, like that Plato was the author of the First Alcibiades. These degrees come in values between 0 and 1. A degree of 1 implies that a claim is completely accepted. A degree of 0, on the other hand, corresponds to full disbelief. This means that the claim is fully rejected and the person firmly believes the opposite claim. A degree of 0.5 corresponds to suspension of belief, meaning that the person has not yet made up their mind: they have no opinion either way and thus neither accept nor reject the claim. According to the Bayesian interpretation of probability, credences stand for subjective probabilities. Following Frank P. Ramsey, they are interpreted in terms of the willingness to bet money on a claim. So having a credence of 0.8 (i.e. 80%) that your favorite soccer team will win the next game would mean being willing to bet up to four dollars for the chance to make one dollar of profit. This account draws a tight connection between Bayesian epistemology and decision theory. It might seem that betting behavior is only one special area and as such not suited for defining such a general notion as credences. But, as Ramsey argues, we bet all the time when this is understood in the widest sense. For example, in going to the train station, we bet on the train being there on time, otherwise we would have stayed at home. It follows from the interpretation of credence in terms of willingness to make bets that it would be irrational to ascribe a credence of 0 or 1 to any proposition, except for contradictions and tautologies. The reason for this is that ascribing these extreme values would mean that one would be willing to bet anything, including one's life, even if the payoff were minimal. Another negative side effect of such extreme credences is that they are permanently fixed and cannot be updated upon acquiring new evidence.
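This betting interpretation can be made concrete with a small calculation. The following is a minimal sketch in Python, not from the source literature; the function name and the dollar figures are illustrative, and the standard identification of a credence p with fair betting odds of p : (1 - p) is assumed:

```python
def max_stake(credence: float, profit: float) -> float:
    """Largest stake an agent with this credence should be willing to
    risk on a bet that yields `profit` on top of the stake if won.
    Fair odds for credence p are p : (1 - p), so the acceptable
    stake-to-profit ratio is p / (1 - p)."""
    if not 0 < credence < 1:
        raise ValueError("extreme credences give no finite betting quotient")
    return profit * credence / (1 - credence)

# Credence of 0.8 that your team wins: bet up to $4 to make $1 profit.
print(max_stake(0.8, 1.0))  # ~4.0 (i.e. $4)
```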
This central tenet of Bayesianism, that credences are interpreted as subjective probabilities and are therefore governed by the norms of probability, has been referred to as probabilism. These norms express the nature of the credences of ideally rational agents. They do not put demands on what credence we should have in any single given belief, for example, whether it will rain tomorrow. Instead, they constrain the system of beliefs as a whole. For example, if your credence that it will rain tomorrow is 0.8 then your credence in the opposite proposition, i.e. that it will not rain tomorrow, should be 0.2, not 0.1 or 0.5. According to Stephan Hartmann and Jan Sprenger, the axioms of probability can be expressed through the following two laws: (1) P(T) = 1 for any tautology T; (2) for incompatible (mutually exclusive) propositions A and B, P(A ∨ B) = P(A) + P(B).
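These two constraints can be checked mechanically for a credence assignment over a small set of possible worlds. A minimal illustrative sketch; modeling propositions as sets of worlds, and all names and numbers here, are assumptions for illustration, not part of the source:

```python
def satisfies_probabilism(credence: dict, worlds: frozenset) -> bool:
    """Check a credence function, defined over propositions modeled as
    frozen sets of possible worlds, against the two constraints:
    (1) the tautology (the set of all worlds) gets credence 1;
    (2) credences are additive over incompatible (disjoint) propositions."""
    tol = 1e-9
    if abs(credence.get(worlds, 0.0) - 1.0) > tol:
        return False
    props = list(credence)
    for a in props:
        for b in props:
            if a & b:   # propositions overlap, so additivity does not apply
                continue
            union = a | b
            if union in credence and abs(credence[union] - (credence[a] + credence[b])) > tol:
                return False
    return True

worlds = frozenset({"rain", "no_rain"})
coherent = {frozenset({"rain"}): 0.8, frozenset({"no_rain"}): 0.2, worlds: 1.0}
incoherent = {frozenset({"rain"}): 0.8, frozenset({"no_rain"}): 0.1, worlds: 1.0}
print(satisfies_probabilism(coherent, worlds))    # True
print(satisfies_probabilism(incoherent, worlds))  # False: 0.8 + 0.1 != 1
```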
Another important Bayesian principle for degrees of belief is the principal principle, due to David Lewis. It states that our knowledge of objective probabilities should correspond to our subjective probabilities in the form of credences. So if you know that the objective chance of a coin landing heads is 50% then your credence that the coin will land heads should be 0.5.
The axioms of probability together with the principal principle determine the static or synchronic aspect of rationality: what an agent's beliefs should be like when considering only a single moment. But rationality also involves a dynamic or diachronic aspect, which comes into play when changing one's credences upon being confronted with new evidence. This aspect is determined by the principle of conditionalization.
Principle of conditionalization
The principle of conditionalization governs how the agent's credence in a hypothesis should change upon receiving new evidence for or against this hypothesis. As such, it expresses the dynamic aspect of how ideal rational agents would behave. It is based on the notion of conditional probability, which is the measure of the probability that one event occurs given that another event has already occurred. The unconditional probability that A will occur is usually expressed as P(A) while the conditional probability that A will occur given that B has already occurred is written as P(A | B). For example, the probability of flipping a coin two times and the coin landing heads two times is only 25%. But the conditional probability of this occurring given that the coin has landed heads on the first flip is then 50%. The principle of conditionalization applies this idea to credences: we should change our credence that the coin will land heads two times upon receiving evidence that it has already landed heads on the first flip. The probability assigned to the hypothesis before the event is called prior probability. The probability afterward is called posterior probability. According to the simple principle of conditionalization, this can be expressed in the following way:
P_new(H) = P(H | E) = P(H ∧ E) / P(E). So the posterior probability that the hypothesis is true is equal to the conditional prior probability that the hypothesis is true relative to the evidence, which is equal to the prior probability that both the hypothesis and the evidence are true, divided by the prior probability that the evidence is true. The original expression of this principle, referred to as Bayes' theorem, can be directly deduced from this formulation.
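The coin example can be verified directly with the formula just given. A minimal illustrative sketch (the function name and structure are assumptions):

```python
def conditionalize(p_h_and_e: float, p_e: float) -> float:
    """Simple conditionalization: the posterior credence in H is the
    prior conditional probability P(H | E) = P(H and E) / P(E)."""
    return p_h_and_e / p_e

# Two fair coin flips. H = "both land heads", E = "the first lands heads".
# H entails E, so P(H and E) = P(H) = 0.25, and P(E) = 0.5.
print(conditionalize(0.25, 0.5))  # 0.5, as in the example above
```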
The simple principle of conditionalization makes the assumption that our credence in the acquired evidence, i.e. its posterior probability, is 1, which is unrealistic. For example, scientists sometimes need to discard previously accepted evidence upon making new discoveries, which would be impossible if the corresponding credence was 1. An alternative form of conditionalization, proposed by Richard Jeffrey, adjusts the formula to take the probability of the evidence into account: P_new(H) = P(H | E) · P_new(E) + P(H | ¬E) · P_new(¬E).
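Jeffrey's rule can likewise be written out directly. A minimal sketch under the same illustrative assumptions; note that setting the new credence in the evidence to 1 recovers simple conditionalization:

```python
def jeffrey_conditionalize(p_h_given_e: float, p_h_given_not_e: float,
                           p_new_e: float) -> float:
    """Jeffrey conditionalization: weight the conditional priors by the
    new, possibly uncertain, credence in the evidence:
    P_new(H) = P(H | E) * P_new(E) + P(H | not-E) * P_new(not-E)."""
    return p_h_given_e * p_new_e + p_h_given_not_e * (1 - p_new_e)

# Same coin example: P(H | E) = 0.5 and P(H | not-E) = 0.
print(jeffrey_conditionalize(0.5, 0.0, 1.0))  # 0.5 (certain evidence)
print(jeffrey_conditionalize(0.5, 0.0, 0.9))  # 0.45 (slightly uncertain evidence)
```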
Dutch books
A Dutch book is a series of bets that necessarily results in a loss. An agent is vulnerable to a Dutch book if their credences violate the laws of probability. This can happen either in synchronic cases, in which the conflict is between beliefs held at the same time, or in diachronic cases, in which the agent does not respond properly to new evidence. In the simplest synchronic case, only two credences are involved: the credence in a proposition and in its negation. The laws of probability hold that these two credences together should amount to 1 since either the proposition or its negation is true. Agents who violate this law are vulnerable to a synchronic Dutch book. For example, given the proposition that it will rain tomorrow, suppose that an agent's degree of belief that it is true is 0.51 and the degree that it is false is also 0.51. In this case, the agent would be willing to accept two bets at $0.51 for the chance to win $1: one that it will rain and another that it will not rain. The two bets together cost $1.02, resulting in a loss of $0.02, no matter whether it rains or not. The principle behind diachronic Dutch books is the same, but they are more complicated since they involve making bets before and after receiving new evidence and have to take into account that there is a loss in each case no matter how the evidence turns out.
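The synchronic example above can be tabulated. A minimal sketch (the stake and payout figures are illustrative) showing that the agent loses $0.02 however the weather turns out:

```python
def dutch_book_loss(credence_p: float, credence_not_p: float,
                    payout: float = 1.0) -> float:
    """Guaranteed loss from buying both bets at the agent's prices.
    Each bet costs credence * payout and returns `payout` if its
    proposition is true; exactly one of P and not-P is true, so
    exactly one bet pays out, whatever happens."""
    total_cost = (credence_p + credence_not_p) * payout
    return total_cost - payout  # positive whenever the credences sum to more than 1

print(dutch_book_loss(0.51, 0.51))  # ~0.02: lost whether or not it rains
```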
There are different interpretations about what it means that an agent is vulnerable to a Dutch book. On the traditional interpretation, such a vulnerability reveals that the agent is irrational since they would willingly engage in behavior that is not in their best self-interest. One problem with this interpretation is that it assumes logical omniscience as a requirement for rationality, which is problematic especially in complicated diachronic cases. An alternative interpretation uses Dutch books as "a kind of heuristic for determining when one's degrees of belief have the potential to be pragmatically self-defeating". This interpretation is compatible with holding a more realistic view of rationality in the face of human limitations.
Dutch books are closely related to the axioms of probability. The Dutch book theorem holds that credence assignments that violate the axioms of probability are vulnerable to Dutch books. The converse Dutch book theorem states that no credence assignment following these axioms is vulnerable to a Dutch book.
Applications
Confirmation theory
In the philosophy of science, confirmation refers to the relation between a piece of evidence and a hypothesis confirmed by it. Confirmation theory is the study of confirmation and disconfirmation: how scientific hypotheses are supported or refuted by evidence. Bayesian confirmation theory provides a model of confirmation based on the principle of conditionalization. A piece of evidence confirms a theory if the conditional probability of that theory relative to the evidence is higher than the unconditional probability of the theory by itself. Expressed formally: P(H | E) > P(H). If the evidence lowers the probability of the hypothesis then it disconfirms it. Scientists are usually not just interested in whether a piece of evidence supports a theory but also in how much support it provides. There are different ways in which this degree can be determined. The simplest version just measures the difference between the conditional probability of the hypothesis relative to the evidence and the unconditional probability of the hypothesis, i.e. the degree of support is P(H | E) - P(H). The problem with measuring this degree is that it depends on how certain the theory already is prior to receiving the evidence. So if a scientist is already very certain that a theory is true then one further piece of evidence will not affect her credence much, even if the evidence is very strong. There are other constraints for how an evidence measure should behave; for example, surprising evidence, i.e. evidence that had a low probability on its own, should provide more support. Scientists are often faced with the problem of having to decide between two competing theories. In such cases, the interest is not so much in absolute confirmation, or how much a new piece of evidence would support this or that theory, but in relative confirmation, i.e. in which theory the new evidence supports more.
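The difference measure just described is easy to state in code. A minimal sketch with purely illustrative numbers, showing how evidence barely moves a credence that is already near certainty:

```python
def degree_of_support(p_h_given_e: float, p_h: float) -> float:
    """Difference measure of confirmation: d(H, E) = P(H | E) - P(H).
    Positive values confirm H; negative values disconfirm it."""
    return p_h_given_e - p_h

# The same kind of update matters much less for an agent who is
# already very confident in the hypothesis (numbers illustrative).
print(degree_of_support(0.70, 0.50))  # ~0.20
print(degree_of_support(0.99, 0.98))  # ~0.01
```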
A well-known problem in confirmation theory is Carl Gustav Hempel's raven paradox. Hempel starts by pointing out that seeing a black raven counts as evidence for the hypothesis that all ravens are black while seeing a green apple is usually not taken to be evidence for or against this hypothesis. The paradox consists in the consideration that the hypothesis "all ravens are black" is logically equivalent to the hypothesis "if something is not black, then it is not a raven". So since seeing a green apple counts as evidence for the second hypothesis, it should also count as evidence for the first one. Bayesianism allows that seeing a green apple does support the raven hypothesis while explaining why we initially intuit otherwise. This result is reached if we assume that seeing a green apple provides minimal but still positive support for the raven hypothesis while spotting a black raven provides significantly more support.
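This resolution can be illustrated with a toy model. In the sketch below, every object count and the 0.5 prior are stipulated purely for illustration; the point is only that, by Bayes' theorem, drawing a black raven raises the probability of the hypothesis substantially while drawing a green apple raises it minutely:

```python
# Stipulated object counts under H ("all ravens are black") and under
# its negation; all numbers here are illustrative assumptions.
counts_h = {"black_raven": 10, "nonblack_raven": 0,
            "black_other": 100, "nonblack_other": 890}
counts_not_h = {"black_raven": 5, "nonblack_raven": 6,
                "black_other": 100, "nonblack_other": 889}

def posterior(observation: str, prior_h: float = 0.5) -> float:
    """P(H | a randomly drawn object is of this kind), by Bayes' theorem."""
    p_obs_h = counts_h[observation] / sum(counts_h.values())
    p_obs_not_h = counts_not_h[observation] / sum(counts_not_h.values())
    p_obs = p_obs_h * prior_h + p_obs_not_h * (1 - prior_h)
    return p_obs_h * prior_h / p_obs

print(posterior("black_raven"))     # ~0.667: substantial confirmation
print(posterior("nonblack_other"))  # ~0.5003: minimal confirmation
```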
Coherence
Coherence plays a central role in various epistemological theories, for example, in the coherence theory of truth or in the coherence theory of justification. It is often assumed that sets of beliefs are more likely to be true if they are coherent than otherwise. For example, we would be more likely to trust a detective who can connect all the pieces of evidence into a coherent story. But there is no general agreement as to how coherence is to be defined. Bayesianism has been applied to this field by suggesting precise definitions of coherence in terms of probability, which can then be employed to tackle other problems surrounding coherence. One such definition was proposed by Tomoji Shogenji, who suggests that the coherence between two beliefs is equal to the probability of their conjunction divided by the product of the probabilities of each by itself, i.e. C(A, B) = P(A ∧ B) / (P(A) · P(B)). Intuitively, this measures how likely it is that the two beliefs are true at the same time, compared to how likely this would be if they were neutrally related to each other. The coherence is high if the two beliefs are relevant to each other. Coherence defined this way is relative to a credence assignment. This means that two propositions may have a high coherence for one agent and a low coherence for another agent due to differences in the agents' prior probabilities.
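Shogenji's measure can be computed directly from the three probabilities involved. A minimal sketch with illustrative numbers:

```python
def shogenji_coherence(p_a: float, p_b: float, p_a_and_b: float) -> float:
    """Shogenji's measure: C(A, B) = P(A and B) / (P(A) * P(B)).
    A value of 1 means the propositions are neutrally related
    (independent); above 1 they cohere; below 1 they are incoherent."""
    return p_a_and_b / (p_a * p_b)

# Illustrative numbers: pieces of evidence that tend to be true
# together cohere; independent pieces score exactly 1.
print(shogenji_coherence(0.4, 0.5, 0.3))  # ~1.5 (coherent)
print(shogenji_coherence(0.4, 0.5, 0.2))  # ~1.0 (neutral)
```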
Social epistemology
Social epistemology studies the relevance of social factors for knowledge. In the field of science, for example, this is relevant since individual scientists have to place their trust in some claimed discoveries of other scientists in order to progress. The Bayesian approach can be applied to various topics in social epistemology. For example, probabilistic reasoning can be used in the field of testimony to evaluate how reliable a given report is. In this way, it can be formally shown that witness reports that are probabilistically independent of each other provide more support than otherwise. Another topic in social epistemology concerns the question of how to aggregate the beliefs of the individuals within a group to arrive at the belief of the group as a whole. Bayesianism approaches this problem by aggregating the probability assignments of the different individuals.
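The claim about independent witness reports can be illustrated with a standard odds calculation. The following sketch assumes a simple model, not drawn from the source, in which each witness independently asserts the claim with probability r if it is true and 1 - r if it is false:

```python
def posterior_from_witnesses(prior: float, reliabilities: list) -> float:
    """Posterior probability of a claim after several witnesses assert it,
    assuming the reports are probabilistically independent given the
    truth or falsity of the claim: a witness with reliability r asserts
    a true claim with probability r and a false one with probability 1 - r."""
    odds = prior / (1 - prior)
    for r in reliabilities:
        odds *= r / (1 - r)  # each independent report multiplies the odds
    return odds / (1 + odds)

print(posterior_from_witnesses(0.1, [0.7]))            # ~0.21 after one witness
print(posterior_from_witnesses(0.1, [0.7, 0.7, 0.7]))  # ~0.59 after three
```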
Objections
Problem of priors
In order to draw probabilistic inferences based on new evidence, it is necessary to already have a prior probability assigned to the proposition in question. But this is not always the case: there are many propositions that the agent never considered and therefore lacks a credence for. This problem is usually solved by assigning a probability to the proposition in question in order to learn from the new evidence through conditionalization. The problem of priors concerns the question of how this initial assignment should be done. Subjective Bayesians hold that there are no or few constraints besides probabilistic coherence that determine how we assign the initial probabilities. The argument for this freedom in choosing the initial credence is that the credences will change as we acquire more evidence and will converge on the same value after enough steps no matter where we start. Objective Bayesians, on the other hand, assert that there are various constraints that determine the initial assignment. One important constraint is the principle of indifference. It states that the credences should be distributed equally among all the possible outcomes. For example, suppose an agent wants to predict the color of balls drawn from an urn containing only red and black balls, without any information about the ratio of red to black balls. Applied to this situation, the principle of indifference states that the agent should initially assume that the probability of drawing a red ball is 50%. This is due to symmetry considerations: it is the only assignment in which the prior probabilities are invariant under a change of labels. While this approach works for some cases, it produces paradoxes in others. Another objection is that one should not assign prior probabilities based on initial ignorance.
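The subjective Bayesian's convergence argument can be illustrated by simulation. In the sketch below, the urn compositions, the two priors, and the number of draws are all illustrative assumptions; the point is that two agents who start far apart and conditionalize on the same draws end up with nearly identical credences:

```python
import random

def update(prior: float, draw_is_red: bool,
           p_red_if_mostly_red: float = 0.7,
           p_red_if_mostly_black: float = 0.3) -> float:
    """Conditionalize the credence that the urn is mostly red on one draw."""
    lik = p_red_if_mostly_red if draw_is_red else 1 - p_red_if_mostly_red
    alt = p_red_if_mostly_black if draw_is_red else 1 - p_red_if_mostly_black
    return lik * prior / (lik * prior + alt * (1 - prior))

random.seed(0)
agent_a, agent_b = 0.9, 0.5              # two very different initial credences
for _ in range(200):
    draw_is_red = random.random() < 0.7  # the urn is in fact mostly red
    agent_a = update(agent_a, draw_is_red)
    agent_b = update(agent_b, draw_is_red)

print(round(agent_a, 6), round(agent_b, 6))  # both converge toward 1
```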
Problem of logical omniscience
The norms of rationality according to the standard definitions of Bayesian epistemology assume logical omniscience: the agent has to make sure to exactly follow all the laws of probability for all her credences in order to count as rational. Whoever fails to do so is vulnerable to Dutch books and is therefore irrational. This is an unrealistic standard for human beings, as critics have pointed out.
Problem of old evidence
The problem of old evidence concerns cases in which the agent does not know at the time of acquiring a piece of evidence that it confirms a hypothesis but only learns about this supporting relation later. Normally, the agent would increase her belief in the hypothesis after discovering this relation. But this is not allowed in Bayesian confirmation theory, since conditionalization can only happen upon a change in the probability of the evidential statement, and in these cases that probability has not changed. For example, the observation of certain anomalies in the orbit of Mercury is evidence for the theory of general relativity. But this data had been obtained before the theory was formulated, thereby counting as old evidence.
See also
Bayesian statistics
Bayesian probability
Bayesian inference
Probability interpretations
Hermeneutics

Hermeneutics is the theory and methodology of interpretation, especially the interpretation of biblical texts, wisdom literature, and philosophical texts. As necessary, hermeneutics may include the art of understanding and communication.
Modern hermeneutics includes both verbal and non-verbal communication, as well as semiotics, presuppositions, and pre-understandings. Hermeneutics has been broadly applied in the humanities, especially in law, history and theology.
Hermeneutics was initially applied to the interpretation, or exegesis, of scripture, and has been later broadened to questions of general interpretation. The terms hermeneutics and exegesis are sometimes used interchangeably. Hermeneutics is a wider discipline which includes written, verbal, and nonverbal communication. Exegesis focuses primarily upon the word and grammar of texts.
Hermeneutic, as a count noun in the singular, refers to some particular method of interpretation (see, in contrast, double hermeneutic).
Etymology
Hermeneutics is derived from the Greek word ἑρμηνεύω (hermēneuō, "translate, interpret"), from ἑρμηνεύς (hermeneus, "translator, interpreter"), of uncertain etymology (R. S. P. Beekes (2009) suggests a Pre-Greek origin). The technical term ἑρμηνεία (hermeneia, "interpretation, explanation") was introduced into philosophy mainly through the title of Aristotle's work Περὶ Ἑρμηνείας ("Peri Hermeneias"), commonly referred to by its Latin title De Interpretatione and translated in English as On Interpretation. It is one of the earliest (c. 360 BCE) extant philosophical works in the Western tradition to deal with the relationship between language and logic in a comprehensive, explicit and formal way.
The early usage of "hermeneutics" places it within the boundaries of the sacred. A divine message must be received with implicit uncertainty regarding its truth. This ambiguity is an irrationality; it is a sort of madness that is inflicted upon the receiver of the message. Only one who possesses a rational method of interpretation (i.e., a hermeneutic) could determine the truth or falsity of the message.
Folk etymology
Folk etymology places its origin with Hermes, the mythological Greek deity who was the 'messenger of the gods'. Besides being a mediator among the gods and between the gods and men, he led souls to the underworld upon death.
Hermes was also considered to be the inventor of language and speech, an interpreter, a liar, a thief and a trickster. These multiple roles made Hermes an ideal representative figure for hermeneutics. As Socrates noted, words have the power to reveal or conceal and can deliver messages in an ambiguous way. The Greek view of language as consisting of signs that could lead to truth or to falsehood was the essence of Hermes, who was said to relish the uneasiness of those who received the messages he delivered.
In religious traditions
Mesopotamian hermeneutics
Islamic hermeneutics
Talmudic hermeneutics
Summaries of the principles by which Torah can be interpreted date back to, at least, Hillel the Elder, although the thirteen principles set forth in the Baraita of Rabbi Ishmael are perhaps the best known. These principles ranged from standard rules of logic (e.g., a fortiori argument [known in Hebrew as קל וחומר – kal v'chomer]) to more expansive ones, such as the rule that a passage could be interpreted by reference to another passage in which the same word appears (Gezerah Shavah). The rabbis did not ascribe equal persuasive power to the various principles.
Traditional Jewish hermeneutics differed from the Greek method in that the rabbis considered the Tanakh (the Jewish Biblical canon) to be without error. Any apparent inconsistencies had to be understood by means of careful examination of a given text within the context of other texts. There were different levels of interpretation: some were used to arrive at the plain meaning of the text, some expounded the law given in the text, and others found secret or mystical levels of understanding.
Vedic hermeneutics
Vedic hermeneutics involves the exegesis of the Vedas, the earliest holy texts of Hinduism. The Mimamsa was the leading hermeneutic school and their primary purpose was understanding what Dharma (righteous living) involved by a detailed hermeneutic study of the Vedas. They also derived the rules for the various rituals that had to be performed precisely.
The foundational text is the Mimamsa Sutra of Jaimini (ca. 3rd to 1st century BCE) with a major commentary by Śabara (ca. the 5th or 6th century CE). The Mimamsa sutra summed up the basic rules for Vedic interpretation.
Buddhist hermeneutics
Buddhist hermeneutics deals with the interpretation of the vast Buddhist literature, particularly those texts which are said to be spoken by the Buddha (Buddhavacana) and other enlightened beings. Buddhist hermeneutics is deeply tied to Buddhist spiritual practice and its ultimate aim is to extract skillful means of reaching spiritual enlightenment or nirvana. A central question in Buddhist hermeneutics is which Buddhist teachings are explicit, representing ultimate truth, and which teachings are merely conventional or relative.
Biblical hermeneutics
Biblical hermeneutics is the study of the principles of interpretation of the Bible. While Jewish and Christian biblical hermeneutics have some overlap, they have very different interpretive traditions.
The early patristic traditions of biblical exegesis had few unifying characteristics in the beginning but tended toward unification in later schools of biblical hermeneutics.
Augustine offers hermeneutics and homiletics in his De doctrina christiana. He stresses the importance of humility in the study of Scripture. He also regards the duplex commandment of love in Matthew 22 as the heart of Christian faith. In Augustine's hermeneutics, signs have an important role. God can communicate with the believer through the signs of the Scriptures. Thus, humility, love, and the knowledge of signs are an essential hermeneutical presupposition for a sound interpretation of the Scriptures. Although Augustine endorses some teaching of the Platonism of his time, he recasts it according to a theocentric doctrine of the Bible. Similarly, in a practical discipline, he modifies the classical theory of oratory in a Christian way. He underscores the meaning of diligent study of the Bible and prayer as more than mere human knowledge and oratory skills. As a concluding remark, Augustine encourages the interpreter and preacher of the Bible to seek a good manner of life and, most of all, to love God and neighbor.
There is traditionally a fourfold sense of biblical hermeneutics: literal, moral, allegorical (spiritual), and anagogical.
Literal
Encyclopædia Britannica states that literal analysis means “a biblical text is to be deciphered according to the ‘plain meaning’ expressed by its linguistic construction and historical context.” The intention of the authors is believed to correspond to the literal meaning. Literal hermeneutics is often associated with the verbal inspiration of the Bible.
Moral
Moral interpretation searches for moral lessons which can be understood from writings within the Bible. Allegories are often placed in this category.
Allegorical
Allegorical interpretation states that biblical narratives have a second level of reference that is more than the people, events and things that are explicitly mentioned. One type of allegorical interpretation is known as typological, where the key figures, events, and establishments of the Old Testament are viewed as “types” (patterns). In the New Testament this can also include foreshadowing of people, objects, and events. According to this theory, readings like Noah's Ark could be understood by using the Ark as a “type” of the Christian church that God designed from the start.
Anagogical
This type of interpretation is more often known as mystical interpretation. It claims to explain the events of the Bible and how they relate to or predict what the future holds. This is evident in the Jewish Kabbalah, which attempts to reveal the mystical significance of the numerical values of Hebrew words and letters.
In Judaism, anagogical interpretation is also evident in the medieval Zohar. In Christianity, it can be seen in Mariology.
Philosophical hermeneutics
Ancient and medieval hermeneutics
Modern hermeneutics
The discipline of hermeneutics emerged with the new humanist education of the 15th century as a historical and critical methodology for analyzing texts. In a triumph of early modern hermeneutics, the Italian humanist Lorenzo Valla proved in 1440 that the Donation of Constantine was a forgery. This was done through intrinsic evidence of the text itself. Thus hermeneutics expanded from its medieval role of explaining the true meaning of the Bible.
However, biblical hermeneutics did not die off. For example, the Protestant Reformation brought about a renewed interest in the interpretation of the Bible, which took a step away from the interpretive tradition developed during the Middle Ages back to the texts themselves. Martin Luther and John Calvin emphasized scriptura sui ipsius interpres (scripture interprets itself). Calvin used brevitas et facilitas as an aspect of theological hermeneutics.
The rationalist Enlightenment led hermeneutists, especially Protestant exegetists, to view Scriptural texts as secular classical texts. They interpreted Scripture as responses to historical or social forces so that, for example, apparent contradictions and difficult passages in the New Testament might be clarified by comparing their possible meanings with contemporary Christian practices.
Friedrich Schleiermacher (1768–1834) explored the nature of understanding in relation not just to the problem of deciphering sacred texts but to all human texts and modes of communication.
The interpretation of a text must proceed by framing its content in terms of the overall organization of the work. Schleiermacher distinguished between grammatical interpretation and psychological interpretation. The former studies how a work is composed from general ideas; the latter studies the peculiar combinations that characterize the work as a whole. He said that every problem of interpretation is a problem of understanding and even defined hermeneutics as the art of avoiding misunderstanding. Misunderstanding was to be avoided by means of knowledge of grammatical and psychological laws.
During Schleiermacher's time, a fundamental shift occurred from understanding not merely the exact words and their objective meaning, to an understanding of the writer's distinctive character and point of view.
Nineteenth- and twentieth-century hermeneutics emerged as a theory of understanding (Verstehen) through the work of Friedrich Schleiermacher (Romantic hermeneutics and methodological hermeneutics), August Böckh (methodological hermeneutics), Wilhelm Dilthey (epistemological hermeneutics), Martin Heidegger (ontological hermeneutics, hermeneutic phenomenology, and transcendental hermeneutic phenomenology), Hans-Georg Gadamer (ontological hermeneutics), Leo Strauss (Straussian hermeneutics), Paul Ricœur (hermeneutic phenomenology), Walter Benjamin (Marxist hermeneutics), Ernst Bloch (Marxist hermeneutics), Jacques Derrida (radical hermeneutics, namely deconstruction), Richard Kearney (diacritical hermeneutics), Fredric Jameson (Marxist hermeneutics), and John Thompson (critical hermeneutics).
Regarding the relation of hermeneutics to problems of analytic philosophy, there has been an attempt, particularly among analytic Heideggerians and those working on Heidegger's philosophy of science, to situate Heidegger's hermeneutic project in debates concerning realism and anti-realism. Arguments have been presented both for Heidegger's hermeneutic idealism (the thesis that meaning determines reference or, equivalently, that our understanding of the being of entities is what determines entities as entities) and for Heidegger's hermeneutic realism (the thesis that (a) there is a nature in itself and science can give us an explanation of how that nature works, and (b) that (a) is compatible with the ontological implications of our everyday practices).
Philosophers that worked to combine analytic philosophy with hermeneutics include Georg Henrik von Wright and Peter Winch. Roy J. Howard termed this approach analytic hermeneutics.
Other contemporary philosophers influenced by the hermeneutic tradition include Charles Taylor (engaged hermeneutics) and Dagfinn Føllesdal.
Dilthey (1833–1911)
Wilhelm Dilthey broadened hermeneutics even more by relating interpretation to historical objectification. Understanding moves from the outer manifestations of human action and productivity to the exploration of their inner meaning. In his last important essay, "The Understanding of Other Persons and Their Manifestations of Life" (1910), Dilthey made clear that this move from outer to inner, from expression to what is expressed, is not based on empathy, understood as a direct identification with the Other. On a hermeneutical conception, interpretation instead involves an indirect or mediated understanding that can only be attained by placing human expressions in their historical context. Thus, understanding is not a process of reconstructing the state of mind of the author, but one of articulating what is expressed in his work.
Dilthey divided sciences of the mind (human sciences) into three structural levels: experience, expression, and comprehension.
Experience means to feel a situation or thing personally. Dilthey suggested that we can always grasp the meaning of unknown thought when we try to experience it. His understanding of experience is very similar to that of phenomenologist Edmund Husserl.
Expression converts experience into meaning because the discourse has an appeal to someone outside of oneself. Every saying is an expression. Dilthey suggested that one can always return to an expression, especially to its written form, and this practice has the same objective value as an experiment in science. The possibility of returning makes scientific analysis possible, and therefore the humanities may be labeled as science. Moreover, he assumed that an expression may be "saying" more than the speaker intends because the expression brings forward meanings which the individual consciousness may not fully understand.
The last structural level of the science of the mind, according to Dilthey, is comprehension, which is a level that contains both comprehension and incomprehension. Incomprehension means, more or less, wrong understanding. He assumed that comprehension produces coexistence: "he who understands, understands others; he who does not understand stays alone."
Heidegger (1889–1976)
In the 20th century, Martin Heidegger's philosophical hermeneutics shifted the focus from interpretation to existential understanding as rooted in fundamental ontology, which was treated more as a direct—and thus more authentic—way of being-in-the-world (In-der-Welt-sein) than merely as "a way of knowing." For example, he called for a "special hermeneutic of empathy" to dissolve the classic philosophic issue of "other minds" by putting the issue in the context of the being-with of human relatedness. (Heidegger himself did not complete this inquiry.)
Advocates of this approach claim that some texts, and the people who produce them, cannot be studied by means of using the same scientific methods that are used in the natural sciences, thus drawing upon arguments similar to those of antipositivism. Moreover, they claim that such texts are conventionalized expressions of the experience of the author. Thus, the interpretation of such texts will reveal something about the social context in which they were formed, and, more significantly, will provide the reader with a means of sharing the experiences of the author.
The reciprocity between text and context is part of what Heidegger called the hermeneutic circle. Among the key thinkers who elaborated this idea was the sociologist Max Weber.
Gadamer (1900–2002)
Hans-Georg Gadamer's hermeneutics is a development of the hermeneutics of his teacher, Heidegger. Gadamer asserted that methodical contemplation is opposite to experience and reflection. We can reach the truth only by understanding or mastering our experience. According to Gadamer, our understanding is not fixed but rather is changing and always indicating new perspectives. The most important thing is to unfold the nature of individual understanding.
Gadamer pointed out that prejudice is an element of our understanding and is not per se without value. Indeed, prejudices, in the sense of pre-judgements of the thing we want to understand, are unavoidable. Being alien to a particular tradition is a condition of our understanding. He said that we can never step outside of our tradition—all we can do is try to understand it. This further elaborates the idea of the hermeneutic circle.
New hermeneutic
New hermeneutic is the theory and methodology of interpretation to understand Biblical texts through existentialism. The essence of new hermeneutic emphasizes not only the existence of language but also the fact that language is eventualized in the history of individual life. This is called the event of language. Ernst Fuchs, Gerhard Ebeling, and James M. Robinson are the scholars who represent the new hermeneutics.
Marxist hermeneutics
The method of Marxist hermeneutics has been developed primarily in the work of Walter Benjamin and Fredric Jameson. Benjamin outlines his theory of allegory in his study Ursprung des deutschen Trauerspiels ("Trauerspiel" literally means "mourning play" but is often translated as "tragic drama"). Fredric Jameson draws on Biblical hermeneutics, Ernst Bloch, and the work of Northrop Frye to advance his theory of Marxist hermeneutics in his influential The Political Unconscious. Jameson's Marxist hermeneutics is outlined in the first chapter of the book, titled "On Interpretation". Jameson re-interprets (and secularizes) the fourfold system (or four levels) of Biblical exegesis (literal, moral, allegorical, anagogical) to relate interpretation to the mode of production and, eventually, to history.
Objective hermeneutics
Karl Popper first used the term "objective hermeneutics" in his Objective Knowledge (1972).
In 1992, the Association for Objective Hermeneutics (AGOH) was founded in Frankfurt am Main by scholars of various disciplines in the humanities and social sciences. Its goal is to provide all scholars who use the methodology of objective hermeneutics with a means of exchanging information.
In one of the few translated texts of this German school of hermeneutics, its founders declared:
Other recent developments
Bernard Lonergan's (1904–1984) hermeneutics is less well known, but a case for considering his work as the culmination of the postmodern hermeneutical revolution that began with Heidegger was made in several articles by Lonergan specialist Frederick G. Lawrence.
Paul Ricœur (1913–2005) developed a hermeneutics that is based upon Heidegger's concepts. His work differs in many ways from that of Gadamer.
Karl-Otto Apel (1922–2017) elaborated a hermeneutics based on American semiotics. He applied his model to discourse ethics with political motivations akin to those of critical theory.
Jürgen Habermas (b. 1929) criticized the conservatism of previous hermeneutists, especially Gadamer, because their focus on tradition seemed to undermine possibilities for social criticism and transformation. He also criticized Marxism and previous members of the Frankfurt School for missing the hermeneutical dimension of critical theory.
Habermas incorporated the notion of the lifeworld and emphasized the importance for social theory of interaction, communication, labor, and production. He viewed hermeneutics as a dimension of critical social theory.
Rudolf Makkreel (b. 1939) has proposed an orientational hermeneutics that brings out the contextualizing function of reflective judgment. It extends ideas of Kant and Dilthey to supplement the dialogical approach of Gadamer with a diagnostic approach that can deal with an ever-changing and multicultural world.
Andrés Ortiz-Osés (1943–2021) developed his symbolic hermeneutics as the Mediterranean response to Northern European hermeneutics. His main statement regarding symbolic understanding of the world is that meaning is a symbolic healing of injury.
Two other important hermeneutic scholars are Jean Grondin (b. 1955) and Maurizio Ferraris (b. 1956).
Mauricio Beuchot coined the term and discipline of analogic hermeneutics, which is a type of hermeneutics that is based upon interpretation and takes into account the plurality of aspects of meaning. He drew categories both from analytic and continental philosophy, as well as from the history of thought.
Two scholars who have published criticism of Gadamer's hermeneutics are the Italian jurist Emilio Betti and the American literary theorist E. D. Hirsch.
Applications
Archaeology
In archaeology, hermeneutics means the interpretation and understanding of material through analysis of possible meanings and social uses.
Proponents argue that the interpretation of artifacts is unavoidably hermeneutic because we cannot know for certain the meaning behind them; we can only apply modern values when interpreting. This is most commonly seen in stone tools, where descriptions such as "scraper" can be highly subjective and remained effectively unproven until the development of microwear analysis some thirty years ago.
Opponents argue that a hermeneutic approach is too relativist and that their own interpretations are based on common-sense evaluation.
Architecture
There are several traditions of architectural scholarship that draw upon the hermeneutics of Heidegger and Gadamer, such as Christian Norberg-Schulz, and Nader El-Bizri in the circles of phenomenology. Lindsay Jones examines the way architecture is received and how that reception changes with time and context (e.g., how a building is interpreted by critics, users, and historians). Dalibor Vesely situates hermeneutics within a critique of the application of overly scientific thinking to architecture. This tradition fits within a critique of the Enlightenment and has also informed design-studio teaching. Adrian Snodgrass sees the study of history and Asian cultures by architects as a hermeneutical encounter with otherness. He also deploys arguments from hermeneutics to explain design as a process of interpretation. Along with Richard Coyne, he extends the argument to the nature of architectural education and design.
Education
Hermeneutics motivates a broad range of applications in educational theory. The connection between hermeneutics and education has deep historical roots. The ancient Greeks gave the interpretation of poetry a central place in educational practice, as indicated by Dilthey: "systematic exegesis (hermeneia) of the poets developed out of the demands of the educational system."
Gadamer more recently wrote on the topic of education, and more recent treatments of educational issues across various hermeneutical approaches are to be found in Fairfield and Gallagher.
Environment
Environmental hermeneutics applies hermeneutics to environmental issues conceived broadly, to subjects including "nature" and "wilderness" (both terms are matters of hermeneutical contention), landscapes, ecosystems, built environments (where it overlaps architectural hermeneutics), inter-species relationships, the relationship of the body to the world, and more.
International relations
Insofar as hermeneutics is a basis of both critical theory and constitutive theory (both of which have made important inroads into the postpositivist branch of international relations theory and political science), it has been applied to international relations.
Steve Smith refers to hermeneutics as the principal way of grounding foundationalist yet postpositivist theory of international relations.
Radical postmodernism is an example of a postpositivist, anti-foundationalist paradigm of international relations.
Law
Some scholars argue that law and theology are particular forms of hermeneutics because of their need to interpret legal tradition or scriptural texts. Moreover, the problem of interpretation has been central to legal theory since at least the 11th century.
In the Middle Ages and Italian Renaissance, the schools of glossatores, commentatores, and usus modernus distinguished themselves by their approach to the interpretation of "laws" (mainly Justinian's Corpus Juris Civilis). The University of Bologna gave birth to a "legal Renaissance" in the 11th century, when the Corpus Juris Civilis was rediscovered and systematically studied by men such as Irnerius and Johannes Gratian. It was an interpretative Renaissance. Subsequently, these were fully developed by Thomas Aquinas and Alberico Gentili.
Since then, interpretation has always been at the center of legal thought. Friedrich Carl von Savigny and Emilio Betti, among others, made significant contributions to general hermeneutics. Legal interpretivism, most famously Ronald Dworkin's, may be seen as a branch of philosophical hermeneutics.
Phenomenology
In qualitative research, the beginnings of phenomenology stem from the German philosopher and researcher Edmund Husserl. In his early days, Husserl studied mathematics, but over time his disinterest in empirical methods led him to philosophy and eventually phenomenology. Husserl's phenomenology inquires into the specifics of a certain experience or experiences and attempts to unfold the meaning of experience in everyday life. Phenomenology started as philosophy and then developed into a methodology over time. The American researcher Don Ihde contributed to phenomenological research methodology through what he described as experimental phenomenology: "Phenomenology, in the first instance, is like an investigative science, an essential component of which is an experiment." His work contributed heavily to the implementation of phenomenology as a methodology.
The beginnings of hermeneutic phenomenology stem from a German researcher and student of Husserl, Martin Heidegger. Both researchers attempted to pull out the lived experiences of others through philosophical concepts, but Heidegger's main difference from Husserl was his belief that consciousness was not separate from the world but a formation of who we are as living individuals. Hermeneutic phenomenology stresses that every event or encounter involves some type of interpretation from an individual's background, and that we cannot separate this from an individual's development through life. Ihde also focuses on hermeneutic phenomenology within his early work, and draws connections between Husserl and French philosopher Paul Ricoeur's work in the field. Ricoeur focuses on the importance of symbols and linguistics within hermeneutic phenomenology. Overall, hermeneutic phenomenological research focuses on historical meanings and experiences, and their developmental and social effects on individuals.
Political philosophy
Italian philosopher Gianni Vattimo and Spanish philosopher Santiago Zabala in their book Hermeneutic Communism, when discussing contemporary capitalist regimes, stated that, "A politics of descriptions does not impose power in order to dominate as a philosophy; rather, it is functional for the continued existence of a society of dominion, which pursues truth in the form of imposition (violence), conservation (realism), and triumph (history)."
Vattimo and Zabala also stated that they view interpretation as anarchy and affirmed that "existence is interpretation" and that "hermeneutics is weak thought."
Psychoanalysis
Psychoanalysts have made ample use of hermeneutics since Sigmund Freud first gave birth to their discipline. In 1900 Freud wrote that the title he chose for The Interpretation of Dreams 'makes plain which of the traditional approaches to the problem of dreams I am inclined to follow...[i.e.] "interpreting" a dream implies assigning a "meaning" to it.'
The French psychoanalyst Jacques Lacan later extended Freudian hermeneutics into other psychical realms. His early work from the 1930s to the 1950s was particularly influenced by Heidegger and by Maurice Merleau-Ponty's hermeneutical phenomenology.
Psychology and cognitive science
Psychologists and cognitive scientists have recently become interested in hermeneutics, especially as an alternative to cognitivism.
Hubert Dreyfus's critique of conventional artificial intelligence has been influential among psychologists who are interested in hermeneutic approaches to meaning and interpretation, as discussed by philosophers such as Martin Heidegger (cf. Embodied cognition) and Ludwig Wittgenstein (cf. Discursive psychology).
Hermeneutics is also influential in humanistic psychology.
Religion and theology
The understanding of a theological text depends upon the reader's particular hermeneutical viewpoint. Some theorists, such as Paul Ricœur, have applied modern philosophical hermeneutics to theological texts (in Ricœur's case, the Bible).
Mircea Eliade, as a hermeneutist, understands religion as 'experience of the sacred', and interprets the sacred in relation to the profane. The Romanian scholar underlines that the relation between the sacred and the profane is not of opposition, but of complementarity, having interpreted the profane as a hierophany. The hermeneutics of the myth is a part of the hermeneutics of religion. Myth should not be interpreted as an illusion or a lie, because there is truth in myth to be rediscovered. Myth is interpreted by Mircea Eliade as 'sacred history'. He introduces the concept of 'total hermeneutics'.
Safety science
In the field of safety science, and especially in the study of human reliability, scientists have become increasingly interested in hermeneutic approaches.
It has been proposed by ergonomist Donald Taylor that mechanist models of human behaviour will only take us so far in terms of accident reduction, and that safety science must look at the meaning of accidents for human beings.
Other scholars in the field have attempted to create safety taxonomies that make use of hermeneutic concepts in terms of their categorisation of qualitative data.
Sociology
In sociology, hermeneutics is the interpretation and understanding of social events through analysis of their meanings for the human participants in the events. It enjoyed prominence during the 1960s and 1970s, and differs from other interpretive schools of sociology in that it emphasizes both context and form within any given social behaviour.
The central principle of sociological hermeneutics is that it is only possible to know the meaning of an act or statement within the context of the discourse or world view from which it originates. Context is critical to comprehension; an action or event that carries substantial weight to one person or culture may be viewed as meaningless or entirely different to another. For example, giving the "thumbs-up" gesture is widely accepted as a sign of a job well done in the United States, while other cultures view it as an insult. Similarly, marking a piece of paper and putting it into a box might be considered a meaningless act unless it is put into the context of an election (the act of putting a ballot paper into a box).
Friedrich Schleiermacher, widely regarded as the father of sociological hermeneutics, believed that, in order for an interpreter to understand the work of another author, they must familiarize themselves with the historical context in which the author published their thoughts. His work inspired Heidegger's "hermeneutic circle", a frequently referenced model that claims one's understanding of individual parts of a text is based on their understanding of the whole text, while the understanding of the whole text is dependent on the understanding of each individual part. Hermeneutics in sociology was also heavily influenced by German philosopher Hans-Georg Gadamer.
Criticism
Jürgen Habermas criticizes Gadamer's hermeneutics as being unsuitable for understanding society because it is unable to account for questions of social reality, like labor and domination.
See also
Allegorical interpretations of Plato
Authorial intentionalism
Biblical law in Christianity
Close reading
Gymnobiblism
Hermeneutics of suspicion
Historical poetics
Narrative inquiry
Parallelomania
Pesher
Philology
Principle of charity
Quranic hermeneutics
Reader-response criticism
Structuration theory
Symbolic anthropology
Tafsir
Talmudical hermeneutics
Text criticism
Theosophy
Truth theory
Notable precursors
Johann August Ernesti
Johann Gottfried Herder
Friedrich August Wolf
Georg Anton Friedrich Ast
References
External links
Abductive Inference and Literary theory – Pragmatism, Hermeneutics and Semiotics written by Uwe Wirth.
Meta: Research in Hermeneutics, Phenomenology, and Practical Philosophy – International peer-reviewed journal.
Objective Hermeneutics Bibliographic Database provided by the Association for Objective Hermeneutics.
de Berg, Henk: Gadamer's Hermeneutics: An Introduction (2015)
de Berg, Henk: Ricoeur's Hermeneutics: An Introduction (2015)
Palmer, Richard E., "The Liminality of Hermes and the Meaning of Hermeneutics"
Palmer, Richard E., "The Relevance of Gadamer's Philosophical Hermeneutics to Thirty-Six Topics or Fields of Human Activity", Lecture Delivered at the Department of Philosophy, Southern Illinois University, Carbondale, IL, 1 April 1999, Eprint.
Plato, Ion, Paul Woodruff (trans.) in Plato, Complete Works, ed. John M. Cooper. Indianapolis: Hackett Publishing Company, 1997, pp. 937–949.
Quintana Paz, Miguel Ángel, "On Hermeneutical Ethics and Education", a paper on the relevance of Gadamer's Hermeneutics for our understanding of Music, Ethics and our Education in both.
Szesnat, Holger, "Philosophical Hermeneutics", Webpage.
Literary criticism
Martin Heidegger
Philosophical methodology
Religious terminology | 0.775849 | 0.998897 | 0.774993 |
Definitions of education
Definitions of education aim to describe the essential features of education. A great variety of definitions has been proposed. There is wide agreement that education involves, among other things, the transmission of knowledge. But there are deep disagreements about its exact nature and characteristics. Some definitions see education as a process exemplified in events like schooling, teaching, and learning. Others understand it not as a process but as the product of such processes, i.e. as what characterizes educated persons. Various attempts have been made to give precise definitions listing its necessary and sufficient conditions. The failure of such attempts, often in the form of being unable to account for various counterexamples, has led many theorists to adopt less precise conceptions based on family resemblance. On this view, different forms of education are similar by having overlapping features but there is no set of features shared by all forms. Clarity about the nature of education is central for various issues, for example, to coherently talk about the subject and to determine how to achieve and measure it.
An important discussion in the academic literature is about whether evaluative aspects are already part of the definition of education and, if so, what roles they play. Thin definitions are value-neutral while thick definitions include evaluative and normative components, for example, by holding that education implies that the person educated has changed for the better. Descriptive conceptions try to capture how the term "education" is used by competent speakers. Prescriptive conceptions, on the other hand, stipulate what education should be like or what constitutes good education.
Thick and prescriptive conceptions often characterize education in relation to the goals it aims to realize. These goals are sometimes divided into epistemic goods, like knowledge and understanding, skills, like rationality and critical thinking, and character traits, like kindness and honesty. Some theorists define education in relation to an overarching purpose, like socialization or helping the learner lead a good life. The more specific aims can then be understood as means to achieve this overarching purpose. Various researchers emphasize the role of critical thinking to distinguish education from indoctrination.
Traditional accounts of education characterize it mainly from the teacher's perspective, usually by describing it as a process in which they transmit knowledge and skills to their students. Student-centered definitions, on the other hand, emphasize the student's experience, for example, based on how education transforms and enriches their subsequent experience. Some conceptions take both the teacher's and the student's point of view into account by focusing on their shared experience of a common world.
General characteristics, disagreements, and importance
Definitions of education try to determine the essential features of education. Many general characteristics have been ascribed to education. However, there are several disagreements concerning its exact definition and a great variety of definitions have been proposed by theorists belonging to diverse fields. There is wide agreement that education is a purposeful activity directed at achieving certain aims. In this sense, education involves the transmission of knowledge. But it is often pointed out that this factor alone is not sufficient and needs to be accompanied by other factors, such as the acquisition of practical skills or instilling moral character traits.
Many definitions see education as a task or a process. In this regard, the conception of education is based on what happens during events like schooling, training, instructing, teaching, and learning. This process may in turn be understood either from the perspective of the teacher or with a focus on the student's experience instead. However, other theorists focus mainly on education as an achievement, a state, or a product that results as a consequence of the process of being educated. Such approaches are usually based on the features, mental states, and character traits exemplified by educated persons. In this regard, being educated implies having an encompassing familiarity with various topics. So one does not become an educated person just by undergoing specialized training in one specific field. Besides these two meanings, the term "education" may also refer to the academic field studying the methods and processes involved in teaching and learning or to social institutions employing these processes.
Education is usually understood as a very general term that has a wide family of diverse instances. Nonetheless, some attempts have been made to give a precise definition of the essential features shared by all forms of education. An influential early attempt was made by R. S. Peters in his book "Ethics and Education", where he suggests three criteria that constitute the necessary and sufficient conditions of education: (1) it is concerned with the transmission of knowledge and understanding; (2) this transmission is worthwhile; and (3) it is done in a morally appropriate manner in tune with the student's interests. This definition has received a lot of criticism in the academic literature. While there is wide agreement that many forms of education fall under these three criteria, opponents have denied that they hold for all of them by providing various counterexamples. For example, in regard to the third criterion, it may sometimes be necessary to educate children about certain facts even though they are not interested in learning about these facts. And regarding the second criterion, not everyone agrees that education is always desirable. Because of the various difficulties with, and counterexamples to, this and other precise definitions, some theorists have argued that there is no one true definition of education. In this regard, the different forms of education may be seen as a group of loosely connected topics and "different groups within a society may have differing legitimate conceptions of education".
Some theorists have responded to this by defining education in terms of family resemblance. This is to say that there is no one precise set of features shared by all and only forms of education. Instead, there is a group of many features characteristic of education. Some of these features apply to one form of education while slightly different ones are exemplified by another form of education. In this sense, any two forms of education are similar and their characteristic features overlap without being identical. This is closely related to the idea that words are like tools used in language games. On this view, there may be various language games or contexts in which the term "education" is used, each with a slightly different meaning. Following this line of thought, it has been suggested that definitions of education should limit themselves to a specific context without claiming to be true for all possible uses of the term. The most paradigmatic form of education takes place in schools. Many researchers specifically have this type of education in mind and some define it explicitly as the discipline investigating the methods of teaching and learning in a formal setting, like schools. But in its widest sense, it encompasses many other forms as well, including informal and non-formal education.
Clarity about the nature of education is important for various concerns. In a general sense, it is needed to identify and coherently talk about education. In this regard, all the subsequent academic discourse on topics like the aims of education, the psychology of education, or the role of education in society, depends on this issue. For example, when trying to determine what good education is like, one has to already assume some idea of what education is to decide what constitutes a good instance. It is also central for questions about how to achieve and measure the results of educational processes. The importance of providing an explicit definition is further increased by the fact that education initially seems to be a straightforward and common-sense concept that people usually use outside the academic discourse without much controversy. This impression hides various conceptual confusions and disagreements that only come to light in the attempt to make explicit the common pre-understanding associated with the term.
Many concrete definitions of education have been proposed. According to John Dewey, education involves the transmission of habits, ideals, hopes, expectations, standards, and opinions from one generation to the next. R. S. Peters revised his earlier definitions and understands education in his later philosophy as a form of initiation in which teachers share the experience of a common world with their students and convey worthwhile forms of thought and awareness to them. For Lawrence Cremin, "[e]ducation is the deliberate, systematic, and sustained effort to transmit, provoke or acquire knowledge, values, attitudes, skills or sensibilities as well as any learning that results from the effort". Another definition sees education as "a serious and sustained programme of learning, for the benefit of people qua people rather than only qua role-fillers or functionaries, above the level of what people might pick up for themselves in their daily lives". The English word "education" has its etymological root in the Latin words "educare", meaning "to train" or "to mold", and "educere", meaning "to lead out".
Role of values
There are various disagreements about whether evaluative and normative aspects should already be included in the definition of education and, if so, what roles they play. An important distinction in this regard is between thin and thick definitions. Thin definitions aim to provide a value-neutral description of what education is, independent of whether and to whom it is useful. Thick definitions, on the other hand, include various evaluative and normative components in their characterization, for example, the claim that education implies that the person educated has changed for the better. Otherwise, the process would not deserve the label "education". However, different thick definitions of education may still disagree with each other on what kind of values are involved and in which sense the change in question is an improvement. A closely related distinction is that between descriptive and prescriptive or programmatic conceptions. Descriptive definitions aim to provide a description of how the term "education" is actually used. They contrast with prescriptive definitions, which stipulate what education should be like or what constitutes good education. Some theorists also include an additional category for stipulative definitions, which are sometimes used by individual researchers as shortcuts for what they mean when they use the term without claiming that these are the essential features commonly associated with all forms of education. Thick and prescriptive conceptions are closely related to the aims of education in the sense that they understand education as a process aimed at a certain valuable goal that constitutes an improvement of the learner. Such improvements are often understood in terms of mental states fostered by the educational process.
Role of aims
Many conceptions of education, in particular thick and prescriptive accounts, base their characterizations on the aims of education, i.e. in relation to the purpose that the process of education tries to realize. The transmission of knowledge has a central role in this regard, but most accounts include other aims as well, such as fostering the student's values, attitudes, skills, and sensibilities. However, it has been argued that picking up certain skills and know-how without the corresponding knowledge and conceptual scheme does not constitute education, strictly speaking. But the same limitation may also be true for pure knowledge that is not accompanied by positive practical effects on the individual's life. The various specific aims are sometimes divided into epistemic goods, skills, and character traits. Examples of epistemic goods are truth, knowledge, and understanding. Skill-based accounts, on the other hand, hold that the goal of education is to develop skills like rationality and critical thinking. For character-based accounts, its main purpose is to foster certain character traits or virtues, like kindness, justice, and honesty. Some theorists try to provide a wide overarching framework. The various specific goals are then seen as aims of education to the extent that they serve this overarching purpose. When this purpose is understood in relation to society, education may be defined as the process of transmitting, from one generation to the next, the accumulated knowledge and skills needed to function as a regular citizen in a specific society. In this regard, education is equivalent to socialization or enculturation. More liberal or person-centered definitions, on the other hand, see the overarching purpose in relation to the individual learner instead: education is to help them develop their potential in order to lead a good life or the life they wish to lead, independently of the social ramifications of this process.
Various conceptions emphasize the aim of critical thinking in order to differentiate education from indoctrination. Critical thinking is a form of thinking that is reasonable, reflective, careful, and focused on determining what to believe or how to act. It includes the metacognitive component of monitoring and assessing its achievements in regard to the standards of rationality and clarity. Many theorists hold that fostering this disposition distinguishes education from indoctrination, which only tries to instill beliefs in the student's mind without being interested in their evidential status or fostering the ability to question those beliefs. But not all researchers accept this hard distinction. A few hold that, at least in the early stages of education, some forms of indoctrination are necessary until the child's mind has developed sufficiently to assess and evaluate reasons for and against particular claims and thus employ critical thinking. In this regard, critical thinking may still be an important aim of education but not an essential feature characterizing all forms of education.
Teacher- or student-centered
Most conceptions of education either explicitly or implicitly hold that education involves the relation between teacher and student. Some theorists give their characterization mainly from the teacher's perspective, usually emphasizing the act of transmitting knowledge or other skills, while others focus more on the learning experience of the student. The teacher-centered perspective on education is often seen as the traditional position. An influential example is found in the early philosophy of R. S. Peters. In it, he considers education to be the transmission of knowledge and skills while emphasizing that teachers should achieve this in a morally appropriate manner that reflects the student's interests. A student-centered definition is given by John Dewey, who sees education as the "reconstruction or reorganization of experience which adds to the meaning of experience, and which increases the ability to direct the course of subsequent experience". This way, the student's future experience is enriched and the student thereby undergoes a form of growth. Opponents of this conception have criticized its lack of a normative component. For example, the increase of undesirable abilities, like learning how to become an expert burglar, should not be understood as a form of education even though it is a reorganization of experience that directs the course of subsequent experience.
Other theories aim to provide a more encompassing perspective that takes both the teacher's and the student's point of view into account. Peters, in response to the criticism of his initially proposed definition, has changed his conception of education by giving a wider and less precise definition, seeing it as a type of initiation in which worthwhile forms of thought and awareness are conveyed from teachers to their students. This is based on the idea that both teachers and students participate in the shared experience of a common world. The teachers are more familiar with this world and try to guide the students by passing on their knowledge and understanding. Ideally, this process is motivated by curiosity and excitement on the part of the students to discover what there is and what it is like so that they may one day themselves become authorities on the subject. This conception can be used for answering questions about the contents of the curriculum or what should be taught: whatever the students need most for discovering and participating in the common world.
The shared perspective of both teachers and students is also emphasized by Paulo Freire. In his influential Pedagogy of the Oppressed, he rejects teacher-centered definitions, many of which characterize education using what he refers to as the banking model of education. According to the banking model, students are seen as empty vessels in analogy to piggy banks. It is the role of the teacher to deposit knowledge into the passive students, thereby shaping their character and outlook on the world. Instead, Freire favors a libertarian conception of education. On this view, teachers and students work together in a common activity of posing and solving problems. The goal of this process is to discover a shared and interactive reality, not by consuming ideas created by others but by producing and acting upon one's own ideas. Students and teachers are co-investigators of reality and the role of the teacher is to guide this process by representing the universe instead of merely lecturing about it.
References
Definitions
Education
Education studies
Philosophy of education | 0.783957 | 0.988518 | 0.774955 |
Agnostic theism | Agnostic theism, agnostotheism, or agnostitheism is the philosophical view that encompasses both theism and agnosticism. An agnostic theist believes in the existence of one or more gods, but regards the basis of this proposition as unknown or inherently unknowable. The agnostic theist may also or alternatively be agnostic regarding the properties of the god or gods that they believe in.
Views of agnostic theism
There are numerous beliefs that can be included in agnostic theism, such as fideism, the doctrine that knowledge depends on faith or revelation; not all agnostic theists are fideists, however. Since agnosticism, in the philosophical rather than the religious sense, is an epistemological position on knowledge regarding the divine and does not forbid belief in the existence of one or more deities, it is considered to be compatible with both atheistic and theistic positions.
The classical philosophical understanding of knowledge is that knowledge is justified true belief. The founder of logotherapy, Viktor Frankl, may well have exemplified this definition. Seidner expands upon this example and stresses Frankl's characterization of the unconscious. Agnostic theism could be interpreted as an admission that it is not possible to justify one's belief in a god sufficiently for it to be considered known. This may be because agnostic theists consider faith a requirement of their religion, or because of the influence of plausible-seeming scientific or philosophical criticism.
Christian agnostics practice a distinct form of agnosticism that applies only to the attributes of the Christian god. They hold that it is difficult or impossible to be sure of anything beyond the basic tenets of the Christian faith. They believe that the Christian god exists, that Jesus has a special relationship with him and is in some way divine, and that God might perhaps be worshipped. This belief system has deep roots in Judaism and the early days of the Christian Church.
See also
Agnostic atheism
Christian agnosticism
Cosmological argument
Deism
Doubt: Philosophy and ethics
Epistemology: Belief
Faith
Fideism
Ietsism
Ignosticism
Negative theology
Numinous
Secular theology
Sola fide
Truth
Unitarian Universalism
Unknown God
References
External links
Epistemology – from the Stanford Encyclopedia of Philosophy
Agnosticism – from the Dictionary of the History of Ideas
Agnosticism
Theism
Philosophy of religion | 0.780345 | 0.993012 | 0.774892 |
Deontology
In moral philosophy, deontological ethics or deontology (from Greek δέον, deon, 'obligation, duty', and λόγος, logos, 'study') is the normative ethical theory that the morality of an action should be based on whether that action itself is right or wrong under a series of rules and principles, rather than based on the consequences of the action. It is sometimes described as duty-, obligation-, or rule-based ethics. Deontological ethics is commonly contrasted to consequentialism, utilitarianism, virtue ethics, and pragmatic ethics. In this terminology, action is more important than the consequences.
The term deontological was first used to describe the current, specialised definition by C. D. Broad in his 1930 book, Five Types of Ethical Theory. Older usage of the term goes back to Jeremy Bentham, who coined it prior to 1816 as a synonym of dicastic or censorial ethics (i.e., ethics based on judgement). The more general sense of the word is retained in French, especially in the term code de déontologie (ethical code), in the context of professional ethics.
Depending on the system of deontological ethics under consideration, a moral obligation may arise from an external or internal source, such as a set of rules inherent to the universe (ethical naturalism), religious law, or a set of personal or cultural values (any of which may be in conflict with personal desires).
Deontological philosophies
There are numerous formulations of deontological ethics.
Kantianism
Immanuel Kant's theory of ethics is considered deontological for several different reasons. First, Kant argues that in order to act in the morally right way, people must act from duty (Pflicht). Second, Kant argued that it was not the consequences of actions that make them right or wrong, but the motives of the person who carries out the action.
Kant's first argument begins with the premise that the highest good must be both good in itself and good without qualification. Something is "good in itself" when it is intrinsically good; and is "good without qualification" when the addition of that thing never makes a situation ethically worse. Kant then argues that those things that are usually thought to be good, such as intelligence, perseverance, and pleasure, fail to be either intrinsically good or good without qualification. Pleasure, for example, appears not to be good without qualification, because when people take pleasure in watching someone suffer, this seems to make the situation ethically worse. He concludes that there is only one thing that is truly good: a good will.
Kant then argues that the consequences of an act of willing cannot be used to determine that the person has a good will; good consequences could arise by accident from an action that was motivated by a desire to cause harm to an innocent person, and bad consequences could arise from an action that was well-motivated. Instead, he claims, a person has a good will when they "act out of respect for the moral law." People "act out of respect for the moral law" when they act in some way because they have a duty to do so. Thus, the only thing that is truly good in itself is a good will, and a good will is only good when the willer chooses to do something because it is that person's duty; i.e., out of respect for the law. He defines respect as "the concept of a worth which thwarts my self-love."
Kant's three significant formulations of the categorical imperative (a way of evaluating motivations for action) are:
Act only according to that maxim by which you can also will that it would become a universal law;
Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end;
Every rational being must so act as if he were through his maxim always a legislating member in a universal kingdom of ends.
Kant argued that the only absolutely good thing is a good will, and so the single determining factor of whether an action is morally right is the will, or motive of the person doing it. If they are acting on a bad maxim—e.g., 'I will lie'—then their action is wrong, even if some good consequences come of it.
In his essay "On a Supposed Right to Lie Because of Philanthropic Concerns", arguing against the position of Benjamin Constant in Des réactions politiques, Kant states:

Hence a lie defined merely as an intentionally untruthful declaration to another man does not require the additional condition that it must do harm to another, as jurists require in their definition (mendacium est falsiloquium in praeiudicium alterius). For a lie always harms another; if not some human being, then it nevertheless does harm to humanity in general, inasmuch as it vitiates the very source of right. … All practical principles of right must contain rigorous truth. … This is because such exceptions would destroy the universality on account of which alone they bear the name of principles.
Divine command theory
Although not all deontologists are religious, some believe in the divine command theory, which is actually a cluster of related theories that essentially state that an action is right if God has decreed that it is right. According to English philosopher Ralph Cudworth, William of Ockham, René Descartes, and 18th-century Calvinists all accepted various versions of this moral theory, as they all held that moral obligations arise from God's commands.
The divine command theory is a form of deontology because, according to it, the rightness of any action depends upon that action being performed because it is a duty, not because of any good consequences arising from that action. If God commands people not to work on Sabbath, then people act rightly if they do not work on Sabbath because God has commanded that they do not do so. If they do not work on Sabbath because they are lazy, then their action is not, truly speaking, "right" even though the actual physical action performed is the same. If God commands not to covet a neighbour's goods, this theory holds that it would be immoral to do so, even if coveting provides the beneficial outcome of a drive to succeed or do well.
One thing that clearly distinguishes Kantian deontology from divine command deontology is that Kantianism maintains that man, as a rational being, makes the moral law universal, whereas divine command theory maintains that God makes the moral law universal.
Ross's deontological pluralism
W. D. Ross objects to Kant's monistic deontology, which bases ethics on only one foundational principle, the categorical imperative. He contends that there is a plurality of prima facie duties (seven, although this number varies with interpretation) determining what is right.
These duties are identified by W. D. Ross:
the duty of fidelity (to keep promises and to tell the truth)
the duty of reparation (to make amends for wrongful acts)
the duty of gratitude (to return kindnesses received)
the duty of non-injury (not to hurt others)
the duty of beneficence (to promote the maximum of aggregate good)
the duty of self-improvement (to improve one's own condition)
the duty of justice (to distribute benefits and burdens equably).
One problem the deontological pluralist has to face is that cases can arise where the demands of one duty violate another duty, so-called moral dilemmas. For example, there are cases where it is necessary to break a promise in order to relieve someone's distress. Ross makes use of the distinction between prima facie duties and absolute duty to solve this problem. The duties listed above are prima facie duties (moral actions that are required unless a greater obligation trumps them); they are general principles whose validity is self-evident to morally mature persons. They are factors that do not take all considerations into account. Absolute duty, on the other hand, is particular to one specific situation, taking everything into account, and has to be judged on a case-by-case basis. It is absolute duty that determines which acts are right or wrong.
Contemporary deontology
Contemporary deontologists (i.e., scholars born in the first half of the 20th century) include Józef Maria Bocheński, Thomas Nagel, T. M. Scanlon, and Roger Scruton.
Bocheński (1965) makes a distinction between deontic and epistemic authority:
A typical example of epistemic authority in Bocheński's usage would be "the relation of a teacher to her students." A teacher has epistemic authority when making declarative sentences that the student presumes to be reliable knowledge and appropriate, but feels no obligation to accept or obey.
An example of deontic authority would be "the relation between an employer and her employee." An employer has deontic authority in the act of issuing an order that the employee is obliged to accept and obey regardless of its reliability or appropriateness.
Scruton (2017), in his book On Human Nature, is critical of consequentialism and similar ethical theories, such as hedonism and utilitarianism, instead proposing a deontological ethical approach. He implies that proportional duty and obligation are essential components of the ways in which we decide to act, and he defends natural law against opposing theories. He also expresses admiration for virtue ethics, and believes that the two ethical theories are not, as is frequently portrayed, mutually exclusive.
Deontology and consequentialism
Principle of permissible harm
Frances Kamm's "Principle of Permissible Harm" (1996) is an effort to derive a deontological constraint that coheres with our considered case judgments while also relying heavily on Kant's categorical imperative. The principle states that one may harm in order to save more if and only if the harm is an effect or an aspect of the greater good itself. This principle is meant to address what Kamm feels are most people's considered case judgments, many of which involve deontological intuitions. For instance, Kamm argues that we believe it would be impermissible to kill one person to harvest his organs in order to save the lives of five others. Yet, we think it is morally permissible to divert a runaway trolley that would otherwise kill five innocent, immobile people, onto a sidetrack where only one innocent and immobile person will be killed. Kamm believes the Principle of Permissible Harm explains the moral difference between these and other cases, and more importantly expresses a constraint telling us exactly when we may not act to bring about good ends—such as in the organ harvesting case.
In 2007, Kamm published Intricate Ethics, a book that presents a new theory, the "Doctrine of Productive Purity", that incorporates aspects of her "Principle of Permissible Harm". Like the "Principle", the "Doctrine of Productive Purity" is an attempt to provide a deontological prescription for determining the circumstances in which people are permitted to act in a way that harms others.
Reconciling deontology with consequentialism
Various attempts have been made to reconcile deontology with consequentialism. Threshold deontology holds that rules ought to govern up to a point despite adverse consequences; but when the consequences become so dire that they cross a stipulated threshold, consequentialism takes over. Theories put forth by Thomas Nagel and Michael S. Moore attempt to reconcile deontology with consequentialism by assigning each a jurisdiction. Iain King's 2008 book How to Make Good Decisions and Be Right All the Time uses quasi-realism and a modified form of utilitarianism to develop deontological principles that are compatible with ethics based on virtues and consequences. King develops a hierarchy of principles to link his meta-ethics, which is more inclined towards consequentialism, with the deontological conclusions he presents in his book.
Secular deontology
Intuition-based deontology is a concept within secular ethics. A classical example of literature on secular ethics is the Kural text, authored by the ancient Tamil Indian philosopher Valluvar. It can be argued that some concepts from deontological ethics date back to this text. Concerning ethical intuitionism, the 20th-century philosopher C. D. Broad coined the term "deontological ethics" to refer to the normative doctrines associated with intuitionism, leaving the phrase "ethical intuitionism" free to refer to the epistemological doctrines.
References
Bibliography
Beauchamp, Tom L. 1991. Philosophical Ethics: An Introduction to Moral Philosophy (2nd ed.) New York: McGraw Hill.
Broad, C. D. 1930. Five Types of Ethical Theory. New York: Harcourt, Brace and Co.
Flew, Antony. 1979. "Consequentialism." In A Dictionary of Philosophy (2nd ed.). New York: St. Martin's.
Kamm, Frances M. 1996. Morality, Mortality Vol. II: Rights, Duties, and Status. New York: Oxford University Press.
—— 2007. Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford: Oxford University Press.
Brehaux, Karine. 2011. Législation, éthique et déontologie. Brussels: Editions de Boeck Université.
Olson, Robert G. 1967. "Deontological Ethics." In The Encyclopedia of Philosophy, edited by P. Edwards. London: Collier Macmillan.
Ross, W. D. 1930. The Right and the Good. Oxford: Clarendon Press.
Salzman, Todd A. 1995. Deontology and Teleology: An Investigation of the Normative Debate in Roman Catholic Moral Theology. University Press.
Waller, Bruce N. 2005. Consider Ethics: Theory, Readings, and Contemporary Issues. New York: Pearson Longman.
Wierenga, Edward. 1983. "A Defensible Divine Command Theory." Noûs 17(3):387–407.
External links
Kantian Ethics – Summary: a concise summary of the key details of Kant's deontology
Freedom and the Boundary of Morals, Lecture 22 from Stephen Palmquist's book, The Tree of Philosophy (fourth edition, 2000).
Deontology and Ethical Ends
Stanford Encyclopedia of Philosophy entry on Special Obligations
Morality
Normative ethics
Ethical theories | 0.775605 | 0.999022 | 0.774846 |
Process philosophy | Process philosophy, also ontology of becoming, or processism, is an approach in philosophy that identifies processes, changes, or shifting relationships as the only real experience of everyday living. In opposition to the classical view of change as illusory (as argued by Parmenides) or accidental (as argued by Aristotle), process philosophy posits transient occasions of change or becoming as the only fundamental things of the ordinary everyday real world.
Since the time of Plato and Aristotle, classical ontology has posited ordinary world reality as constituted of enduring substances, to which transient processes are ontologically subordinate, if they are not denied. If Socrates changes, becoming sick, Socrates is still the same (the substance of Socrates being the same), and change (his sickness) only glides over his substance: change is accidental, and devoid of primary reality, whereas the substance is essential.
In physics, Ilya Prigogine distinguishes between the "physics of being" and the "physics of becoming". Process philosophy covers not just scientific intuitions and experiences, but can be used as a conceptual bridge to facilitate discussions among religion, philosophy, and science.
Process philosophy is sometimes classified as closer to continental philosophy than analytic philosophy, because it is usually only taught in continental philosophy departments. However, other sources state that process philosophy should be placed somewhere in the middle between the poles of analytic versus continental methods in contemporary philosophy.
History
In ancient Greek thought
Heraclitus proclaimed that the basic nature of all things is change.
The quotation from Heraclitus appears in Plato's Cratylus twice; in 401d as:
Ta onta ienai te panta kai menein ouden
"All entities move and nothing remains still"
and in 402a as:
Panta chōrei kai ouden menei kai dis es ton auton potamon ouk an embaies
"Everything changes and nothing remains still ... and ... you cannot step twice into the same stream"
Heraclitus considered fire as the most fundamental element.
"All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods."
The following is an interpretation of Heraclitus's concepts into modern terms by Nicholas Rescher.
"...reality is not a constellation of things at all, but one of processes. The fundamental 'stuff' of the world is not material substance, but volatile flux, namely 'fire', and all things are versions thereof (puros tropai). Process is fundamental: the river is not an object, but a continuing flow; the sun is not a thing, but an enduring fire. Everything is a matter of process, of activity, of change (panta rhei)."
An early expression of this viewpoint is in Heraclitus's fragments. He posits ἡ ἔρις (strife, conflict) as the underlying basis of all reality, which is defined by change. The balance and opposition in strife were the foundations of change and stability in the flux of existence.
Similarly, the philosopher Empedocles, who proposed the four elements (earth, air, water, fire), sees all of these as subject to an eternal flux between the two oscillating forces of Love (or attraction) and Strife (repulsion).
Nietzsche and Kierkegaard
In his written works, Friedrich Nietzsche proposed what has been regarded as a philosophy of becoming that encompasses a "naturalistic doctrine intended to counter the metaphysical preoccupation with being", and a theory of "the incessant shift of perspectives and interpretations in a world that lacks a grounding essence".
Søren Kierkegaard posed questions of individual becoming in Christianity which were opposed to the ancient Greek philosophers' focus on the indifferent becoming of the cosmos. However, he established as much of a focus on aporia as Heraclitus and others previously had, such as in his concept of the leap of faith which marks an individual becoming. As well as this, Kierkegaard opposed his philosophy to Hegel's system of philosophy approaching becoming and difference for what he saw as a "dialectical conflation of becoming and rationality", making the system take on the same trait of motionlessness as Parmenides' system.
Twentieth century
In the early twentieth century, the philosophy of mathematics was undertaken to develop mathematics as an airtight, axiomatic system in which every truth could be derived logically from a set of axioms. In the foundations of mathematics, this project is variously understood as logicism or as part of the formalist program of David Hilbert. Alfred North Whitehead and Bertrand Russell attempted to complete, or at least facilitate, this program with their seminal book Principia Mathematica, which purported to build a logically consistent set theory on which to found mathematics. After this, Whitehead extended his interest to natural science, which he held needed a deeper philosophical basis. He intuited that natural science was struggling to overcome a traditional ontology of timeless material substances that does not suit natural phenomena. According to Whitehead, material is more properly understood as 'process'.
Whitehead's Process and Reality
Alfred North Whitehead began teaching and writing on process and metaphysics when he joined Harvard University in 1924.
In his book Science and the Modern World (1925), Whitehead noted that the human intuitions and experiences of science, aesthetics, ethics, and religion influence the worldview of a community, but that in the last several centuries science dominates Western culture. Whitehead sought a holistic, comprehensive cosmology that provides a systematic descriptive theory of the world which can be used for the diverse human intuitions gained through ethical, aesthetic, religious, and scientific experiences, and not just the scientific.
In 1929, Whitehead produced the most famous work of process philosophy, Process and Reality, continuing the work begun by Hegel but describing a more complex and fluid dynamic ontology.
Process thought describes truth as "movement" in and through substance (Hegelian truth), rather than substances as fixed concepts or "things" (Aristotelian truth). Since Whitehead, process thought is distinguished from Hegel in that it describes entities that arise or coalesce in becoming, rather than being simply dialectically determined from prior posited determinates. These entities are referred to as complexes of occasions of experience. It is also distinguished in being not necessarily conflictual or oppositional in operation. Process may be integrative, destructive or both together, allowing for aspects of interdependence, influence, and confluence, and addressing coherence in universal as well as particular developments, i.e., those aspects not befitting Hegel's system. Additionally, instances of determinate occasions of experience, while always ephemeral, are nonetheless seen as important to define the type and continuity of those occasions of experience that flow from or relate to them.
Whitehead's influences were not restricted to philosophers or physicists or mathematicians. He was influenced by the French philosopher Henri Bergson (1859–1941), whom he credits along with William James and John Dewey in the preface to Process and Reality.
Process metaphysics
For Whitehead, metaphysics is about logical frameworks for the conduct of discussions of the character of the world. It is not directly and immediately about facts of nature, but only indirectly so, in that its task is to explicitly formulate the language and conceptual presuppositions that are used to describe the facts of nature. Whitehead thinks that discovery of previously unknown facts of nature can in principle call for reconstruction of metaphysics.
The process metaphysics elaborated in Process and Reality posits an ontology which is based on the two kinds of existence of an entity, that of actual entity and that of abstract entity or abstraction, also called 'object'.
Actual entity is a term coined by Whitehead to refer to the entities that really exist in the natural world. For Whitehead, actual entities are spatiotemporally extended events or processes. An actual entity is how something is happening, and how its happening is related to other actual entities. The actually existing world is a multiplicity of actual entities overlapping one another.
The ultimate abstract principle of actual existence for Whitehead is creativity. Creativity is a term coined by Whitehead to show a power in the world that allows the presence of an actual entity, a new actual entity, and multiple actual entities. Creativity is the principle of novelty. It is manifest in what can be called 'singular causality'. This term may be contrasted with the term 'nomic causality'. An example of singular causation is that I woke this morning because my alarm clock rang. An example of nomic causation is that alarm clocks generally wake people in the morning. Aristotle recognizes singular causality as efficient causality. For Whitehead, there are many contributory singular causes for an event. A further contributory singular cause of my being awoken by my alarm clock this morning was that I was lying asleep near it till it rang.
An actual entity is a general philosophical term for an utterly determinate and completely concrete individual particular of the actually existing world or universe of changeable entities considered in terms of singular causality, about which categorical statements can be made. Whitehead's most far-reaching and radical contribution to metaphysics is his invention of a better way of choosing the actual entities. Whitehead chooses a way of defining the actual entities that makes them all alike, qua actual entities, with a single exception.
For example, for Aristotle, the actual entities were the substances, such as Socrates. Besides Aristotle's ontology of substances, another example of an ontology that posits actual entities is in the monads of Leibniz, which are said to be 'windowless'.
Whitehead's actual entities
For Whitehead's ontology of processes as defining the world, the actual entities exist as the only fundamental elements of reality.
The actual entities are of two kinds, temporal and atemporal.
With one exception, all actual entities for Whitehead are temporal and are occasions of experience (which are not to be confused with consciousness). An entity that people commonly think of as a simple concrete object, or that Aristotle would think of as a substance, is, in this ontology, considered to be a temporally serial composite of indefinitely many overlapping occasions of experience. A human being is thus composed of indefinitely many occasions of experience.
The one exceptional actual entity is at once both temporal and atemporal: God. He is objectively immortal, as well as being immanent in the world. He is objectified in each temporal actual entity; but He is not an eternal object.
The occasions of experience are of four grades. The first grade comprises processes in a physical vacuum such as the propagation of an electromagnetic wave or gravitational influence across empty space. The occasions of experience of the second grade involve just inanimate matter; "matter" being the composite overlapping of occasions of experience from the previous grade. The occasions of experience of the third grade involve living organisms. Occasions of experience of the fourth grade involve experience in the mode of presentational immediacy, which means more or less what are often called the qualia of subjective experience. So far as we know, experience in the mode of presentational immediacy occurs only in more evolved animals. That some occasions of experience involve experience in the mode of presentational immediacy is the one and only reason why Whitehead makes the occasions of experience his actual entities; for the actual entities must be of the ultimately general kind. Consequently, it is inessential that an occasion of experience have an aspect in the mode of presentational immediacy; occasions of grades one, two, and three lack that aspect.
There is no mind-matter duality in this ontology, because "mind" is simply seen as an abstraction from an occasion of experience which has also a material aspect, which is of course simply another abstraction from it; thus the mental aspect and the material aspect are abstractions from one and the same concrete occasion of experience. The brain is part of the body, both being abstractions of a kind known as persistent physical objects, neither being actual entities. Though not recognized by Aristotle, there is biological evidence, written about by Galen, that the human brain is an essential seat of human experience in the mode of presentational immediacy. We may say that the brain has a material and a mental aspect, all three being abstractions from their indefinitely many constitutive occasions of experience, which are actual entities.
Time, causality, and process
Inherent in each actual entity is its respective dimension of time. Potentially, each Whiteheadean occasion of experience is causally consequential on every other occasion of experience that precedes it in time, and has as its causal consequences every other occasion of experience that follows it in time; thus it has been said that Whitehead's occasions of experience are 'all window', in contrast to Leibniz's 'windowless' monads. In time defined relative to it, each occasion of experience is causally influenced by prior occasions of experiences, and causally influences future occasions of experience. An occasion of experience consists of a process of prehending other occasions of experience, reacting to them. This is the process in process philosophy.
Such a process is never deterministic. Consequently, free will is essential and inherent to the universe.
The causal outcomes obey the usual well-respected rule that the causes precede the effects in time. Some pairs of processes cannot be connected by cause-and-effect relations, and they are said to be spatially separated. This is in perfect agreement with the viewpoint of the Einstein theory of special relativity and with the Minkowski geometry of spacetime. It is clear that Whitehead respected these ideas, as may be seen for example in his 1919 book An Enquiry concerning the Principles of Natural Knowledge as well as in Process and Reality. In this view, time is relative to an inertial reference frame, different reference frames defining different versions of time.
Atomicity
The actual entities, the occasions of experience, are logically atomic in the sense that an occasion of experience cannot be cut and separated into two other occasions of experience. This kind of logical atomicity is perfectly compatible with indefinitely many spatio-temporal overlaps of occasions of experience. One can explain this kind of atomicity by saying that an occasion of experience has an internal causal structure that could not be reproduced in each of the two complementary sections into which it might be cut. Nevertheless, an actual entity can completely contain each of indefinitely many other actual entities.
Another aspect of the atomicity of occasions of experience is that they do not change. An actual entity is what it is. An occasion of experience can be described as a process of change, but it is itself unchangeable.
The atomicity of the actual entities is of a simply logical or philosophical kind, thoroughly different in concept from the natural kind of atomicity that describes the atoms of physics and chemistry.
Topology
Whitehead's theory of extension was concerned with the spatio-temporal features of his occasions of experience. Fundamental to both Newtonian and to quantum theoretical mechanics is the concept of momentum. The measurement of a momentum requires a finite spatiotemporal extent. Because it has no finite spatiotemporal extent, a single point of Minkowski space cannot be an occasion of experience, but is an abstraction from an infinite set of overlapping or contained occasions of experience, as explained in Process and Reality. Though the occasions of experience are atomic, they are not necessarily separate in extension, spatiotemporally, from one another. Indefinitely many occasions of experience can overlap in Minkowski space.
Nexus is a term coined by Whitehead to denote a network of actual entities within the universe. Actual entities are spread throughout the universe of actual entities; they impinge upon one another and thereby form new actual entities. The actual entities surrounding the birth of a new actual entity, on which that entity is based, are referred to as its nexus.
An example of a nexus of temporally overlapping occasions of experience is what Whitehead calls an enduring physical object, which corresponds closely with an Aristotelian substance. An enduring physical object has a temporally earliest and a temporally last member. Every member (apart from the earliest) of such a nexus is a causal consequence of the earliest member of the nexus, and every member (apart from the last) of such a nexus is a causal antecedent of the last member of the nexus. There are indefinitely many other causal antecedents and consequences of the enduring physical object, which overlap, but are not members, of the nexus. No member of the nexus is spatially separate from any other member. Within the nexus are indefinitely many continuous streams of overlapping nexūs, each stream including the earliest and the last member of the enduring physical object. Thus an enduring physical object, like an Aristotelian substance, undergoes changes and adventures during the course of its existence.
In some contexts, especially in the theory of relativity in physics, the word 'event' refers to a single point in Minkowski or in Riemannian space-time. A point event is not a process in the sense of Whitehead's metaphysics. Neither is a countable sequence or array of points. A Whiteheadian process is most importantly characterized by extension in space-time, marked by a continuum of uncountably many points in a Minkowski or a Riemannian space-time. The word 'event', indicating a Whiteheadian actual entity, is not being used in the sense of a point event.
Whitehead's abstractions
Whitehead's abstractions are conceptual entities that are abstracted from or derived from and founded upon his actual entities. Abstractions are themselves not actual entities. They are the only entities that can be real but are not actual entities. This statement is one form of Whitehead's 'ontological principle'.
An abstraction is a conceptual entity that refers to more than one single actual entity. Whitehead's ontology refers to importantly structured collections of actual entities as nexuses of actual entities. Collection of actual entities into a nexus emphasizes some aspect of those entities, and that emphasis is an abstraction, because it means that some aspects of the actual entities are emphasized or dragged away from their actuality, while other aspects are de-emphasized or left out or left behind.
'Eternal object' is a term coined by Whitehead. It is an abstraction, a possibility, or pure potential. It can enter as an ingredient into some actual entity. It is a principle that can give a particular form to an actual entity.
Whitehead admitted indefinitely many eternal objects. An example of an eternal object is a number, such as the number 'two'. Whitehead held that eternal objects are abstractions of a very high degree of abstraction. Many abstractions, including eternal objects, are potential ingredients of processes.
Relation between actual entities and abstractions stated in the ontological principle
For Whitehead, besides its temporal generation by the actual entities which are its contributory causes, a process may be considered as a concrescence of abstract ingredient eternal objects. God enters into every temporal actual entity.
Whitehead's ontological principle is that whatever reality pertains to an abstraction is derived from the actual entities upon which it is founded or of which it is comprised.
Causation and concrescence of a process
Concrescence is a term coined by Whitehead to denote the process by which an actual entity is jointly formed: something that was without form comes to manifest itself as a full actual entity (its 'satisfaction') on the basis of data, or information about the universe. The formation of an actual entity thus proceeds from the data already given. The process of concrescence can be regarded as a process of subjectification.
Datum is a term Whitehead used to denote the variants of information possessed by an actual entity. In process philosophy, data are obtained through the events of concrescence. Every actual entity has a variety of data.
Commentary on Whitehead and on process philosophy
Whitehead is not an idealist in the strict sense. Whitehead's thought may be regarded as related to the idea of panpsychism (also known as panexperientialism, because of Whitehead's emphasis on experience).
On God
Whitehead's philosophy is complex, subtle, and nuanced regarding the concept of "God". In Process and Reality Corrected Edition (1978), the editors elaborate Whitehead's conception of "God":
He is the unconditioned actuality of conceptual feeling at the base of things; so that by reason of this primordial actuality, there is an order in the relevance of eternal objects to the process of creation. [...] The particularities of the actual world presuppose it; while it merely presupposes the general metaphysical character of creative advance, of which it is the primordial exemplification. [emphasis in original]
Process philosophy might be considered, according to some theistic forms of religion, to give God a special place in the universe of occasions of experience. Regarding Whitehead's use of the term "occasions" in reference to "God", Process and Reality Corrected Edition explains:
'Actual entities' - also termed 'actual occasions' - are the final real things of which the world is made up. There is no going behind actual entities to find anything more real. They differ among themselves: God is an actual entity, and so is the most trivial puff of existence in far-off empty space. But, though there are gradations of importance, and diversities of function, yet in the principles which actuality exemplifies all are on the same level. The final facts are, all alike, actual entities; and these actual entities are drops of experience, complex and interdependent.
Within some forms of theology it can also be assumed that God encompasses all the other occasions of experience while also transcending them, and this might lead to the argument that Whitehead endorses some form of panentheism. Since "free will", it is argued theologically, is inherent to the nature of the universe, Whitehead's God is not omnipotent in Whitehead's metaphysics. God's role is to offer enhanced occasions of experience. God participates in the evolution of the universe by offering possibilities, which may be accepted or rejected. Whitehead's thinking here has given rise to process theology, whose prominent advocates include Charles Hartshorne, John B. Cobb, Jr., and Hans Jonas, who was also influenced by the non-theological philosopher Martin Heidegger. However, other process philosophers have questioned Whitehead's theology, seeing it as a regressive Platonism.
Whitehead enumerated three essential natures of God. The primordial nature of God consists of all potentialities of existence for actual occasions, which Whitehead dubbed eternal objects. God can offer possibilities by ordering the relevance of eternal objects. The consequent nature of God prehends everything that happens in reality. As such, God experiences all of reality in a sentient manner. The last nature is the superjective. This is the way in which God's synthesis becomes a sense-datum for other actual entities. In some sense, God is prehended by existing actual entities.
Legacy and applications
Biology
In plant morphology, Rolf Sattler developed a process morphology (dynamic morphology) that overcomes the structure/process (or structure/function) dualism that is commonly taken for granted in biology. According to process morphology, structures such as the leaves of plants do not have processes; they are processes.
In evolution and in development, the nature of the changes of biological objects is considered by many authors to be more radical than in physical systems. In biology, changes are not just changes of state in a pre-given space; instead, the space and, more generally, the mathematical structures required to understand the objects themselves change over time.
Ecology
With its perspective that everything is interconnected, that all life has value, and that non-human entities are also experiencing subjects, process philosophy has played an important role in discourse on ecology and sustainability. The first book to connect process philosophy with environmental ethics was John B. Cobb, Jr.'s 1971 work, Is It Too Late? A Theology of Ecology. In a more recent book (2018) edited by John B. Cobb, Jr. and Wm. Andrew Schwartz, Putting Philosophy to Work: Toward an Ecological Civilization, contributors explicitly explore the ways in which process philosophy can be put to work to address the most urgent issues facing our world today, by contributing to a transition toward an ecological civilization. That book emerged from the largest international conference held on the theme of ecological civilization (Seizing an Alternative: Toward an Ecological Civilization), which was organized by the Center for Process Studies in June 2015. The conference brought together roughly 2,000 participants from around the world and featured such leaders in the environmental movement as Bill McKibben, Vandana Shiva, John B. Cobb, Jr., Wes Jackson, and Sheri Liao. The notion of ecological civilization is often affiliated with the process philosophy of Alfred North Whitehead, especially in China.
Mathematics
In the philosophy of mathematics, some of Whitehead's ideas re-emerged in combination with cognitivism as the cognitive science of mathematics and embodied mind theses.
Somewhat earlier, exploration of mathematical practice and quasi-empiricism in mathematics from the 1950s to 1980s had sought alternatives to metamathematics in social behaviours around mathematics itself: for instance, Paul Erdős's simultaneous belief in Platonism and a single "big book" in which all proofs existed, combined with his personal obsessive need or decision to collaborate with the widest possible number of other mathematicians. The process, rather than the outcomes, seemed to drive his explicit behaviour and odd use of language, as if the synthesis of Erdős and collaborators in seeking proofs, creating sense-datum for other mathematicians, was itself the expression of a divine will. Certainly, Erdős behaved as if nothing else in the world mattered, including money or love, as emphasized in his biography The Man Who Loved Only Numbers.
Medicine
Several fields of science and especially medicine seem to make liberal use of ideas in process philosophy, notably the theory of pain and healing of the late 20th century. The philosophy of medicine began to deviate somewhat from scientific method and an emphasis on repeatable results in the very late 20th century by embracing population thinking, and a more pragmatic approach to issues in public health, environmental health, and especially mental health. In this latter field, R. D. Laing, Thomas Szasz, and Michel Foucault were instrumental in moving medicine away from emphasis on "cures" and towards concepts of individuals in balance with their society, both of which are changing, and against which no benchmarks or finished "cures" were very likely to be measurable.
Psychology
In psychology, the subject of imagination has been explored more extensively since Whitehead, and the question of the feasibility of "eternal objects" of thought became central to the impaired theory-of-mind explorations that framed postmodern cognitive science. A biological understanding of the most eternal object, namely the emergence of similar but independent cognitive apparatus, led to an obsession with the process of "embodiment", that is, with the emergence of these cognitions. Like Whitehead's God, especially as elaborated in J. J. Gibson's perceptual psychology emphasizing affordances, the world becomes by ordering the relevance of eternal objects (especially the cognitions of other such actors). Or, it becomes simple enough for human beings to begin to make choices, and to prehend what happens as a result. These experiences may be summed in some sense but can only be shared approximately, even among very similar cognitions with identical DNA. An early explorer of this view was Alan Turing, who sought to prove the limits of expressive complexity of human genes in the late 1940s, in order to put bounds on the complexity of human intelligence and so assess the feasibility of artificial intelligence emerging. Since 2000, process psychology has progressed as an independent academic and therapeutic discipline: in 2000, Michel Weber created the Whitehead Psychology Nexus, an open forum dedicated to the cross-examination of Alfred North Whitehead's process philosophy and the various facets of the contemporary psychological field.
Philosophy of movement
The philosophy of movement is a sub-area within process philosophy that treats processes as movements. It studies processes as flows, folds, and fields in historical patterns of centripetal, centrifugal, tensional, and elastic motion. See Thomas Nail's philosophy of movement and process materialism.
See also
Concepts
Actual idealism
Anicca, the Buddhist doctrine that all is "transient, evanescent, inconstant"
Panta rhei, Heraclitus's concept that "everything flows"
Dialectic
Dialectical monism
Elisionism
Holomovement
Pancreativism
Salishan languages § Nounlessness
Speculative realism
People
John B. Cobb
David Ray Griffin
Arthur Peacocke
Michel Weber
Arran Gare
Joseph A. Bracken
Milič Čapek
Wilmon Henry Sheldon
Thomas Nail
Iain McGilchrist
Eugene Gendlin
Rein Raud
Charles Hartshorne
References
External links
Academia pages of the Center for Philosophical Practice.
Whitehead Research Project
Process and Reality. Part V. Final Interpretation
Wolfgang Sohst: Prozessontologie. Ein systematischer Entwurf der Entstehung von Existenz (Berlin 2009)
Critique of a Metaphysics of Process (Antwerp 2012)
Scientific realism | Scientific realism is the view that the universe described by science is real regardless of how it may be interpreted. A believer of scientific realism takes the universe as described by science to be true (or approximately true), because of their assertion that science can be used to find the truth (or approximate truth) about both the physical and metaphysical in the Universe.
Within philosophy of science, this view is often an answer to the question "how is the success of science to be explained?" The discussion on the success of science in this context centers primarily on the status of unobservable entities apparently talked about by scientific theories. Generally, those who are scientific realists assert that one can make valid claims about unobservables (viz., that they have the same ontological status as observables), as opposed to instrumentalism.
Main features
Scientific realism involves two basic positions:
First, it is a set of claims about the features of an ideal scientific theory; an ideal theory is the sort of theory science aims to produce.
Second, it is the commitment that science will eventually produce theories very much like an ideal theory and that science has done pretty well thus far in some domains. One might be a scientific realist regarding some sciences while not being a realist regarding others.
According to scientific realism, an ideal scientific theory has the following features:
The claims the theory makes are either true or false, depending on whether the entities talked about by the theory exist and are correctly described by the theory. This is the semantic commitment of scientific realism.
The entities described by the scientific theory exist objectively and mind-independently. This is the metaphysical commitment of scientific realism.
There are reasons to believe some significant portion of what the theory says. This is the epistemological commitment.
Combining the first and the second claim entails that an ideal scientific theory says definite things about genuinely existing entities. The third claim says that we have reasons to believe that many, though not all, scientific claims about these entities are true.
Scientific realism usually holds that science makes progress, i.e. scientific theories usually get successively better, or, rather, answer more and more questions. For this reason, many philosophers, scientific realists or otherwise, hold that realism should make sense of the progress of science in terms of theories being successively more like the ideal theory that scientific realists describe.
Characteristic claims
The following claims are typical of those held by scientific realists. Due to the wide disagreements over the nature of science's success and the role of realism in its success, a scientific realist would agree with some but not all of the following positions.
The best scientific theories are at least partially true.
The best theories do not employ central terms that are non-referring expressions.
To say that a theory is approximately true is sufficient explanation of the degree of its predictive success.
The approximate truth of a theory is the only explanation of its predictive success.
Even if a theory employs expressions that do not have a reference, a scientific theory may be approximately true.
Scientific theories are in a historical process of progress towards a true account of the physical world.
Scientific theories make genuine, existential claims.
Theoretical claims of scientific theories should be read literally and are definitively either true or false.
The degree of the predictive success of a theory is evidence of the referential success of its central terms.
The goal of science is an account of the physical world that is literally true. Science has been successful because this is the goal that it has been making progress towards.
History
Scientific realism is related to much older philosophical positions including rationalism and metaphysical realism. However, it is a thesis about science developed in the twentieth century. Portraying scientific realism in terms of its ancient, medieval, and early modern cousins is at best misleading.
Scientific realism is developed largely as a reaction to logical positivism. Logical positivism was the first philosophy of science in the twentieth century and the forerunner of scientific realism, holding that a sharp distinction can be drawn between theoretical terms and observational terms, the latter capable of semantic analysis in observational and logical terms.
Logical positivism encountered difficulties with:
The verificationist theory of meaning—see Hempel (1950).
Troubles with the analytic-synthetic distinction—see Quine (1951).
The theory-ladenness of observation—see Hanson (1958), Kuhn (1970), and Quine (1960).
Difficulties moving from the observationality of terms to observationality of sentences—see Putnam (1962).
The vagueness of the observational-theoretical distinction—see G. Maxwell (1962).
These difficulties for logical positivism suggest, but do not entail, scientific realism, and led to the development of realism as a philosophy of science.
Realism became the dominant philosophy of science after positivism. Bas van Fraassen in his book The Scientific Image (1980) developed constructive empiricism as an alternative to realism. He argues against scientific realism that scientific theories do not aim for truth about unobservable entities. Responses to van Fraassen have sharpened realist positions and led to some revisions of scientific realism.
Arguments for and against scientific realism
No miracles argument
One of the main arguments for scientific realism centers on the notion that scientific knowledge is progressive in nature, and that it is able to predict phenomena successfully. Many scientific realists (e.g., Ernan McMullin, Richard Boyd) think the operational success of a theory lends credence to the idea that its more unobservable aspects exist, because those unobservables were what the theory used to derive its predictions. For example, a scientific realist would argue that science must derive some ontological support for atoms from the outstanding phenomenological success of all the theories using them.
Arguments for scientific realism often appeal to abductive reasoning or "inference to the best explanation" (Lipton, 2004). For instance, one argument commonly used—the "miracle argument" or "no miracles argument"—starts out by observing that scientific theories are highly successful in predicting and explaining a variety of phenomena, often with great accuracy. Thus, it is argued that the best explanation—the only explanation that renders the success of science to not be what Hilary Putnam calls "a miracle"—is the view that our scientific theories (or at least the best ones) provide true descriptions of the world, or approximately so.
Bas van Fraassen replies with an evolutionary analogy: "I claim that the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive—the ones which in fact latched on to actual regularities in nature." (The Scientific Image, 1980)
Some philosophers (e.g. Colin Howson) have argued that the no miracles argument commits the base rate fallacy.
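In schematic Bayesian terms (the notation below is illustrative rather than Howson's own), the worry is that the probability that a theory T is true given its predictive success S depends on the base rate of true theories among the candidates considered:

\[ \Pr(T \mid S) \;=\; \frac{\Pr(S \mid T)\,\Pr(T)}{\Pr(S \mid T)\,\Pr(T) + \Pr(S \mid \neg T)\,\Pr(\neg T)} \]

Even if success is far more likely under a true theory than under a false one, \(\Pr(T \mid S)\) can still be low when the prior \(\Pr(T)\) is small, so inferring (approximate) truth from success would require an independent handle on that base rate.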
Pessimistic induction
Pessimistic induction, one of the main arguments against realism, argues that the history of science contains many theories once regarded as empirically successful but which are now believed to be false. Additionally, the history of science contains many empirically successful theories whose unobservable terms are not believed to genuinely refer. For example, the effluvium theory of static electricity (a theory of the 16th-century physicist William Gilbert) is an empirically successful theory whose central unobservable terms have been replaced by later theories.
Realists reply that replacement of particular realist theories with better ones is to be expected due to the progressive nature of scientific knowledge, and when such replacements occur only superfluous unobservables are dropped. For example, Albert Einstein's theory of special relativity showed that the concept of the luminiferous ether could be dropped because it had contributed nothing to the success of the theories of mechanics and electromagnetism. On the other hand, when theory replacement occurs, a well-supported concept, such as the concept of atoms, is not dropped but is incorporated into the new theory in some form. These replies can lead scientific realists to structural realism.
Constructivist epistemology
Social constructivists might argue that scientific realism is unable to account for the rapid change that occurs in scientific knowledge during periods of scientific revolution. Constructivists may also argue that the success of theories is only a part of the construction.
However, these arguments ignore the fact that many scientists are not realists. During the development of quantum mechanics in the 1920s, the dominant philosophy of science was logical positivism. The alternative realist Bohm interpretation and many-worlds interpretation of quantum mechanics do not make such a revolutionary break with the concepts of classical physics.
Underdetermination problem
Another argument against scientific realism, deriving from the underdetermination problem, is not so historically motivated as these others. It claims that observational data can in principle be explained by multiple theories that are mutually incompatible. Realists might counter by saying that there have been few actual cases of underdetermination in the history of science. Usually the requirement of explaining the data is so exacting that scientists are lucky to find even one theory that fulfills it. Furthermore, if we take the underdetermination argument seriously, it implies that we can know about only what we have directly observed. For example, we could not theorize that dinosaurs once lived based on the fossil evidence because other theories (e.g., that the fossils are clever hoaxes) can account for the same data.
Incompatible models argument
According to the incompatible models argument, in certain cases the existence of diverse models for a single phenomenon can be taken as evidence of anti-realism. One example is due to Margaret Morrison, who worked to show that the shell model and the liquid-drop model give contradictory descriptions of the atomic nucleus, even though both models are predictive.
See also
Anti-realism
Constructivist epistemology
Critical realism (philosophy of perception)
Dialectical materialism
Instrumentalism
Musgrave's scientific realism
Naïve realism
Pessimistic induction
Scientific materialism
Scientific perspectivism
Social constructionism
Footnotes
Further reading
Boyd, R. N. (1988). "How to Be A Moral Realist", in G. Sayre-McCord, ed., Essays on Moral Realism, Cornell University Press, pp. 181–228.
Bunge, Mario. (2006). Chasing Reality: Strife over Realism. Toronto Studies in Philosophy: University of Toronto Press
Bunge, Mario. (2001). Scientific Realism: Selected Essays of Mario Bunge. Mahner, M. (Ed.) New York: Prometheus Books
Devitt, Michael, "Scientific realism". In: Oxford handbook of contemporary analytic philosophy (2005)
Hempel, Carl. (1950). "Empiricist Criteria of Cognitive Significance" in Boyd, Richard et al. eds. (1990). The Philosophy of Science. Cambridge: MIT Press.
Hunt, Shelby D. (2003). "Controversy in Marketing Theory: For Reason, Realism, Truth, and Objectivity." Armonk, NY: M.E. Sharpe, Inc.
Hunt Shelby D. (2011). "Theory Status, Inductive Realism, And Approximate Truth: No Miracles, No Charades." International Studies in the Philosophy of Science, 25(2), 159–178.
Kukla, A. (2000). Social constructivism and the philosophy of science. London: Routledge.
Kuhn, Thomas. (1970). The Structure of Scientific Revolutions, 2nd Edition. Chicago: University of Chicago Press.
Laudan, Larry. (1981). "A Confutation of Convergent Realism". Philosophy of Science.
Leplin, Jarrett. (1984). Scientific Realism. California: University of California Press.
Leplin, Jarrett. (1997). A Novel Defense of Scientific Realism. Oxford: Oxford University Press.
Lipton, Peter. (2004). Inference to the best explanation, 2nd edition. London: Routledge.
Maxwell, G. (1962). "The Ontological Status of Theoretical Entities" in H. Feigl and G. Maxwell Scientific Explanation, Space, and Time vol. 3, Minnesota Studies in the Philosophy of Science, 3-15.
Merritt, D. (2023). "Touching Reality". A critique of scientific realism in the context of cosmology.
Okasha, Samir. (2002). Philosophy of science: A very short introduction. Oxford: Oxford University Press. See especially chapter 4, "Realism and Anti-Realism."
Putnam, Hilary. (1962). "What Theories are Not" in Ernst Nagel et al. (1962). Logic, Methodology, and Philosophy of Science Stanford University Press.
Psillos, Stathis. (1999). Scientific realism: How science tracks truth. London: Routledge.
Quine, W.V.O. (1951). "Two Dogmas of Empiricism" in his (1953). From a Logical Point of View Cambridge: Harvard University Press.
Quine, W.V.O. (1960). Word and Object. Cambridge: MIT Press.
Sankey, H. (2001). "Scientific Realism: An Elaboration and a Defense" retrieved from http://philsci-archive.pitt.edu
External links
Scientific Realism. Stanford Encyclopedia of Philosophy
Environmentalism | Environmentalism or environmental rights is a broad philosophy, ideology, and social movement about supporting life, habitats, and surroundings. While environmentalism focuses more on the environmental and nature-related aspects of green ideology and politics, ecologism combines the ideology of social ecology and environmentalism. Ecologism is more commonly used in continental European languages, while environmentalism is more commonly used in English but the words have slightly different connotations.
Environmentalism advocates the preservation, restoration and improvement of the natural environment and critical earth system elements or processes such as the climate, and may be referred to as a movement to control pollution or protect plant and animal diversity. For this reason, concepts such as a land ethic, environmental ethics, biodiversity, ecology, and the biophilia hypothesis figure prominently. The environmentalist movement encompasses various approaches to addressing environmental issues, including free market environmentalism, evangelical environmentalism, and the environmental conservation movement.
At its crux, environmentalism is an attempt to balance relations between humans and the various natural systems on which they depend in such a way that all the components are accorded a proper degree of sustainability. The exact measures and outcomes of this balance are controversial and there are many different ways for environmental concerns to be expressed in practice. Environmentalism and environmental concerns are often represented by the colour green, but this association has been appropriated by the marketing industries for the tactic known as greenwashing.
Environmentalism is opposed by anti-environmentalism, which says that the Earth is less fragile than some environmentalists maintain, and portrays environmentalism as overreacting to the human contribution to climate change or opposing human advancement.
Definitions
Environmentalism denotes a social movement that seeks to influence the political process by lobbying, activism, and education in order to protect natural resources and ecosystems.
An environmentalist is a person who may speak out about our natural environment and the sustainable management of its resources through changes in public policy or individual behaviour. This may include supporting practices such as informed consumption, conservation initiatives, investment in renewable resources, improved efficiencies in the materials economy, transitioning to new accounting paradigms such as ecological economics, renewing and revitalizing our connections with non-human life or even opting to have one less child to reduce consumption and pressure on resources.
In various ways (for example, grassroots activism and protests), environmentalists and environmental organisations seek to give the natural world a stronger voice in human affairs.
In general terms, environmentalists advocate the sustainable management of resources, and the protection (and restoration, when necessary) of the natural environment through changes in public policy and individual behaviour. In its recognition of humanity as a participant in ecosystems, the movement is centered around ecology, health, and human rights.
History
A concern for environmental protection has recurred in diverse forms, in different parts of the world, throughout history.
The earliest ideas of environmental protectionism can be found in Jainism, a religion from ancient India revived by Mahavira in the 6th century BC. Jainism offers a view that is in many ways compatible with core values associated with environmental activism, such as the protection of life by nonviolence, which could form a strong ecological ethos for global protection of the environment. Mahavira's teachings on the symbiosis between all living beings—as well as the five elements of earth, water, air, fire, and space—are core to environmental thought today.
In West Asia, the Caliph Abu Bakr in the 630s AD commanded his army to "Bring no harm to the trees, nor burn them with fire," and to "Slay not any of the enemy's flock, save for your food." Various Islamic medical treatises during the 9th to 13th centuries dealt with environmentalism and environmental science, including the issue of pollution. The authors of such treatises included Al-Kindi, Qusta ibn Luqa, Al-Razi, Ibn Al-Jazzar, al-Tamimi, al-Masihi, Avicenna, Ali ibn Ridwan, Ibn Jumay, Isaac Israeli ben Solomon, Abd-el-latif, Ibn al-Quff, and Ibn al-Nafis. Their works covered a number of subjects related to pollution, such as air pollution, water pollution, soil contamination, and the mishandling of municipal solid waste. They also included assessments of certain localities' environmental impact.
In Europe, King Edward I of England banned the burning and sale of "sea-coal" in 1272 by proclamation in London, after its smoke had become a prevalent annoyance throughout the city. This fuel, common in London due to the local scarcity of wood, was given this early name because it could be found washed up on some shores, from where it was carted away on a wheelbarrow.
Early environmental legislation
The origins of the environmental movement lay in the response to increasing levels of smoke pollution in the atmosphere during the Industrial Revolution. The emergence of great factories and the concomitant immense growth in coal consumption gave rise to an unprecedented level of air pollution in industrial centers; after 1900 the large volume of industrial chemical discharges added to the growing load of untreated human waste. The first large-scale, modern environmental laws came in the form of Britain's Alkali Acts, passed in 1863, to regulate the deleterious air pollution (gaseous hydrochloric acid) given off by the Leblanc process, used to produce soda ash. An Alkali inspector and four sub-inspectors were appointed to curb this pollution. The inspectorate's responsibilities were gradually expanded, culminating in the Alkali Order 1958 which placed all major heavy industries that emitted smoke, grit, dust and fumes under supervision.
In industrial cities, local experts and reformers, especially after 1890, took the lead in identifying environmental degradation and pollution, and initiating grass-roots movements to demand and achieve reforms. Typically the highest priority went to water and air pollution. The Coal Smoke Abatement Society was formed in 1898 making it one of the oldest environmental NGOs. It was founded by artist Sir William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their own smoke. It also provided for sanctions against factories that emitted large amounts of black smoke. This law's provisions were extended in 1926 with the Smoke Abatement Act to include other emissions, such as soot, ash, and gritty particles, and to empower local authorities to impose their own regulations.
It was only under the impetus of the Great Smog of 1952 in London, which almost brought the city to a standstill and may have caused upward of 6,000 deaths, that the Clean Air Act 1956 was passed and airborne pollution in the city was first tackled. Financial incentives were offered to householders to replace open coal fires with alternatives (such as installing gas fires) or those who preferred, to burn coke instead (a byproduct of town gas production) which produces minimal smoke. 'Smoke control areas' were introduced in some towns and cities where only smokeless fuels could be burnt and power stations were relocated away from cities. The act formed an important impetus to modern environmentalism and caused a rethinking of the dangers of environmental degradation to people's quality of life.
The late 19th century also saw the passage of the first wildlife conservation laws.
The zoologist Alfred Newton published a series of investigations into the Desirability of establishing a 'Close-time' for the preservation of indigenous animals between 1872 and 1903. His advocacy for legislation to protect animals from hunting during the mating season led to the formation of the Royal Society for the Protection of Birds and influenced the passage of the Sea Birds Preservation Act in 1869 as the first nature protection law in the world.
During the Spanish Revolution, anarchist-controlled territories undertook several environmental reforms, which were possibly the largest in the world at the time. Daniel Guerin notes that anarchist territories would diversify crops, extend irrigation, initiate reforestation, start tree nurseries and help to establish naturist communities. Once a link was discovered between air pollution and tuberculosis, the CNT shut down several metal factories.
First environmental movements
Early interest in the environment was a feature of the Romantic movement in the early 19th century. One of the earliest modern pronouncements on thinking about human industrial advancement and its influence on the environment was written by Japanese geographer, educator, philosopher and author Tsunesaburo Makiguchi in his 1903 publication Jinsei Chirigaku (A Geography of Human Life). In Britain the poet William Wordsworth travelled extensively in the Lake District and wrote that it is a "sort of national property in which every man has a right and interest who has an eye to perceive and a heart to enjoy".
Systematic efforts on behalf of the environment only began in the late 19th century; it grew out of the amenity movement in Britain in the 1870s, which was a reaction to industrialisation, the growth of cities, and worsening air and water pollution. Starting with the formation of the Commons Preservation Society in 1865, the movement championed rural preservation against the encroachments of industrialisation. Robert Hunter, solicitor for the society, worked with Hardwicke Rawnsley, Octavia Hill, and John Ruskin to lead a successful campaign to prevent the construction of railways to carry slate from the quarries, which would have ruined the unspoiled valleys of Newlands and Ennerdale. This success led to the formation of the Lake District Defence Society (later to become The Friends of the Lake District).
Peter Kropotkin wrote about ecology in economics, agricultural science, conservation, ethology, criminology, urban planning, geography, geology and biology. He observed in Swiss and Siberian glaciers that they had been slowly melting since the dawn of the industrial revolution, possibly making him one of the first to predict climate change. He also observed the damage done from deforestation and hunting. Kropotkin's writings would become influential in the 1970s and became a major inspiration for the intentional community movement, as well as forming the basis for the theory of social ecology.
In 1893 Hill, Hunter and Rawnsley agreed to set up a national body to coordinate environmental conservation efforts across the country; the "National Trust for Places of Historic Interest or Natural Beauty" was formally inaugurated in 1894. The organisation obtained secure footing through the 1907 National Trust Bill, which gave the trust the status of a statutory corporation; the bill was passed in August 1907.
An early "Back-to-Nature" movement, which anticipated the romantic ideal of modern environmentalism, was advocated by intellectuals such as John Ruskin, William Morris, George Bernard Shaw and Edward Carpenter, who were all against consumerism, pollution and other activities that were harmful to the natural world. The movement was a reaction to the urban conditions of the industrial towns, where sanitation was awful, pollution levels intolerable and housing terribly cramped. Idealists championed the rural life as a mythical utopia and advocated a return to it. John Ruskin argued that people should return to a "small piece of English ground, beautiful, peaceful, and fruitful. We will have no steam engines upon it ... we will have plenty of flowers and vegetables ... we will have some music and poetry; the children will learn to dance to it and sing it."
Practical ventures in the establishment of small cooperative farms were even attempted and old rural traditions, without the "taint of manufacture or the canker of artificiality", were enthusiastically revived, including the Morris dance and the maypole.
These ideas also inspired various environmental groups in the UK, such as the Royal Society for the Protection of Birds, established in 1889 by Emily Williamson as a protest group to campaign for greater protection for the indigenous birds of the island. The Society attracted growing support from the suburban middle-classes as well as support from many other influential figures, such as the ornithologist Professor Alfred Newton. By 1900, public support for the organisation had grown, and it had over 25,000 members. The garden city movement incorporated many environmental concerns into its urban planning manifesto; the Socialist League and The Clarion movement also began to advocate measures of nature conservation.
The movement in the United States began in the late 19th century, out of concerns for protecting the natural resources of the West, with individuals such as John Muir and Henry David Thoreau making key philosophical contributions. Thoreau was interested in people's relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the book Walden, which argues that people should become intimately close with nature. Muir came to believe in nature's inherent right, especially after spending time hiking in Yosemite Valley and studying both the ecology and geology. He successfully lobbied Congress to form Yosemite National Park and went on to set up the Sierra Club in 1892. The conservationist principles as well as the belief in an inherent right of nature were to become the bedrock of modern environmentalism.
In the 20th century, environmental ideas continued to grow in popularity and recognition. Efforts were starting to be made to save some wildlife, particularly the American bison. The death of the last passenger pigeon as well as the endangerment of the American bison helped to focus the minds of conservationists and to popularise their concerns. In 1916, the National Park Service was founded by US President Woodrow Wilson.
The Forestry Commission was set up in 1919 in Britain to increase the amount of woodland in Britain by buying land for afforestation and reforestation. The commission was also tasked with promoting forestry and the production of timber for trade. During the 1920s the Commission focused on acquiring land to begin planting out new forests; much of the land was previously used for agricultural purposes. By 1939 the Forestry Commission was the largest landowner in Britain.
During the 1930s the Nazis had elements that were supportive of animal rights, zoos and wildlife, and took several measures to ensure their protection. In 1933 the government created a stringent animal-protection law, and in 1934 the Reich Hunting Law was enacted, which limited hunting. Several Nazis were environmentalists (notably Rudolf Hess), and species protection and animal welfare were significant issues in the regime. In 1935, the regime enacted the "Reich Nature Protection Act". The concept of the Dauerwald (best translated as the "perpetual forest"), which included concepts such as forest management and protection, was promoted, and efforts were also made to curb air pollution.
In 1949, A Sand County Almanac by Aldo Leopold was published. It explained Leopold's belief that humankind should have moral respect for the environment and that it is unethical to harm it. The book is sometimes called the most influential book on conservation.
Throughout the 1950s, 1960s, 1970s and beyond, photography was used to enhance public awareness of the need for protecting land and recruiting members to environmental organisations. David Brower, Ansel Adams and Nancy Newhall created the Sierra Club Exhibit Format Series, which helped raise public environmental awareness and brought a rapidly increasing flood of new members to the Sierra Club and to the environmental movement in general. This Is Dinosaur, edited by Wallace Stegner with photographs by Martin Litton and Philip Hyde, prevented the building of dams within Dinosaur National Monument by becoming part of a new kind of activism called environmentalism that combined the conservationist ideals of Thoreau, Leopold and Muir with hard-hitting advertising, lobbying, book distribution, letter writing campaigns, and more. The powerful use of photography in addition to the written word for conservation dated back to the creation of Yosemite National Park, when photographs persuaded Abraham Lincoln to preserve the beautiful glacier-carved landscape for all time. The Sierra Club Exhibit Format Series galvanised public opposition to building dams in the Grand Canyon and protected many other national treasures. The Sierra Club often led a coalition of many environmental groups including the Wilderness Society and many others.
After a focus on preserving wilderness in the 1950s and 1960s, the Sierra Club and other groups broadened their focus to include such issues as air and water pollution, population concern, and curbing the exploitation of natural resources.
The prevailing belief regarding the origins of early environmentalism suggests that it emerged as a local response to the adverse impacts of industrialization in Western nations and communities. In terms of conservation efforts, there is a widespread view that the conservation movement began as a predominantly elite concern in North America, focusing on the preservation of local natural areas. A less prevailing view, however, attributes the roots of early environmentalism to a growing public concern about the influence of Western economic forces, particularly in connection with colonization, on tropical environments. Richard Grove, in a 1990 report, points out that little attention has been given to the significance of the colonial experience, particularly the European colonial experience, in shaping early European environmentalism.
Grove argues that as European colonization expanded, so did the European interaction with land and indigenous people, providing Europeans with an awareness of the destructive consequences of their economic and colonial activities on the newly "discovered" lands. As global trade expanded through colonization, the European concept of nature underwent a transformation, with the foreign tropical environments of their conquests evolving into romantic symbols of idyllic landscapes that required care and protection by Europeans. Examples of this impact of colonization on the Western mindset can be found in prominent cultural references, such as William Shakespeare's play "The Tempest" and Andrew Marvell's poem "Bermudas".
Although this newfound self-awareness among Europeans about the destructive impacts of colonization on the environment did not halt the expansion of colonization itself, it did pave the way for a different approach to colonization – one focused on the preservation and protection of foreign natural resources. This phenomenon can be linked to the emergence of Edenic thinking, or the quest for Eden on Earth. This quest to locate Eden gained prominence in the 15th century, coinciding with colonization, and fostered the belief that newly "discovered" lands, especially tropical ones, had the potential to be heavenly paradises.
Post-war expansion
In 1962, Silent Spring by American biologist Rachel Carson was published. The book cataloged the environmental impacts of the indiscriminate spraying of DDT in the US and questioned the logic of releasing large amounts of chemicals into the environment without fully understanding their effects on human health and ecology. The book suggested that DDT and other pesticides may cause cancer and that their agricultural use was a threat to wildlife, particularly birds. The resulting public concern led to the creation of the United States Environmental Protection Agency in 1970 which subsequently banned the agricultural use of DDT in the US in 1972. The limited use of DDT in disease vector control continues to this day in certain parts of the world and remains controversial. The book's legacy was to produce a far greater awareness of environmental issues and interest into how people affect the environment. With this new interest in environment came interest in problems such as air pollution and petroleum spills, and environmental interest grew. New pressure groups formed, notably Greenpeace and Friends of the Earth (US), as well as notable local organisations such as the Wyoming Outdoor Council, which was founded in 1967. From 1962 to 1998, the environmental movement founded 772 national organizations in the United States.
In the 1970s, the environmental movement gained rapid speed around the world as a productive outgrowth of the counterculture movement.
The world's first political parties to campaign on a predominantly environmental platform were the United Tasmania Group of Tasmania, Australia, and the Values Party of New Zealand. The first green party in Europe was the Popular Movement for the Environment, founded in 1972 in the Swiss canton of Neuchâtel. The first national green party in Europe was PEOPLE, founded in Britain in February 1973, which eventually turned into the Ecology Party, and then the Green Party.
Protection of the environment also became important in the developing world; the Chipko movement was formed in India under the influence of Mahatma Gandhi, and its members set up peaceful resistance to deforestation by literally hugging trees (leading to the term "tree huggers"). Their peaceful methods of protest and slogan "ecology is permanent economy" were very influential.
Another milestone in the movement was the creation of Earth Day. Earth Day was first observed in San Francisco and other cities on 21 March 1970, the first day of spring. It was created to give awareness to environmental issues. On 21 March 1971, United Nations Secretary-General U Thant spoke of a spaceship Earth on Earth Day, thereby referring to the ecosystem services the earth supplies to us, and hence our obligation to protect it (and with it, ourselves). Earth Day is now coordinated globally by the Earth Day Network, and is celebrated in more than 192 countries every year.
The UN's first major conference on international environmental issues, the United Nations Conference on the Human Environment (also known as the Stockholm Conference), was held on 5–16 June 1972. It marked a turning point in the development of international environmental politics.
By the mid-1970s, many felt that people were on the edge of environmental catastrophe. The back-to-the-land movement started to form and ideas of environmental ethics joined with anti-Vietnam War sentiments and other political issues. These individuals lived outside normal society and started to take on some of the more radical environmental theories such as deep ecology. Around this time more mainstream environmentalism was starting to show force with the signing of the Endangered Species Act in 1973 and the formation of CITES in 1975. Significant amendments were also enacted to the United States Clean Air Act and Clean Water Act.
In 1979, James Lovelock, a British scientist, published Gaia: A new look at life on Earth, which put forth the Gaia hypothesis; it proposes that life on earth can be understood as a single organism. This became an important part of the Deep Green ideology. Throughout the rest of the history of environmentalism there has been debate and argument between more radical followers of this Deep Green ideology and more mainstream environmentalists.
21st century and beyond
Environmentalism continues to evolve to face up to new issues such as global warming, overpopulation, genetic engineering, and plastic pollution.
Research demonstrates a precipitous decline in the US public's interest in 19 different areas of environmental concern. Americans are less likely to be actively participating in an environmental movement or organisation and more likely to identify as "unsympathetic" to an environmental movement than in 2000. This is likely a lingering effect of the Great Recession of 2008. Since 2005, the percentage of Americans agreeing that the environment should be given priority over economic growth has dropped 10 points; in contrast, those feeling that growth should be given priority "even if the environment suffers to some extent" has risen 12 percent. Nevertheless, a recent National Geographic survey indicated strong desire for commitment across a dozen countries, with a majority in favour of more than half of the Earth's land surface being protected.
New forms of ecoactivism
Tree sitting is a form of activism in which the protester sits in a tree in an attempt to stop the removal of a tree or to impede the demolition of an area. The longest and most famous tree sit was by Julia Butterfly Hill, who spent 738 days in a California redwood, saving a three-acre tract of forest. Also notable is the Yellow Finch tree sit, which was a 932-day blockade of the Mountain Valley Pipeline from 2018 to 2021.
Sit-ins can be used to encourage social change, such as the Greensboro sit-ins, a series of protests in 1960 to stop racial segregation, but can also be used in ecoactivism, as in the Dakota Access Pipeline Protest.
Before the Syrian civil war, Rojava had been ecologically damaged by monoculture, oil extraction, damming of rivers, deforestation, drought, topsoil loss and general pollution. The DFNS launched a campaign titled 'Make Rojava Green Again' (a parody of Make America Great Again) which is attempting to provide renewable energy to communities (especially solar energy), reforestation, protecting water sources, planting gardens, promoting urban agriculture, creating wildlife reserves, water recycling, beekeeping, expanding public transportation and promoting environmental awareness within their communities.
The Rebel Zapatista Autonomous Municipalities are firmly environmentalist and have stopped the extraction of oil, uranium, timber and metal from the Lacandon Jungle and stopped the use of pesticides and chemical fertilisers in farming.
The CIPO-RFM has engaged in sabotage and direct action against wind farms, shrimp farms, eucalyptus plantations and the timber industry. They have also set up corn and coffee worker cooperatives and built schools and hospitals to help the local populations. They have also created a network of autonomous community radio stations to educate people about dangers to the environment and inform the surrounding communities about new industrial projects that would destroy more land. In 2001, the CIPO-RFM defeated the construction of a highway that was part of Plan Puebla Panama.
Environmental movement
The environmental movement (a term that sometimes includes the conservation and green movements) is a diverse scientific, social, and political movement. Though the movement is represented by a range of organisations, because of the inclusion of environmentalism in the classroom curriculum, the environmental movement has a younger demographic than is common in other social movements (see green seniors).
Environmentalism as a movement covers broad areas of institutional oppression, including for example: consumption of ecosystems and natural resources into waste, dumping waste into disadvantaged communities, air pollution, water pollution, weak infrastructure, exposure of organic life to toxins, mono-culture, anti-polythene drive (jhola movement) and various other focuses. Because of these divisions, the environmental movement can be categorized into these primary focuses: environmental science, environmental activism, environmental advocacy, and environmental justice.
Free market environmentalism
Free market environmentalism is a theory that argues that the free market, property rights, and tort law provide the best tools to preserve the health and sustainability of the environment. It considers environmental stewardship to be natural, as well as the expulsion of polluters and other aggressors through individual and class action.
Evangelical environmentalism
Evangelical environmentalism is an environmental movement in the United States in which some Evangelicals have emphasized biblical mandates concerning humanity's role as steward and subsequent responsibility for the caretaking of Creation. While the movement has focused on different environmental issues, it is best known for its focus on addressing climate action from a biblically grounded theological perspective. This movement is controversial among some non-Christian environmentalists due to its rooting in a specific religion.
Preservation and conservation
Environmental preservation in the United States and other parts of the world, including Australia, is viewed as the setting aside of natural resources to prevent damage caused by contact with humans or by certain human activities, such as logging, mining, hunting, and fishing, often to replace them with new human activities such as tourism and recreation. Regulations and laws may be enacted for the preservation of natural resources.
Exergy and availability of resources
Thermodynamically derived environmentalism is based on the second law of thermodynamics: the minimization of exergy destruction (or entropy generation) and the concept of availability. It builds on the milestone work of Jan Szargut, who emphasized the relation between exergy and availability, and on Göran Wall's short essay "Exergy, Ecology and Democracy", which demonstrates the close relation between exergy destruction and environmental and social disruption. More recent work has found that governmental emissions and impact balances underestimate the effective greenhouse-gas production of human processes, in part because they often neglect the impacts of emissions related to imports and exports. Researchers in this field have also analyzed the UN Sustainable Development Goals and the methods suggested for verifying countries' progress, and found that objective and coherent parameters are missing; they therefore propose exergy analysis as the most effective method for estimating environmental degradation. On this basis, a novel fiscal model based on the disruption of exergy and availability has been proposed as a way of overcoming the problems induced by globalized markets.
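As a minimal sketch of the quantity this approach seeks to minimize, the Gouy–Stodola theorem of engineering thermodynamics relates the exergy destroyed by a process to the entropy it generates, where \(T_0\) denotes the ambient reference temperature (the notation below is illustrative, not specific to the works cited above):

\[ \dot{X}_{\mathrm{destroyed}} \;=\; T_0\,\dot{S}_{\mathrm{gen}} \;\ge\; 0 \]

The inequality follows from the second law: any real, irreversible process generates entropy and therefore destroys available work, which is the sense in which environmental degradation can be tracked in exergy terms.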
Organisations and conferences
Environmental organisations can be global, regional, national or local; they can be government-run or private (NGO). Environmentalist activity exists in almost every country. Moreover, groups dedicated to community development and social justice also focus on environmental concerns.
Some US environmental organisations, among them the Natural Resources Defense Council and the Environmental Defense Fund, specialise in bringing lawsuits (a tactic seen as particularly useful in that country). Other groups, such as the US-based National Wildlife Federation, Earth Day, National Cleanup Day, the Nature Conservancy, and The Wilderness Society, and global groups like the World Wide Fund for Nature and Friends of the Earth, disseminate information, participate in public hearings, lobby, stage demonstrations, and may purchase land for preservation. Statewide nonprofit organisations such as the Wyoming Outdoor Council often collaborate with these national organisations and employ similar strategies. Smaller groups, including Wildlife Conservation International, conduct research on endangered species and ecosystems. More radical organisations, such as Greenpeace, Earth First!, and the Earth Liberation Front, have more directly opposed actions they regard as environmentally harmful. While Greenpeace is devoted to nonviolent confrontation as a means of bearing witness to environmental wrongs and bringing issues into the public realm for debate, the underground Earth Liberation Front engages in the clandestine destruction of property, the release of caged or penned animals, and other criminal acts. Such tactics are regarded as unusual within the movement, however.
On an international level, concern for the environment was the subject of a United Nations Conference on the Human Environment in Stockholm in 1972, attended by 114 nations. Out of this meeting developed the United Nations Environment Programme (UNEP) and the follow-up United Nations Conference on Environment and Development in 1992. Other international organisations in support of environmental policies development include the Commission for Environmental Cooperation (as part of NAFTA), the European Environment Agency (EEA), and the Intergovernmental Panel on Climate Change (IPCC).
Environmental protests
Notable environmental protests and campaigns include:
2010 Xinfa aluminum plant protest
Anti-WAAhnsinns Festival
Car-Free Days
Camp for Climate Action
Campaign against Climate Change
Climate Rush
Cofán people oil drilling protest (Ecuador)
Earth Day
Earth First!
Earthlife Africa
Extinction Rebellion
Global Climate Strikes
Global Day of Action
Gurindji Strike
Hands off our Forest
Homes before Roads
Water Protectors
Just Stop Oil
Kupa Piti Kungka Tjuta
Love Canal protests
March Against Monsanto
Nevada Desert Experience
Plane Mad
Plane Stupid
Qidong protest
Save Manapouri Campaign
Say Yes demonstrations
Shifang protest
Stop Climate Chaos
Environmentalists
Notable advocates for environmental protection and sustainability include:
Edward Abbey (author)
David Attenborough (broadcaster, naturalist)
John James Audubon (naturalist)
Judi Bari (environmentalist)
Frances Beinecke (environmentalist and former president of the Natural Resources Defense Council)
David Bellamy (botanist)
Wendell Berry (farmer, philosopher)
Murray Bookchin (anarchist, philosopher, social ecologist)
Erin Brockovich (environmental lawyer and activist)
David Brower (writer, activist)
Bob Brown (activist and politician)
Lester Brown (environmental analyst, author)
Carol Browner (lawyer and activist)
Kevin Buzzacott (Aboriginal activist)
Berta Caceres (environmental and indigenous rights activist)
Helen Caldicott (medical doctor)
Rachel Carson (biologist, writer)
Majora Carter (urban revitalization strategist)
Charles III (British Royal Family member)
Barry Commoner (biologist, politician)
Jacques-Yves Cousteau (explorer, ecologist)
Herman Daly (ecological economist and steady-state theorist)
Peter Dauvergne (political scientist)
Laurie David (activist and producer)
Marina DeBris (environmental artist)
Leonardo DiCaprio (actor and environmentalist)
Sylvia Earle (marine biologist)
Paul R. Ehrlich (population biologist)
Hans-Josef Fell (German Green Party member)
Jane Fonda (actor)
Josh Fox (filmmaker, environmental activist)
Mizuho Fukushima (politician, activist)
Peter Garrett (musician, politician)
Jane Goodall (primatologist, anthropologist, and UN Messenger of Peace)
Lois Gibbs (Founder of the Center for Health, Environment and Justice)
Al Gore (former Vice President of the United States)
Daryl Hannah (activist)
James Hansen (scientist)
Garrett Hardin (ecologist, ecophilosopher)
Denis Hayes (environmentalist and solar power advocate)
Julia Butterfly Hill (activist)
Robert Hunter (journalist, co-founder and first president of Greenpeace)
Tetsunari Iida (sustainable energy advocate)
Lisa P. Jackson (chemical engineer and former administrator of the United States Environmental Protection Agency)
Naomi Klein (writer, activist)
Winona LaDuke (environmentalist)
Aldo Leopold (ecologist)
A. Carl Leopold (plant physiologist)
James Lovelock (scientist)
Amory Lovins (energy policy analyst)
Hunter Lovins (environmentalist)
Caroline Lucas (politician)
Wangari Maathai (activist and Nobel laureate)
Jarid Manos (CEO of the Great Plains Restoration Council)
Xiuhtezcatl Martinez (environmental activist, hip-hop artist)
Bill McKibben (writer, activist)
David McTaggart (activist)
Chico Mendes (activist)
Joni Mitchell (musician, environmental activist)
George Monbiot (journalist)
John Muir (naturalist, activist)
Ralph Nader (activist)
Gaylord Nelson (politician)
Alan Pears (environmental consultant and energy efficiency pioneer)
Gifford Pinchot (first chief of the USFS)
Jonathon Porritt (politician)
John Wesley Powell (second director of the USGS)
Barbara Pyle (documentarian and executive producer of Captain Planet and the Planeteers)
Phil Radford (environmental, clean energy and democracy advocate, Greenpeace Executive Director)
Bonnie Raitt (musician)
Theodore Roosevelt (former President of the United States)
Habiba Sarobi (politician and activist)
E. F. Schumacher (author of Small Is Beautiful)
Vandana Shiva (ecofeminist and activist)
Marina Silva (politician and activist)
Alicia Silverstone (activist and author of The Kind Diet)
Lauren Singer (activist and entrepreneur)
Swami Sundaranand (Yogi, photographer, and mountaineer)
Cass Sunstein (environmental lawyer)
David Suzuki (scientist, broadcaster)
Henry David Thoreau (writer, philosopher)
Greta Thunberg (environmentalist)
Stewart Udall (former United States Secretary of the Interior)
Jo Valentine (politician and activist)
Dominique Voynet (politician and environmentalist)
Christopher O. Ward (water infrastructure expert)
Alice Waters (activist and restaurateur)
Gabriel Willow (environmental educator, naturalist)
Howard Zahniser (author of the 1964 Wilderness Act)
Assassinations
Every year, more than 100 environmental activists are murdered throughout the world. Most of the recent deaths have occurred in Brazil, where activists combat logging in the Amazon rainforest.
116 environmental activists were assassinated in 2014, and 185 in 2015. This represents more than two environmentalists assassinated every week in 2014 and three every week in 2015. More than 200 environmental activists were assassinated worldwide between 2016 and early 2018. In a 2020 incident, several rangers were murdered in the Congo Rainforest by poaching squads. Such occurrences are relatively common and account for a large number of deaths.
In popular culture
The U.S. Forest Service created Smokey Bear in 1944; he appeared in countless posters, radio and television programs, movies, press releases, and other guises to warn about forest fires.
The comic strip Mark Trail, by environmentalist Ed Dodd, began in 1946; it still appears weekly in 175 newspapers.
The children's animated show Captain Planet and the Planeteers was created by Ted Turner and Barbara Pyle in 1989 to inform children about environmental issues. The show aired for six seasons and 113 episodes, in 100 countries worldwide, from 1990 to 1996.
In 1974, Spokane, Washington, became one of the smallest cities ever to host a World's Fair. From Saturday, 4 May, to Sunday, 3 November 1974, Spokane hosted Expo '74, the first world's fair to focus on the environment. The theme of Expo '74 was "Celebrating Tomorrow's Fresh New Environment." (In 1982, Knoxville, Tennessee, was another small city to host a world's fair: Expo '82, with the theme, "Energy Turns the World.")
FernGully: The Last Rainforest is an animated motion picture released in 1992, which focuses exclusively on the environment. The movie is based on a book of the same title by Diana Young. In 1998, a sequel, FernGully 2: The Magical Rescue, was released.
Miss Earth is one of the Big Four international beauty pageants. (The other three are Miss Universe, Miss International, and Miss World.) Of these four pageants, Miss Earth is the only one that promotes "environmental awareness." The reigning titleholders dedicate their year to promoting specific projects and often address issues concerning the environment and other global causes through school tours, tree-planting activities, street campaigns, coastal clean-ups, speaking engagements, shopping mall tours, media appearances, environmental fairs, storytelling programs, eco-fashion shows, and other environmental activities. The Miss Earth winner is the spokesperson for the Miss Earth Foundation, the United Nations Environment Programme (UNEP) and other environmental organizations. The Miss Earth Foundation also works with the environmental departments and ministries of participating countries, various private sectors and corporations, as well as Greenpeace and the World Wildlife Fund (WWF).
Another area of environmentalism is the use of art to raise awareness about misuse of the environment. One example is trashion, using trash to create clothes, jewelry, and other objects for the home. Marina DeBris is one trashion artist who uses ocean and beach trash to design clothing, both for fundraising and for education.
Criticism and alternative views
When environmentalism first became popular during the early 20th century, the focus was wilderness protection and wildlife preservation. These goals reflected the interests of the movement's initial, primarily white middle- and upper-class supporters, who viewed preservation and protection through a lens that failed to appreciate the centuries-long work of indigenous communities who had lived without ushering in the kinds of environmental devastation these settler-colonial "environmentalists" now sought to mitigate. The actions of many mainstream environmental organizations still reflect these early principles. Numerous low-income minorities felt isolated or negatively impacted by the movement, exemplified by the Southwest Organizing Project's (SWOP) Letter to the Group of 10, a letter sent to major environmental organizations by several local environmental justice activists. The letter argued that the environmental movement was so concerned about cleaning up and preserving nature that it ignored the negative side-effects that doing so caused communities nearby, namely reduced job growth. In addition, the NIMBY movement has transferred locally unwanted land uses (LULUs) from middle-class neighborhoods to poor communities with large minority populations. Vulnerable communities with fewer political opportunities are therefore more often exposed to hazardous waste and toxins. This has resulted in the PIBBY principle, or at least the PIMBY (place-in-minorities'-backyard), as supported by the United Church of Christ's study in 1987.
As a result, some minorities have viewed the environmental movement as elitist. Environmental elitism manifested itself in three different forms:
Compositional – Environmentalists are from the middle and upper class.
Ideological – The reforms benefit the movement's supporters but impose costs on nonparticipants.
Impact – The reforms have "regressive social impacts". They disproportionately benefit environmentalists and harm underrepresented populations.
Many environmentalists believe that human interference with 'nature' should be restricted or minimised as a matter of urgency (for the sake of life, or the planet, or just for the benefit of the human species), whereas environmental skeptics and anti-environmentalists do not believe that there is such a need. One can also regard oneself as an environmentalist and believe that human 'interference' with 'nature' should be increased. Nevertheless, there is a risk that the shift from emotional environmentalism to the technical management of natural resources and hazards could reduce people's contact with nature, leading to less concern for environmental preservation. Increasingly, typical conservation rhetoric is being replaced with restoration approaches and larger landscape initiatives that seek to create more holistic impacts.
In the 2000s, the American author, film director, and medical graduate Michael Crichton criticized environmentalism as being religiously motivated rather than grounded in empirical evidence, arguing that climate change was a natural part of Earth's history and had been occurring long before humans dominated the planet. Drawing also on his education in anthropology, he stated that religion is part of the human social make-up and that, if suppressed, it would simply re-emerge in another form. In his view, with the decline of Christianity and church attendance in the Western world, environmentalism has grown more popular in its place, and he termed it "the religion of urban atheists".
Others seek a balance between caring deeply for the environment and letting science guide human actions affecting it. Such an approach would avoid the emotionalism that, for example, anti-GMO activism has been criticized for, and protect the integrity of science. Planting trees, for another example, can be emotionally satisfying but should also involve attention to ecological concerns such as the effect on water cycles and the use of nonnative, potentially invasive species.
See also
Anti-environmentalism
Bright green environmentalism
Climate movement
Conservation movement
Ecomodernism
Ecosia
Ecotage
Ecotechnology
Environmental history of the United States
Environmental planning
Environmental, social, and governance
Environmental studies
Environmental technology
Greening
Green building
Human ecology
Human impact on the environment
List of climate scientists
List of women climate scientists and activists
Nature conservation
Outline of environmentalism
Radical environmentalism
Religion and environmentalism
Sustainability
Tree planting
References
Further reading
Borowy, Iris. "Before UNEP: who was in charge of the global environment? The struggle for institutional responsibility 1968–72." Journal of Global History 14.1 (2019): 87–106.
Daynes, Byron W., and Glen Sussman, eds. White House Politics and the Environment: Franklin D. Roosevelt to George W. Bush (Texas A&M University Press; 2010) 300 pages; evaluates how 12 presidents helped or hindered the cause of environmental protection.
Johnson, Erik W., and Scott Frickel, (2011). "Ecological Threat and the Founding of U.S. National Environmental Movement Organizations, 1962–1998," Social Problems 58 (Aug. 2011), 305–29.
McCormick, John. 1995. The Global Environmental Movement. John Wiley. London. 312 pp.
Palmer, Joy. Fifty Key Thinkers on the Environment (Routledge, 2001)
de Steiguer, J. Edward. 2006. The Origins of Modern Environmental Thought. University of Arizona Press. Tucson. 246 pp.
Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the environmental problem remains fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly.... Finally, there is the threat du jour: corporations and the technologies they promote." (pp. 56–57.)
Verweij, Marco; Thompson, Michael (eds), 2006, Clumsy Solutions for a Complex World: Governance, Politics and Plural Perceptions, Basingstoke: Palgrave Macmillan,
Vogel, David. California Greenin': How the Golden State Became an Environmental Leader (2018) 280 pp online review
Woodhouse, Keith M. "The Politics of Ecology: Environmentalism and Liberalism in the 1960s," Journal for the Study of Radicalism, Volume 2, Number 2, 2009, pp. 53–84
World Bank, 2003, "Sustainable Development in a Dynamic World: Transforming Institutions, Growth, and Quality of Life" , World Development Report 2003, International Bank for Reconstruction and Development and Oxford University Press.
External links
Westland – A Canadian television series (1984–2007) on a broad range of environmental issues, from the UBC Library Digital Collections
The Directory of Environmental Websites
Green politics
Habitat
Environmental social science concepts
1920s neologisms | 0.777136 | 0.996952 | 0.774768 |
Praxis (process) | Praxis is the process by which a theory, lesson, or skill is enacted, embodied, realized, applied, or put into practice. "Praxis" may also refer to the act of engaging, applying, exercising, realizing, or practising ideas. This has been a recurrent topic in the field of philosophy, discussed in the writings of Plato, Aristotle, St. Augustine, Francis Bacon, Immanuel Kant, Søren Kierkegaard, Ludwig von Mises, Karl Marx, Antonio Gramsci, Martin Heidegger, Hannah Arendt, Jean-Paul Sartre, Paulo Freire, Murray Rothbard, and many others. It has meaning in the political, educational, spiritual and medical realms.
Origins
The word praxis comes from Ancient Greek: praxis (πρᾶξις) referred to activity engaged in by free people. The philosopher Aristotle held that there were three basic activities of humans: theoria (thinking), poiesis (making), and praxis (doing). Corresponding to these activities were three types of knowledge: theoretical, the end goal being truth; poietical, the end goal being production; and practical, the end goal being action. Aristotle further divided the knowledge derived from praxis into ethics, economics, and politics. He also distinguished between eupraxia (εὐπραξία, "good praxis") and dyspraxia (δυσπραξία, "bad praxis, misfortune").
Marxism
Young Hegelian August Cieszkowski was one of the earliest philosophers to use the term praxis to mean "action oriented towards changing society" in his 1838 work Prolegomena zur Historiosophie (Prolegomena to a Historiosophy). Cieszkowski argued that while absolute truth had been achieved in the speculative philosophy of Hegel, the deep divisions and contradictions in man's consciousness could only be resolved through concrete practical activity that directly influences social life. Although there is no evidence that Karl Marx himself read this book, it may have had an indirect influence on his thought through the writings of his friend Moses Hess.
Marx uses the term "praxis" to refer to the free, universal, creative and self-creative activity through which man creates and changes his historical world and himself. Praxis is an activity unique to man, which distinguishes him from all other beings. The concept appears in two of Marx's early works: the Economic and Philosophical Manuscripts of 1844 and the Theses on Feuerbach (1845). In the former work, Marx contrasts the free, conscious productive activity of human beings with the unconscious, compulsive production of animals. He also affirms the primacy of praxis over theory, claiming that theoretical contradictions can only be resolved through practical activity. In the latter work, revolutionary practice is a central theme.
Marx here criticizes the materialist philosophy of Ludwig Feuerbach for envisaging objects in a contemplative way. Marx argues that perception is itself a component of man's practical relationship to the world. To understand the world does not mean considering it from the outside, judging it morally or explaining it scientifically. Society cannot be changed by reformers who understand its needs, only by the revolutionary praxis of the mass whose interest coincides with that of society as a whole—the proletariat. This will be an act of society understanding itself, in which the subject changes the object by the very fact of understanding it.
Seemingly inspired by the Theses, the nineteenth-century socialist Antonio Labriola called Marxism the "philosophy of praxis". This description of Marxism would appear again in Antonio Gramsci's Prison Notebooks and the writings of the members of the Frankfurt School. Praxis is also an important theme for Marxist thinkers such as Georg Lukacs, Karl Korsch, Karel Kosik and Henri Lefebvre, and was seen as the central concept of Marx's thought by Yugoslavia's Praxis School, which established a journal of that name in 1964.
Jean-Paul Sartre
In the Critique of Dialectical Reason, Jean-Paul Sartre posits a view of individual praxis as the basis of human history. In his view, praxis is an attempt to negate human need. In a revision of Marxism and his earlier existentialism, Sartre argues that the fundamental relation of human history is scarcity. Conditions of scarcity generate competition for resources, exploitation of one over another and division of labor, which in its turn creates struggle between classes. Each individual experiences the other as a threat to his or her own survival and praxis; it is always a possibility that one's individual freedom limits another's. Sartre recognizes both natural and man-made constraints on freedom: he calls the non-unified practical activity of humans the "practico-inert". Sartre opposes to individual praxis a "group praxis" that fuses each individual to be accountable to each other in a common purpose. Sartre sees a mass movement in a successful revolution as the best exemplar of such a fused group.
Hannah Arendt
In The Human Condition, Hannah Arendt argues that Western philosophy too often has focused on the contemplative life (vita contemplativa) and has neglected the active life (vita activa). This has led humanity to frequently miss much of the everyday relevance of philosophical ideas to real life. For Arendt, praxis is the highest and most important level of the active life. Thus, she argues that more philosophers need to engage in everyday political action or praxis, which she sees as the true realization of human freedom. According to Arendt, our capacity to analyze ideas, wrestle with them, and engage in active praxis is what makes us uniquely human.
In Maurizio Passerin d'Entrèves's estimation, "Arendt's theory of action and her revival of the ancient notion of praxis represent one of the most original contributions to twentieth century political thought. ... Moreover, by viewing action as a mode of human togetherness, Arendt is able to develop a conception of participatory democracy which stands in direct contrast to the bureaucratized and elitist forms of politics so characteristic of the modern epoch."
Education
Praxis is used by educators to describe a recurring passage through a cyclical process of experiential learning, such as the cycle described and popularised by David A. Kolb.
Paulo Freire defines praxis in Pedagogy of the Oppressed as "reflection and action directed at the structures to be transformed." Through praxis, oppressed people can acquire a critical awareness of their own condition, and, with teacher-students and students-teachers, struggle for liberation.
In the British Channel 4 television documentary New Order: Play at Home, Factory Records owner Tony Wilson describes praxis as "doing something, and then only afterwards, finding out why you did it".
Praxis may be described as a form of critical thinking and comprises the combination of reflection and action. Praxis can be viewed as a progression of cognitive and physical actions:
Taking the action
Considering the impacts of the action
Analysing the results of the action by reflecting upon it
Altering and revising conceptions and planning following reflection
Implementing these plans in further actions
This creates a cycle which can be viewed in terms of educational settings, learners and educational facilitators.
Scott and Marshall (2009) refer to praxis as "a philosophical term referring to human action on the natural and social world". Furthermore, Gramsci (1999) emphasises the power of praxis in Selections from the Prison Notebooks by stating that "The philosophy of praxis does not tend to leave the simple in their primitive philosophy of common sense but rather to lead them to a higher conception of life".
To reveal the inadequacies of religion, folklore, intellectualism and other such 'one-sided' forms of reasoning, Gramsci appeals directly in his later work to Marx's 'philosophy of praxis', describing it as a 'concrete' mode of reasoning. This principally involves the juxtaposition of a dialectical and scientific audit of reality against all existing normative, ideological, and therefore counterfeit accounts. Essentially a 'philosophy' based on 'a practice', Marx's philosophy is correspondingly described as the only 'philosophy' that is at the same time a 'history in action', or a 'life' itself (Gramsci, Hoare and Nowell-Smith, 1972, p. 332).
Spirituality
Praxis is also key in meditation and spirituality, where emphasis is placed on gaining first-hand experience of concepts and certain areas, such as union with the Divine, which can only be explored through praxis due to the inability of the finite mind (and its tool, language) to comprehend or express the infinite. Matthew Fox explained it in this way in an interview for YES! Magazine.
According to Strong's Concordance, the Hebrew word ta‛am is, properly, a taste. This is, figuratively, perception and, by implication, intelligence; transitively, a mandate: advice, behaviour, decree, discretion, judgment, reason, taste, understanding.
Medicine
Praxis is the ability to perform voluntary skilled movements. The partial or complete inability to do so in the absence of primary sensory or motor impairments is known as apraxia.
See also
Apraxia
Christian theological praxis
Hexis
Lex artis
Orthopraxy
Praxeology
Praxis Discussion Series
Praxis (disambiguation)
Praxis intervention
Praxis school
Practice (social theory)
Theses on Feuerbach
References
Further reading
Paulo Freire (1970), Pedagogy of the Oppressed, Continuum International Publishing Group.
External links
Entry for "praxis" at the Encyclopaedia of Informal Education
Der Begriff Praxis
Concepts in the philosophy of mind
Marxism | 0.776487 | 0.997456 | 0.774511 |
Haecceity | Haecceity (; from the Latin haecceitas, which translates as "thisness") is a term from medieval scholastic philosophy, first coined by followers of Duns Scotus to denote a concept that he seems to have originated: the irreducible determination of a thing that makes it this particular thing. Haecceity is a person's or object's thisness, the individualising difference between the concept "a man" and the concept "Socrates" (i.e., a specific person). In modern philosophy of physics, it is sometimes referred to as primitive thisness.
Etymology
Haecceity is a Latin neologism formed as an abstract noun derived from the demonstrative pronoun "haec(ce)", meaning "this (very)" (feminine singular) or "these (very)" (feminine or neuter plural). It is apparently formed on the model of another (much older) neologism, viz. "qui(d)ditas" ("whatness"), which is a calque of Aristotle's Greek to ti esti (τὸ τί ἐστι) or "the what (it) is."
Haecceity vs. quiddity
Haecceity may be defined in some dictionaries as simply the "essence" of a thing, or as a simple synonym for quiddity or hypokeimenon. However, in proper philosophical usage these terms have not only distinct but opposite meanings. Whereas haecceity refers to aspects of a thing that make it a particular thing, quiddity refers to the universal qualities of a thing, its "whatness", or the aspects of a thing it may share with other things and by which it may form part of a genus of things.
Haecceity in scholasticism
Duns Scotus draws a distinction between a thing's quiddity, its specific nature, and its haecceity, the individuating difference that makes it this particular thing.
In Scotism and the scholastic usage in general, therefore, "haecceity" properly means the irreducible individuating differentia which together with the specific essence (i.e. quiddity) constitutes the individual (or the individual essence), in analogy to the way specific differentia combined with the genus (or generic essence) constitutes the species (or specific essence). Haecceity differs, however, from the specific differentia, by not having any conceptually specifiable content: it does not add any further specification to the whatness of a thing but merely determines it to be a particular unrepeatable instance of the kind specified by the quiddity. This is connected with Aristotle's notion that an individual cannot be defined.
Individuals are more perfect than the specific essence and thus not only have a higher degree of unity, but also a greater degree of truth and goodness. God multiplied individuals to communicate to them His goodness and beatitude.
Haecceity in anglophone philosophy
In analytical philosophy, the meaning of "haecceity" shifted somewhat. Charles Sanders Peirce used the term as a non-descriptive reference to an individual. Alvin Plantinga and other analytical philosophers used "haecceity" in the sense of "individual essence". The "haecceity" of analytical philosophers thus comprises not only the individuating differentia (the scholastic haecceity) but the entire essential determination of an individual (i.e., including that which the scholastics would call its quiddity).
Haecceity in sociology and continental philosophy
Harold Garfinkel, the founder of ethnomethodology, used the term "haecceity" to emphasize the unavoidable and irremediable indexical character of any expression, behavior or situation. For Garfinkel, indexicality was not a problem. He treated the haecceities and contingencies of social practices as a resource for making sense together. In contrast to theoretical generalizations, Garfinkel introduced "haecceities" in "Parson's Plenum" (1988) to indicate the importance of the infinite contingencies in both situations and practices for the local accomplishment of social order. According to Garfinkel, members display and produce the social order they refer to within the setting that they contribute to. The study of practical action and situations in their "haecceities" — aimed at disclosing the ordinary, ongoing social order that is constructed by the members' practices — is the work of ethnomethodology. Garfinkel described ethnomethodological studies as investigations of such "haecceities".
Gilles Deleuze uses the term in a different way to denote entities that exist on the plane of immanence. The usage was likely chosen in line with his esoteric concept of difference and individuation, and critique of object-centered metaphysics.
Michael Lynch (1991) described the ontological production of objects in the natural sciences as "assemblages of haecceities", thereby offering an alternate reading of Deleuze and Guattari's (1980) discussion of "memories of haecceity" in the light of Garfinkel's treatment of "haecceity".
Other uses
Gerard Manley Hopkins drew on Scotus — whom he described as “of reality the rarest-veined unraveller” — to construct his poetic theory of inscape.
James Joyce made similar use of the concept of haecceitas to develop his idea of the secular epiphany.
James Wood refers extensively to haecceitas (as "thisness") in developing an argument about conspicuous detail in aesthetic literary criticism.
See also
Entitativity
Formal distinction
Haecceitism
Hypostasis
Identity of indiscernibles
Irreducibility
Objective precision
Tathātā
Open individualism
Ostensive definition
Personal identity
Principle of individuation
Quiddity
Rigid designation
Scotism
Scotistic realism
Ship of Theseus
Sine qua non
Cf. Sanskrit tathata, "thus-ness"
Type-token distinction
Vertiginous question
References
Further reading
E. Gilson, The Philosophy of the Middle Ages (1955)
A. Heuser, The Shaping Vision of Gerard Manley Hopkins (OUP 1955)
E. Longpre, La Philosophie du B. Duns Scotus (Paris 1924)
Gilles Deleuze and Félix Guattari. 1980. A Thousand Plateaus. Trans. Brian Massumi. London and New York: Continuum, 2004. Vol. 2 of Capitalism and Schizophrenia. 2 vols. 1972–1980. Trans. of Mille Plateaux. Paris: Les Editions de Minuit.
Gilles Deleuze and Félix Guattari. 1991/1994. "What is Philosophy?". Trans. Hugh Tomlinson and Gregory Burchell. New York: Columbia University Press, 1994.
Harold Garfinkel, 'Evidence for Locally Produced, Naturally Accountable Phenomena of Order, Logic, Meaning, Method, etc., in and as of the Essentially Unavoidable and Irremediable Haecceity of Immortal Ordinary Society', Sociological Theory Spring 1988, (6)1:103-109
External links
Singularity
Stanford Encyclopedia of Philosophy article — "Medieval Theories of Haecceity"
Essentialism
Ontology
Scotism
Substance theory | 0.782759 | 0.989428 | 0.774484 |
Normative ethics | Normative ethics is the study of ethical behaviour and is the branch of philosophical ethics that investigates questions regarding how one ought to act, in a moral sense.
Normative ethics is distinct from meta-ethics in that the former examines standards for the rightness and wrongness of actions, whereas the latter studies the meaning of moral language and the metaphysics of moral facts. Likewise, normative ethics is distinct from applied ethics in that the former is concerned with the general question of how one ought to be and act, rather than with the ethics of a specific issue (e.g. if, or when, abortion is acceptable). Normative ethics is also distinct from descriptive ethics, as the latter is an empirical investigation of people's moral beliefs. In this context normative ethics is sometimes called prescriptive, as opposed to descriptive, ethics. However, on certain versions of the meta-ethical view of moral realism, moral facts are both descriptive and prescriptive at the same time.
Most traditional moral theories rest on principles that determine whether an action is right or wrong. Classical theories in this vein include utilitarianism, Kantianism, and some forms of contractarianism. These theories mainly offered the use of overarching moral principles to resolve difficult moral decisions.
Normative ethical theories
There are disagreements about what precisely gives an action, rule, or disposition its ethical force. There are three competing views on how moral questions should be answered, along with hybrid positions that combine some elements of each: virtue ethics, deontological ethics, and consequentialism. The first focuses on the character of those who are acting. In contrast, both deontological ethics and consequentialism focus on the status of the action, rule, or disposition itself, and they come in various forms.
Virtue ethics
Virtue ethics, advocated by Aristotle with some aspects being supported by Saint Thomas Aquinas, focuses on the inherent character of a person rather than on specific actions. There has been a significant revival of virtue ethics since the 1950s, through the work of such philosophers as G. E. M. Anscombe, Philippa Foot, Alasdair MacIntyre, and Rosalind Hursthouse.
Deontological ethics
Deontology argues that decisions should be made considering the factors of one's duties and one's rights. Some deontological theories include:
Immanuel Kant's categorical imperative, which roots morality in humanity's rational capacity and asserts certain inviolable moral laws.
The contractualism of John Rawls, which holds that the moral acts are those that we would all agree to if we were unbiased, behind a "veil of ignorance."
Natural rights theories, such that of John Locke or Robert Nozick, which hold that human beings have absolute, natural rights.
Consequentialism
Consequentialism argues that the morality of an action is contingent on the action's outcome or result. Consequentialist theories, varying in what they consider to be valuable (i.e., axiology), include:
Utilitarianism holds that an action is right if it leads to the most happiness for the greatest number of people. Prior to the coining of the term "consequentialism" by G. E. M. Anscombe in 1958 and the adoption of that term in the literature that followed, utilitarianism was the generic term for consequentialism, referring to all theories that promoted maximizing any form of utility, not just those that promoted maximizing happiness.
State consequentialism, or Mohist consequentialism, holds that an action is right if it leads to state welfare, through order, material wealth, and population growth.
Situational ethics emphasizes the particular context of an act when evaluating it ethically. Specifically, Christian forms of situational ethics hold that the correct action is the one that creates the most loving result, and that love should always be people's goal.
Intellectualism dictates that the best action is the one that best fosters and promotes knowledge.
Welfarism, which argues that the best action is the one that most increases economic well-being or welfare.
Preference utilitarianism, which holds that the best action is the one that leads to the most overall preference satisfaction.
Other theories
Ethics of care, or relational ethics, founded by feminist theorists, notably Carol Gilligan, argues that morality arises out of the experiences of empathy and compassion. It emphasizes the importance of interdependence and relationships in achieving ethical goals.
Pragmatic ethics is difficult to classify fully within any of the four preceding conceptions. This view argues that moral correctness evolves similarly to other kinds of knowledge—socially over the course of many lifetimes—and that norms, principles, and moral criteria are likely to be improved as a result of inquiry. Charles Sanders Peirce, William James, and John Dewey are known as the founders of pragmatism; a more recent proponent of pragmatic ethics was James D. Wallace.
Role ethics is based on the concept of family roles.
Morality as a binding force
It can be unclear what it means to say that a person "ought to do X because it is moral, whether they like it or not." Morality is sometimes presumed to have some kind of special binding force on behaviour, though some philosophers believe that, used this way, the word "ought" seems to wrongly attribute magic powers to morality. For instance, G. E. M. Anscombe worries that "ought" has become "a word of mere mesmeric force."
The British ethicist Philippa Foot elaborates that morality does not seem to have any special binding force, and she clarifies that people only behave morally when motivated by other factors. Foot says "People talk, for instance, about the 'binding force' of morality, but it is not clear what this means if not that we feel ourselves unable to escape." The idea is that, faced with an opportunity to steal a book because we can get away with it, moral obligation itself has no power to stop us unless we feel an obligation. Morality may therefore have no binding force beyond regular human motivations, and people must be motivated to behave morally. The question then arises: what role does reason play in motivating moral behaviour?
Motivating morality
The categorical imperative perspective suggests that proper reason always leads to particular moral behaviour. As mentioned above, Foot instead believes that humans are actually motivated by desires. Proper reason, on this view, allows humans to discover actions that get them what they want (i.e., hypothetical imperatives)—not necessarily actions that are moral.
Social structure and motivation can make morality binding in a sense, but only because it makes moral norms feel inescapable, according to Foot.
John Stuart Mill adds that external pressures, for instance the desire to please others, also influence this felt binding force, which he calls human "conscience". Mill says that humans must first reason about what is moral, then try to bring the feelings of conscience in line with that reasoning. At the same time, Mill says that a good moral system (in his case, utilitarianism) ultimately appeals to aspects of human nature which must themselves be nurtured during upbringing. Mill explains:
This firm foundation is that of the social feelings of mankind; the desire to be in unity with our fellow creatures, which is already a powerful principle in human nature, and happily one of those which tend to become stronger, even without express inculcation, from the influences of advancing civilisation.
Mill thus believes that it is important to appreciate that it is feelings that drive moral behavior, but also that they may not be present in some people (e.g. psychopaths). Mill goes on to describe factors that help ensure people develop a conscience and behave morally.
Popular texts such as Joseph Daleiden's The Science of Morality: The Individual, Community, and Future Generations (1998) describe how societies can use science to figure out how to make people more likely to be good.
See also
Axiological ethics
Free will
Norm (philosophy)
Normative
Secular ethics
References
External links
Consequentialism and utilitarianism:
Introduction to Utilitarianism, an introductory online textbook on utilitarianism coauthored by William MacAskill.
Ethical theories
Ethics
Philosophy of life | 0.776661 | 0.997141 | 0.774441 |
Learning theory (education) | Learning theory describes how students receive, process, and retain knowledge during learning. Cognitive, emotional, and environmental influences, as well as prior experience, all play a part in how understanding, or a worldview, is acquired or changed and knowledge and skills retained.
Behaviorists look at learning as an aspect of conditioning and advocate a system of rewards and targets in education. Educators who embrace cognitive theory believe that the definition of learning as a change in behaviour is too narrow, and study the learner rather than their environment—and in particular the complexities of human memory. Those who advocate constructivism believe that a learner's ability to learn relies largely on what they already know and understand, and the acquisition of knowledge should be an individually tailored process of construction. Transformative learning theory focuses on the often-necessary change required in a learner's preconceptions and worldview. Geographical learning theory focuses on the ways that contexts and environments shape the learning process.
Outside the realm of educational psychology, techniques to directly observe the functioning of the brain during the learning process, such as event-related potential and functional magnetic resonance imaging, are used in educational neuroscience. The theory of multiple intelligences, where learning is seen as the interaction between dozens of different functional areas in the brain each with their own individual strengths and weaknesses in any particular human learner, has also been proposed, but empirical research has found the theory to be unsupported by evidence.
Educational philosophy
Classical theorists
Plato
Plato (428 BC–347 BC) proposed the question: "How does an individual learn something new when the topic is brand new to that person?" This question may seem trivial; however, think of a human like a computer. The question would then become: How does a computer take in any factual information without previous programming? Plato answered his own question by stating that knowledge is present at birth and all information learned by a person is merely a recollection of something the soul has already learned previously, which is called the Theory of Recollection or Platonic epistemology. This answer could be further justified by a paradox: If a person knows something, they don't need to question it, and if a person does not know something, they don't know to question it. Plato says that if one did not previously know something, then they cannot learn it. He describes learning as a passive process, where information and knowledge are ironed into the soul over time. However, Plato's theory elicits even more questions about knowledge: If we can only learn something when we already had the knowledge impressed onto our souls, then how did our souls gain that knowledge in the first place? Plato's theory can seem convoluted; however, it can still help us understand knowledge today.
Locke
John Locke (1632–1704) offered an answer to Plato's question as well. Locke offered the "blank slate" theory where humans are born into the world with no innate knowledge and are ready to be written on and influenced by the environment. The thinker maintained that knowledge and ideas originate from two sources, which are sensation and reflection. The former provides insights regarding external objects (including their properties) while the latter provides the ideas about one's mental faculties (volition and understanding). In the theory of empiricism, these sources are direct experience and observation. Locke, like David Hume, is considered an empiricist because he locates the source of human knowledge in the empirical world.
Locke recognized that something had to be present, however. This something, to Locke, seemed to be "mental powers". Locke viewed these powers as a biological ability the baby is born with, similar to how a baby knows how to biologically function when born. So as soon as the baby enters the world, it immediately has experiences with its surroundings and all of those experiences are being transcribed to the baby's "slate". All of the experiences then eventually culminate into complex and abstract ideas. This theory can still help teachers understand their students' learning today.
Educational psychology
Behavior analysis
The term "behaviorism" was coined by American psychologist John Watson (1878–1959). Watson believed the behaviorist view is a purely objective experimental branch of natural science with a goal to predict and control behavior. In an article in the Psychological Review, he stated that, "Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation in terms of consciousness."
Methodological behaviorism is based on the theory of only explaining public events, or observable behavior. B.F. Skinner introduced another type of behaviorism called radical behaviorism, or the conceptual analysis of behavior, which is based on the theory of also explaining private events; particularly, thinking and feelings. Radical behaviorism forms the conceptual piece of behavior analysis.
In behavior analysis, learning is the acquisition of a new behavior through conditioning and social learning.
Learning and conditioning
The three main types of conditioning and learning:
Classical conditioning, where the behavior becomes a reflex response to an antecedent stimulus.
Operant conditioning, where the behavior is strengthened or weakened by the consequences that follow it, through a reward (reinforcement) or a punishment.
Social learning theory, where an observation of behavior is followed by modeling.
Ivan Pavlov discovered classical conditioning. He observed that if dogs come to associate the delivery of food with a white lab coat or the ringing of a bell, they produce saliva, even when there is no sight or smell of food. Classical conditioning considers this form of learning the same, whether in dogs or in humans. Operant conditioning reinforces this behavior with a reward or a punishment. A reward increases the likelihood of the behavior recurring, a punishment decreases its likelihood. In social learning theory, a behavior is observed in others and then modeled.
These three learning theories form the basis of applied behavior analysis, the application of behavior analysis, which uses analyzed antecedents, functional analysis, replacement behavior strategies, and often data collection and reinforcement to change behavior. The older practice was called behavior modification, which used only assumed antecedents and consequences to change behavior without acknowledging the conceptual analysis; analyzing the function of behavior and teaching new behaviors that would serve the same function were never relevant in behavior modification.
Behaviorists view the learning process as a change in behavior, and arrange the environment to elicit desired responses through such devices as behavioral objectives, Competency-based learning, and skill development and training. Educational approaches such as Early Intensive Behavioral Intervention, curriculum-based measurement, and direct instruction have emerged from this model.
Transfer of learning
Transfer of learning is the idea that what one learns in school somehow carries over to situations different from that particular time and that particular setting. Transfer was amongst the first phenomena tested in educational psychology. Edward Lee Thorndike was a pioneer in transfer research. He found that though transfer is extremely important for learning, it is a rarely occurring phenomenon. In fact, he conducted an experiment in which he had subjects estimate the size of a specific shape and then switched the shape. He found that the prior information did not help the subjects; instead it impeded their learning.
One explanation of why transfer does not occur often involves surface structure and deep structure. The surface structure is the way a problem is framed. The deep structure is the steps for the solution. For example, when a math story problem changes contexts from asking how much it costs to reseed a lawn to how much it costs to varnish a table, they have different surface structures, but the steps for getting the answers are the same. However, many people are more influenced by the surface structure. In reality, the surface structure is unimportant. Nonetheless, people are concerned with it because they believe that it provides background knowledge on how to do the problem. Consequently, this interferes with their understanding of the deep structure of the problem. Even if somebody tries to concentrate on the deep structure, transfer still may be unsuccessful because the deep structure is not usually obvious. Therefore, surface structure gets in the way of people's ability to see the deep structure of the problem and transfer the knowledge they have learned to come up with a solution to a new problem.
Current learning pedagogies focus on conveying rote knowledge, independent of the context that gives it meaning. Because of this, students often struggle to transfer this stand-alone information into other aspects of their education. Students need much more than abstract concepts and self-contained knowledge; they need to be exposed to learning that is practiced in the context of authentic activity and culture. Critics of situated cognition, however, would argue that by discrediting stand-alone information, the transfer of knowledge across contextual boundaries becomes impossible. There must be a balance between situating knowledge while also grasping the deep structure of material, or the understanding of how one arrives to know such information.
Some theorists argue that transfer does not even occur at all. They believe that students transform what they have learned into the new context. They say that transfer is too much of a passive notion. They believe students, instead, transform their knowledge in an active way. Students don't simply carry over knowledge from the classroom, but they construct the knowledge in a way that they can understand it themselves. The learner changes the information they have learned to make it best adapt to the changing contexts that they use the knowledge in. This transformation process can occur when a learner feels motivated to use the knowledge—however, if the student does not find the transformation necessary, it is less likely that the knowledge will ever transform.
Techniques and benefits of transfer of learning
There are many different conditions that influence transfer of learning in the classroom. These conditions include features of the task, features of the learner, features of the organization and social context of the activity. The features of the task include practicing through simulations, problem-based learning, and knowledge and skills for implementing new plans. The features of learners include their ability to reflect on past experiences, their ability to participate in group discussions, practice skills, and participate in written discussions. All the unique features contribute to a student's ability to use transfer of learning. There are structural techniques that can aid learning transfer in the classroom. These structural strategies include hugging and bridging.
Hugging uses the technique of simulating an activity to encourage reflexive learning. An example of the hugging strategy is when a student practices teaching a lesson or when a student role plays with another student. These examples encourage critical thinking that engages the student and helps them understand what they are learning—one of the goals of transfer of learning and desirable difficulties.
Bridging is when instruction encourages thinking abstractly by helping to identify connections between ideas and to analyze those connections. An example is when a teacher lets the student analyze their past test results and the way they got those results. This includes amount of study time and study strategies. Looking at their past study strategies can help them come up with strategies to improve performance. These are some of the ideas important to successful to hugging and bridging practices.
There are many benefits of transfer of learning in the classroom. One of the main benefits is the ability to quickly learn a new task. This has many real-life applications such as language and speech processing. Transfer of learning is also very useful in teaching students to use higher cognitive thinking by applying their background knowledge to new situations.
Cognitivism
Gestalt theory
Cognitive theories grew out of Gestalt psychology. Gestalt psychology was developed in Germany in the early 1900s by Wolfgang Kohler and was brought to America in the 1920s. The German word Gestalt is roughly equivalent to the English "configuration" or "organization", and carries the sense of the emergence of a form (as in the game Pictionary, when one suddenly recognises what the other person is trying to convey: the form and meaning "emerge"); it emphasizes the whole of human experience. Over the years, the Gestalt psychologists provided demonstrations and described principles to explain the way we organize our sensations into perceptions. Max Wertheimer, one of the founding fathers of Gestalt theory, observed that sometimes we interpret motion when there is no motion at all. For example: a powered sign used at a convenience store to indicate that the store is open or closed might be seen as a sign with "constant light". However, the lights are actually flashing. Each light has been programmed to blink rapidly at its own individual pace. Perceived as a whole, however, the sign appears fully lit without flashes. If perceived individually, the lights turn off and on at designated times. Another example of this would be a brick house: as a whole, it is viewed as a standing structure. However, it is actually composed of many smaller parts, which are individual bricks. People tend to see things from a holistic point of view rather than breaking them down into sub-units.
In Gestalt theory, psychologists say that instead of obtaining knowledge from what's in front of us, we often learn by making sense of the relationship between what's new and old. Because we have a unique perspective of the world, humans have the ability to generate their own learning experiences and interpret information that may or may not be the same for someone else.
Gestalt psychologists criticize behaviorists for being too dependent on overt behavior to explain learning. They propose looking at the patterns rather than isolated events. Gestalt views of learning have been incorporated into what have come to be labeled cognitive theories. Two key assumptions underlie this cognitive approach: that the memory system is an active organized processor of information and that prior knowledge plays an important role in learning. Gestalt theorists believe that for learning to occur, prior knowledge must exist on the topic. When the learner applies their prior knowledge to the advanced topic, the learner can understand the meaning in the advanced topic, and learning can occur. Cognitive theories look beyond behavior to consider how human memory works to promote learning, and an understanding of short-term memory and long-term memory is important to educators influenced by cognitive theory. They view learning as an internal mental process (including insight, information processing, memory and perception) where the educator focuses on building intelligence and cognitive development. The individual learner is more important than the environment.
Other cognitive theories
Once memory theories like the Atkinson–Shiffrin memory model and Baddeley's working memory model were established as a theoretical framework in cognitive psychology, new cognitive frameworks of learning began to emerge during the 1970s, 80s, and 90s. Today, researchers are concentrating on topics like cognitive load and information processing theory. These theories of learning play a role in influencing instructional design. Cognitive theory is used to explain such topics as social role acquisition, intelligence and memory as related to age.
In the late twentieth century, situated cognition emerged as a theory that recognized current learning as primarily the transfer of decontextualized and formal knowledge. Bredo (1994) depicts situated cognition as "shifting the focus from individual in environment to individual and environment". In other words, individual cognition should be considered as intimately related with the context of social interactions and culturally constructed meaning. Learning through this perspective, in which knowing and doing become inseparable, becomes both applicable and whole.
Much of the education students receive is limited to the culture of schools, without consideration for authentic cultures outside of education. Curricula framed by situated cognition can bring knowledge to life by embedding the learned material within the culture students are familiar with. For example, the formal and abstract syntax of math problems can be transformed by placing a traditional math problem within a practical story problem. This presents an opportunity to strike an appropriate balance between situated and transferable knowledge. Lampert (1987) did this successfully by having students explore mathematical concepts that were continuous with their background knowledge. She began with money, which all students are familiar with, and then developed the lesson to include more complex stories that allowed students to see various solutions as well as create their own. In this way, knowledge becomes active, evolving as students participate and negotiate their way through new situations.
Constructivism
Founded by Jean Piaget, constructivism emphasizes the importance of the active involvement of learners in constructing knowledge for themselves. Students are thought to use background knowledge and concepts to assist them in their acquisition of novel information. On approaching such new information, the learner faces a loss of equilibrium with their previous understanding, and this demands a change in cognitive structure. This change effectively combines previous and novel information to form an improved cognitive schema. Constructivism can be both subjectively and contextually based. Under the theory of radical constructivism, coined by Ernst von Glasersfeld, understanding relies on one's subjective interpretation of experience as opposed to objective "reality". Similarly, William Cobern's idea of contextual constructivism encompasses the effects of culture and society on experience.
Constructivism asks why students do not learn deeply by listening to a teacher or reading from a textbook. To design effective teaching environments, constructivists hold, one needs a good understanding of what children already know when they come into the classroom. The curriculum should be designed in a way that builds on the pupil's background knowledge and is allowed to develop with them: begin with complex problems and teach basic skills while solving these problems. The learning theories of John Dewey, Maria Montessori, and David A. Kolb serve as the foundation of the application of constructivist learning theory in the classroom. Constructivism has many varieties, such as active learning, discovery learning, and knowledge building, but all versions promote a student's free exploration within a given framework or structure. The teacher acts as a facilitator who encourages students to discover principles for themselves and to construct knowledge by answering open-ended questions and solving real-world problems. To do this, a teacher should encourage curiosity and discussion among his or her students as well as promote their autonomy. In scientific areas in the classroom, constructivist teachers provide raw data and physical materials for the students to work with and analyze.
Transformative learning theory
Transformative learning theory seeks to explain how humans revise and reinterpret meaning. Transformative learning is the cognitive process of effecting change in a frame of reference, which defines our view of the world. Emotions are often involved, and adults have a tendency to reject any ideas that do not correspond to their particular values, associations, and concepts.
Our frames of reference are composed of two dimensions: habits of mind and points of view. Habits of mind, such as ethnocentrism, are harder to change than points of view. Habits of mind influence our point of view and the resulting thoughts or feelings associated with them, but points of view may change over time as a result of influences such as reflection, appropriation and feedback. Transformative learning takes place by discussing with others the "reasons presented in support of competing interpretations, by critically examining evidence, arguments, and alternative points of view". When circumstances permit, transformative learners move toward a frame of reference that is more inclusive, discriminating, self-reflective, and integrative of experience.
Educational neuroscience
American universities such as Harvard, Johns Hopkins, and the University of Southern California began offering majors and degrees dedicated to educational neuroscience or neuroeducation in the first decade of the twenty-first century. Such studies seek to link an understanding of brain processes with classroom instruction and experiences. Neuroeducation analyzes biological changes in the brain from processing new information. It looks at what environmental, emotional, and social situations best help the brain store and retain new information via the linking of neurons, and best keep the dendrites from being reabsorbed, losing the information. The 1990s were designated "The Decade of the Brain", and advances took place in neuroscience at an especially rapid pace. The three dominant methods for measuring brain activities are event-related potential, functional magnetic resonance imaging and magnetoencephalography (MEG).
The integration and application to education of what we know about the brain was strengthened in 2000 when the American Federation of Teachers stated: "It is vital that we identify what science tells us about how people learn in order to improve the education curriculum." What is exciting about this new field in education is that modern brain imaging techniques now make it possible, in some sense, to watch the brain as it learns, and the question then arises: can the results of neuro-scientific studies of brains as they are learning usefully inform practice in this area? The neuroscience field is young. Researchers expect that new technologies and ways of observing will produce new scientific evidence that helps refine the paradigms of what students need and how they learn best. In particular, it may bring more informed strategies for teaching students with learning disabilities.
Formal and mental discipline
All individuals have the ability to develop mental discipline and the skill of mindfulness; the two go hand in hand. Mental discipline plays a major role in shaping what people do, say, think, and feel. It is critical to the processing of information and involves the ability to recognize and respond appropriately to new things and information people come across or have recently been taught. Mindfulness is important to the process of learning in many respects. Being mindful means being present with and engaged in whatever one is doing at a specific moment in time. Being mindful can help us think more critically about, feel, and understand the new information we are in the process of absorbing.
The formal discipline approach holds that the mind is advanced by exercising it through exposure to abstract school subjects such as science, language, and mathematics. Given students' repeated exposure to these subjects, some scholars feel that the acquisition of knowledge pertaining to science, language, and math is of "secondary importance" and believe that the strengthening and further development of the mind that this curriculum provides holds far greater significance for the progressing learner in the long run. D.C. Phillips and Jonas F. Soltis express some skepticism toward this notion. Their skepticism stems largely from the feeling that the relationship between formal discipline and the overall advancement of the mind is not as strong as some would claim. They illustrate this skepticism by opining that it is foolish to blindly assume that people are better off in life, or at performing certain tasks, because of taking particular, yet unrelated, courses.
Multiple intelligences
The existence of multiple intelligences is proposed by psychologist Howard Gardner, who suggests that different kinds of intelligence exist in human beings. The theory has been fashionable in continuous professional development (CPD) training courses for teachers. However, the theory of multiple intelligences is often cited as an example of pseudoscience because it lacks empirical evidence and falsifiability.
Multimedia learning
Multimedia learning refers to the use of visual and auditory teaching materials that may include video, computer and other information technology. Multimedia learning theory focuses on the principles that determine the effective use of multimedia in learning, with emphasis on using both the visual and auditory channels for information processing.
The auditory channel deals with information that is heard, and the visual channel processes information that is seen. The visual channel holds less information than the auditory channel. If both the visual and auditory channels are presented with information, more knowledge is retained. However, if too much information is delivered it is inadequately processed, and long-term memory is not acquired. Multimedia learning seeks to give instructors the ability to stimulate both the visual and auditory channels of the learner, resulting in better progress.
Using online games for learning
Many educators and researchers believe that information technology could bring innovation to traditional educational instruction. Teachers and technologists are searching for new and innovative ways to design learner-centered learning environments effectively, trying to engage learners more in the learning process. Claims have been made that online games have the potential to teach, train, and educate, and that they are effective means for learning skills and attitudes that are not easy to learn by rote memorization.
Much research has been done to identify the learning effectiveness of game-based learning. Learner characteristics and cognitive learning outcomes have been identified as the key factors in research on the implementation of games in educational settings. In the process of learning a language through an online game, there is a strong relationship between the learner's prior knowledge of that language and their cognitive learning outcomes. For people with prior knowledge of the language, the games are much more effective than for those with little or no knowledge of it.
Other learning theories
Other learning theories have also been developed for more specific purposes. For example, andragogy is the art and science of helping adults learn. Connectivism is a recent theory of networked learning, which focuses on learning as making connections. The Learning as a Network (LaaN) theory builds upon connectivism, complexity theory, and double-loop learning. It starts from the learner and views learning as the continuous creation of a personal knowledge network (PKN).
Learning style theories
Learning style theories propose that individuals learn in different ways, that there are distinct learning styles, and that knowledge of a learner's preferred learning style leads to faster and more satisfactory improvement. However, current research has not found solid scientific evidence to support the main premises of learning styles theory.
Affective Context Model
People remember how things made them feel, and use those emotional imprints to create memories on demand.
Informal and post-modern theories
In theories that make use of cognitive restructuring, an informal curriculum promotes the use of prior knowledge to help students gain a broad understanding of concepts. New knowledge cannot simply be told to students, on this view; rather, the students' current knowledge must be challenged. In this way, students adjust their ideas to more closely resemble actual theories or concepts. By using this method, students gain a broad understanding and are later more willing to learn and retain the specifics of the concept or theory. This theory further aligns with the idea that teaching the concepts and the language of a subject should be split into multiple steps.
Other informal learning theories look at the sources of motivation for learning. Intrinsic motivation may create a more self-regulated learner, yet schools undermine intrinsic motivation. Critics argue that the average student learning in isolation performs significantly less well than those learning with collaboration and mediation. Students learn through talk, discussion, and argumentation.
Educational anthropology
Philosophical anthropology
According to Theodora Polito, "every well-constructed theory of education [has] at [its] center a philosophical anthropology," which is "a philosophical reflection on some basic problems of mankind." Philosophical anthropology is an exploration of human nature and humanity. Aristotle, an early influence on the field, deemed human nature to be "rational animality," wherein humans are closely related to other animals but still set apart by their ability to form rational thought. Philosophical anthropology expanded upon these ideas by clarifying that rationality is, "determined by the biological and social conditions in which the lives of human beings are embedded." Fully developed learning theories address some of the "basic problems of mankind" by examining these biological and social conditions to understand and manipulate the rationality of humanity in the context of learning.
Philosophical anthropology is evident in behaviorism, which requires an understanding of humanity and human nature in order to assert that the similarities between humans and other animals are critical and influential to the process of learning. Situated cognition focuses on how humans interact with each other and their environments, which would be considered the "social conditions" explored within the field of philosophical anthropology. Transformative learning theories operate with the assumption that humans are rational creatures capable of examining and redefining perspectives, something that is heavily considered within philosophical anthropology.
An awareness and understanding of philosophical anthropology contributes to a greater comprehension and practice of any learning theory. In some cases, philosophy can be used to further explore and define uncertain terms within the field of education. Philosophy can also be a vehicle to explore the purpose of education, which can greatly influence an educational theory.
Criticism
Critics of learning theories that seek to displace traditional educational practices claim that there is no need for such theories; that the attempt to comprehend the process of learning through the construction of theories creates problems and inhibits personal freedom.
See also
Andragogical learning theory
Cognitivism (learning theory)
Connectivism (learning theory)
Constructivism (learning theory)
Cultural-historical psychology
Evidence-based education
Instructional theory
Instructional design
Kinesthetic learning
Learning by teaching
Learning environment
Learning space
Psychology of learning
Science, technology, society and environment education
About accelerating the learning process
Cognitive acceleration
Spaced repetition
Incremental reading
Forward testing effect
About the mechanisms of memory and learning
Neural networks in the brain
Sleep and learning
Latent learning
Memory consolidation
Short-term memory versus working memory
Long-term memory
Desirable difficulties
Declarative memory versus procedural memory
The cerebellum and motor learning
About learning theories related to classroom learning
Contemporary Educational Psychology/Chapter 2: The Learning Process
External links
Social Science Research Network. How to Become an Expert Law Teacher by Understanding the Neurobiology of Learning
ERIC Digest. How People Learn (and What Technology Might Have To Do with It)
Thought | In their most common sense, the terms thought and thinking refer to cognitive processes that can happen independently of sensory stimulation. Their most paradigmatic forms are judging, reasoning, concept formation, problem solving, and deliberation. But other mental processes, like considering an idea, memory, or imagination, are also often included. These processes can happen internally independent of the sensory organs, unlike perception. But when understood in the widest sense, any mental event may be understood as a form of thinking, including perception and unconscious mental processes. In a slightly different sense, the term thought refers not to the mental processes themselves but to mental states or systems of ideas brought about by these processes.
Various theories of thinking have been proposed, some of which aim to capture the characteristic features of thought. Platonists hold that thinking consists in discerning and inspecting Platonic forms and their interrelations. It involves the ability to discriminate between the pure Platonic forms themselves and the mere imitations found in the sensory world. According to Aristotelianism, to think about something is to instantiate in one's mind the universal essence of the object of thought. These universals are abstracted from sense experience and are not understood as existing in a changeless intelligible world, in contrast to Platonism. Conceptualism is closely related to Aristotelianism: it identifies thinking with mentally evoking concepts instead of instantiating essences. Inner speech theories claim that thinking is a form of inner speech in which words are silently expressed in the thinker's mind. According to some accounts, this happens in a regular language, like English or French. The language of thought hypothesis, on the other hand, holds that this happens in the medium of a unique mental language called Mentalese. Central to this idea is that linguistic representational systems are built up from atomic and compound representations and that this structure is also found in thought. Associationists understand thinking as the succession of ideas or images. They are particularly interested in the laws of association that govern how the train of thought unfolds. Behaviorists, by contrast, identify thinking with behavioral dispositions to engage in public intelligent behavior as a reaction to particular external stimuli. Computationalism is the most recent of these theories. It sees thinking in analogy to how computers work in terms of the storage, transmission, and processing of information.
Various types of thinking are discussed in academic literature. A judgment is a mental operation in which a proposition is evoked and then either affirmed or denied. Reasoning, on the other hand, is the process of drawing conclusions from premises or evidence. Both judging and reasoning depend on the possession of the relevant concepts, which are acquired in the process of concept formation. In the case of problem solving, thinking aims at reaching a predefined goal by overcoming certain obstacles. Deliberation is an important form of practical thought that consists in formulating possible courses of action and assessing the reasons for and against them. This may lead to a decision by choosing the most favorable option. Both episodic memory and imagination present objects and situations internally, in an attempt to accurately reproduce what was previously experienced or as a free rearrangement, respectively. Unconscious thought is thought that happens without being directly experienced. It is sometimes posited to explain how difficult problems are solved in cases where no conscious thought was employed.
Thought is discussed in various academic disciplines. Phenomenology is interested in the experience of thinking. An important question in this field concerns the experiential character of thinking and to what extent this character can be explained in terms of sensory experience. Metaphysics is, among other things, interested in the relation between mind and matter. This concerns the question of how thinking can fit into the material world as described by the natural sciences. Cognitive psychology aims to understand thought as a form of information processing. Developmental psychology, on the other hand, investigates the development of thought from birth to maturity and asks which factors this development depends on. Psychoanalysis emphasizes the role of the unconscious in mental life. Other fields concerned with thought include linguistics, neuroscience, artificial intelligence, biology, and sociology. Various concepts and theories are closely related to the topic of thought. The term "law of thought" refers to three fundamental laws of logic: the law of contradiction, the law of excluded middle, and the principle of identity. Counterfactual thinking involves mental representations of non-actual situations and events in which the thinker tries to assess what would be the case if things had been different. Thought experiments often employ counterfactual thinking in order to illustrate theories or to test their plausibility. Critical thinking is a form of thinking that is reasonable, reflective, and focused on determining what to believe or how to act. Positive thinking involves focusing one's attention on the positive aspects of one's situation and is intimately related to optimism.
Definition
The terms "thought" and "thinking" refer to a wide variety of psychological activities. In their most common sense, they are understood as conscious processes that can happen independently of sensory stimulation. This includes various different mental processes, like considering an idea or proposition or judging it to be true. In this sense, memory and imagination are forms of thought but perception is not. In a more restricted sense, only the most paradigmatic cases are considered thought. These involve conscious processes that are conceptual or linguistic and sufficiently abstract, like judging, inferring, problem solving, and deliberating. Sometimes the terms "thought" and "thinking" are understood in a very wide sense as referring to any form of mental process, conscious or unconscious. In this sense, they may be used synonymously with the term "mind". This usage is encountered, for example, in the Cartesian tradition, where minds are understood as thinking things, and in the cognitive sciences. But this sense may include the restriction that such processes have to lead to intelligent behavior to be considered thought. A contrast sometimes found in the academic literature is that between thinking and feeling. In this context, thinking is associated with a sober, dispassionate, and rational approach to its topic while feeling involves a direct emotional engagement.
The terms "thought" and "thinking" can also be used to refer not to the mental processes themselves but to mental states or systems of ideas brought about by these processes. In this sense, they are often synonymous with the term "belief" and its cognates and may refer to the mental states which either belong to an individual or are common among a certain group of people. Discussions of thought in the academic literature often leave it implicit which sense of the term they have in mind.
The word thought comes from Old English þoht, or geþoht, from the stem of þencan "to conceive of in the mind, consider".
Theories of thinking
Various theories of thinking have been proposed. They aim to capture the characteristic features of thinking. The theories listed here are not exclusive: it may be possible to combine some without leading to a contradiction.
Platonism
According to Platonism, thinking is a spiritual activity in which Platonic forms and their interrelations are discerned and inspected. This activity is understood as a form of silent inner speech in which the soul talks to itself. Platonic forms are seen as universals that exist in a changeless realm different from the sensible world. Examples include the forms of goodness, beauty, unity, and sameness. On this view, the difficulty of thinking consists in being unable to grasp the Platonic forms and to distinguish them as the original from the mere imitations found in the sensory world. This means, for example, distinguishing beauty itself from derivative images of beauty. One problem for this view is to explain how humans can learn and think about Platonic forms belonging to a different realm. Plato himself tries to solve this problem through his theory of recollection, according to which the soul already was in contact with the Platonic forms before and is therefore able to remember what they are like. But this explanation depends on various assumptions usually not accepted in contemporary thought.
Aristotelianism and conceptualism
Aristotelians hold that the mind is able to think about something by instantiating the essence of the object of thought. So while thinking about trees, the mind instantiates tree-ness. This instantiation does not happen in matter, as is the case for actual trees, but in mind, though the universal essence instantiated in both cases is the same. In contrast to Platonism, these universals are not understood as Platonic forms existing in a changeless intelligible world. Instead, they only exist to the extent that they are instantiated. The mind learns to discriminate universals through abstraction from experience. This explanation avoids various of the objections raised against Platonism.
Conceptualism is closely related to Aristotelianism. It states that thinking consists in mentally evoking concepts. Some of these concepts may be innate, but most have to be learned through abstraction from sense experience before they can be used in thought.
It has been argued against these views that they have problems in accounting for the logical form of thought. For example, to think that it will either rain or snow, it is not sufficient to instantiate the essences of rain and snow or to evoke the corresponding concepts. The reason for this is that the disjunctive relation between the rain and the snow is not captured this way. Another problem shared by these positions is the difficulty of giving a satisfying account of how essences or concepts are learned by the mind through abstraction.
Inner speech theory
Inner speech theories claim that thinking is a form of inner speech. This view is sometimes termed psychological nominalism. It states that thinking involves silently evoking words and connecting them to form mental sentences. The knowledge a person has of their thoughts can be explained as a form of overhearing one's own silent monologue. Three central aspects are often ascribed to inner speech: it is in an important sense similar to hearing sounds, it involves the use of language and it constitutes a motor plan that could be used for actual speech. This connection to language is supported by the fact that thinking is often accompanied by muscle activity in the speech organs. This activity may facilitate thinking in certain cases but is not necessary for it in general. According to some accounts, thinking happens not in a regular language, like English or French, but has its own type of language with the corresponding symbols and syntax. This theory is known as the language of thought hypothesis.
Inner speech theory has a strong initial plausibility since introspection suggests that indeed many thoughts are accompanied by inner speech. But its opponents usually contend that this is not true for all types of thinking. It has been argued, for example, that forms of daydreaming constitute non-linguistic thought. This issue is relevant to the question of whether animals have the capacity to think. If thinking is necessarily tied to language then this would suggest that there is an important gap between humans and animals since only humans have a sufficiently complex language. But the existence of non-linguistic thoughts suggests that this gap may not be that big and that some animals do indeed think.
Language of thought hypothesis
There are various theories about the relation between language and thought. One prominent version in contemporary philosophy is called the language of thought hypothesis. It states that thinking happens in the medium of a mental language. This language, often referred to as Mentalese, is similar to regular languages in various respects: it is composed of words that are connected to each other in syntactic ways to form sentences. This claim does not merely rest on an intuitive analogy between language and thought. Instead, it provides a clear definition of the features a representational system has to embody in order to have a linguistic structure. On the level of syntax, the representational system has to possess two types of representations: atomic and compound representations. Atomic representations are basic whereas compound representations are constituted either by other compound representations or by atomic representations. On the level of semantics, the semantic content or the meaning of the compound representations should depend on the semantic contents of its constituents. A representational system is linguistically structured if it fulfills these two requirements.
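The two requirements can be made concrete with a small sketch. The following is only an illustration of what it means for a representational system to be linguistically structured, not a claim about how the brain implements such a system; all names are hypothetical.

```python
# A toy representational system with atomic and compound representations.
# Syntax: compounds are built from atoms or from other compounds.
# Semantics: the content of a compound depends only on its constituents.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str                  # an atomic representation, e.g. "rain"

@dataclass(frozen=True)
class Or:
    left: object               # a compound representation
    right: object

def meaning(rep, world):
    """Compositional semantics: look up atoms directly, compute
    compounds from the meanings of their parts."""
    if isinstance(rep, Atom):
        return world[rep.name]
    if isinstance(rep, Or):
        return meaning(rep.left, world) or meaning(rep.right, world)
    raise TypeError("unknown representation")

# "It will rain or it will snow": the disjunctive structure lives in
# the compound, not in the atoms themselves.
thought = Or(Atom("rain"), Atom("snow"))
print(meaning(thought, {"rain": False, "snow": True}))  # True
```

Note how this also answers the objection raised against conceptualism above: the disjunctive relation is carried by the compound's structure, not by the constituent concepts alone.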
The language of thought hypothesis states that the same is true for thinking in general. This would mean that thought is composed of certain atomic representational constituents that can be combined as described above. Apart from this abstract characterization, no further concrete claims are made about how human thought is implemented by the brain or which other similarities to natural language it has. The language of thought hypothesis was first introduced by Jerry Fodor. He argues in favor of this claim by holding that it constitutes the best explanation of the characteristic features of thinking. One of these features is productivity: a system of representations is productive if it can generate an infinite number of unique representations based on a low number of atomic representations. This applies to thought since human beings are capable of entertaining an infinite number of distinct thoughts even though their mental capacities are quite limited. Other characteristic features of thinking include systematicity and inferential coherence. Fodor argues that the language of thought hypothesis is true as it explains how thought can have these features and because there is no good alternative explanation. Some arguments against the language of thought hypothesis are based on neural networks, which are able to produce intelligent behavior without depending on representational systems. Other objections focus on the idea that some mental representations happen non-linguistically, for example, in the form of maps or images.
Computationalists have been especially interested in the language of thought hypothesis since it provides ways to close the gap between thought in the human brain and computational processes implemented by computers. The reason for this is that processes over representations that respect syntax and semantics, like inferences according to the modus ponens, can be implemented by physical systems using causal relations. The same linguistic systems may be implemented through different material systems, like brains or computers. In this way, computers can think.
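As a rough illustration of this point, the following sketch applies the modus ponens rule purely by matching the syntactic shape of symbolic representations, the kind of formal operation a physical system could realize causally. The encoding is invented for the example.

```python
# Syntax-driven inference: repeatedly apply modus ponens
# ("from P and 'if P then Q', derive Q") by inspecting only the
# form of the representations, never their meaning.

def modus_ponens_closure(premises):
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for p in list(derived):
            # conditionals are encoded as ("if", antecedent, consequent)
            if isinstance(p, tuple) and p[0] == "if" and p[1] in derived:
                if p[2] not in derived:
                    derived.add(p[2])
                    changed = True
    return derived

premises = {
    "socrates_is_a_man",
    ("if", "socrates_is_a_man", "socrates_is_mortal"),
}
print("socrates_is_mortal" in modus_ponens_closure(premises))  # True
```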
Associationism
An important view in the empiricist tradition has been associationism, the view that thinking consists in the succession of ideas or images. This succession is seen as being governed by laws of association, which determine how the train of thought unfolds. These laws are different from logical relations between the contents of thoughts, which are found in the case of drawing inferences by moving from the thought of the premises to the thought of the conclusion. Various laws of association have been suggested. According to the laws of similarity and contrast, ideas tend to evoke other ideas that are either very similar to them or their opposite. The law of contiguity, on the other hand, states that if two ideas were frequently experienced together, then the experience of one tends to cause the experience of the other. In this sense, the history of an organism's experience determines which thoughts the organism has and how these thoughts unfold. But such an association does not guarantee that the connection is meaningful or rational. For example, because of the association between the terms "cold" and "Idaho", the thought "this coffee shop is cold" might lead to the thought "Russia should annex Idaho".
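A toy sketch can make the idea of association-driven thought concrete. The weights below are invented; the point is only that the next idea is selected by associative strength rather than by any logical relation between contents.

```python
# The law of contiguity, caricatured: ideas experienced together become
# linked, and the strongest link determines the next idea in the
# train of thought.

associations = {
    "coffee":  {"warm": 0.6, "morning": 0.8},
    "morning": {"sunrise": 0.7, "alarm": 0.5},
    "sunrise": {"horizon": 0.4},
}

def train_of_thought(start, steps=3):
    chain = [start]
    for _ in range(steps):
        links = associations.get(chain[-1])
        if not links:
            break
        chain.append(max(links, key=links.get))  # follow the strongest link
    return chain

print(train_of_thought("coffee"))  # ['coffee', 'morning', 'sunrise', 'horizon']
```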
One form of associationism is imagism. It states that thinking involves entertaining a sequence of images where earlier images conjure up later images based on the laws of association. One problem with this view is that we can think about things that we cannot imagine. This is especially relevant when the thought involves very complex objects or infinities, which is common, for example, in mathematical thought. One criticism directed at associationism in general is that its claim is too far-reaching. There is wide agreement that associative processes as studied by associationists play some role in how thought unfolds. But the claim that this mechanism is sufficient to understand all thought or all mental processes is usually not accepted.
Behaviorism
According to behaviorism, thinking consists in behavioral dispositions to engage in certain publicly observable behavior as a reaction to particular external stimuli. On this view, having a particular thought is the same as having a disposition to behave in a certain way. This view is often motivated by empirical considerations: it is very difficult to study thinking as a private mental process but it is much easier to study how organisms react to a certain situation with a given behavior. In this sense, the capacity to solve problems not through existing habits but through creative new approaches is particularly relevant. The term "behaviorism" is also sometimes used in a slightly different sense when applied to thinking to refer to a specific form of inner speech theory. This view focuses on the idea that the relevant inner speech is a derivative form of regular outward speech. This sense overlaps with how behaviorism is understood more commonly in philosophy of mind since these inner speech acts are not observed by the researcher but merely inferred from the subject's intelligent behavior. This remains true to the general behaviorist principle that behavioral evidence is required for any psychological hypothesis.
One problem for behaviorism is that the same entity often behaves differently despite being in the same situation as before. This problem consists in the fact that individual thoughts or mental states usually do not correspond to one particular behavior. So thinking that the pie is tasty does not automatically lead to eating the pie, since various other mental states may still inhibit this behavior, for example, the belief that it would be impolite to do so or that the pie is poisoned.
Computationalism
Computationalist theories of thinking, often found in the cognitive sciences, understand thinking as a form of information processing. These views developed with the rise of computers in the second part of the 20th century, when various theorists saw thinking in analogy to computer operations. On such views, the information may be encoded differently in the brain, but in principle, the same operations take place there as well, corresponding to the storage, transmission, and processing of information. But while this analogy has some intuitive attraction, theorists have struggled to give a more explicit explanation of what computation is. A further problem consists in explaining the sense in which thinking is a form of computing. The traditionally dominant view defines computation in terms of Turing machines, though contemporary accounts often focus on neural networks for their analogies. A Turing machine is capable of executing any algorithm based on a few very basic principles, such as reading a symbol from a cell, writing a symbol to a cell, and executing instructions based on the symbols read. This way it is possible to perform deductive reasoning following the inference rules of formal logic as well as simulating many other functions of the mind, such as language processing, decision making, and motor control. But computationalism does not only claim that thinking is in some sense similar to computation. Instead, it is claimed that thinking just is a form of computation or that the mind is a Turing machine.
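The basic principles mentioned here (reading a symbol from a cell, writing a symbol to a cell, and executing state-dependent instructions) can be shown in a minimal sketch. The machine below is deliberately trivial: it inverts a binary string. It illustrates only the abstract model, not anything about how brains compute.

```python
# A minimal Turing machine: read the current cell, consult a finite
# transition table, write a symbol, move the head, and change state.

def run_turing_machine(tape, transitions, state="start"):
    cells = dict(enumerate(tape))        # tape cells indexed by position
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")    # "_" marks a blank cell
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Program: flip 0 <-> 1 while moving right; halt on a blank.
transitions = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("0110", transitions))  # "1001"
```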
Computationalist theories of thought are sometimes divided into functionalist and representationalist approaches. Functionalist approaches define mental states through their causal roles but allow both external and internal events in their causal network. Thought may be seen as a form of program that can be executed in the same way by many different systems, including humans, animals, and even robots. According to one such view, whether something is a thought only depends on its role "in producing further internal states and verbal outputs". Representationalism, on the other hand, focuses on the representational features of mental states and defines thoughts as sequences of intentional mental states. In this sense, computationalism is often combined with the language of thought hypothesis by interpreting these sequences as symbols whose order is governed by syntactic rules.
Various arguments have been raised against computationalism. In one sense, it seems trivial since almost any physical system can be described as executing computations and therefore as thinking. For example, it has been argued that the molecular movements in a regular wall can be understood as computing an algorithm since they are "isomorphic to the formal structure of the program" in question under the right interpretation. This would lead to the implausible conclusion that the wall is thinking. Another objection focuses on the idea that computationalism captures only some aspects of thought but is unable to account for other crucial aspects of human cognition.
Types of thinking
A great variety of types of thinking are discussed in the academic literature. A common approach divides them into those forms that aim at the creation of theoretical knowledge and those that aim at producing actions or correct decisions, but there is no universally accepted taxonomy summarizing all these types.
Entertaining, judging, and reasoning
Thinking is often identified with the act of judging. A judgment is a mental operation in which a proposition is evoked and then either affirmed or denied. It involves deciding what to believe and aims at determining whether the judged proposition is true or false. Various theories of judgment have been proposed. The traditionally dominant approach is the combination theory. It states that judgments consist in the combination of concepts. On this view, to judge that "all men are mortal" is to combine the concepts "man" and "mortal". The same concepts can be combined in different ways, corresponding to different forms of judgment, for example, as "some men are mortal" or "no man is mortal".
Other theories of judgment focus more on the relation between the judged proposition and reality. According to Franz Brentano, a judgment is either a belief or a disbelief in the existence of some entity. In this sense, there are only two fundamental forms of judgment: "A exists" and "A does not exist". When applied to the sentence "all men are mortal", the entity in question is "immortal men", of whom it is said that they do not exist. Important for Brentano is the distinction between the mere representation of the content of the judgment and the affirmation or the denial of the content. The mere representation of a proposition is often referred to as "entertaining a proposition". This is the case, for example, when one considers a proposition but has not yet made up one's mind about whether it is true or false. The term "thinking" can refer both to judging and to mere entertaining. This difference is often explicit in the way the thought is expressed: "thinking that" usually involves a judgment whereas "thinking about" refers to the neutral representation of a proposition without an accompanying belief. In this case, the proposition is merely entertained but not yet judged. Some forms of thinking may involve the representation of objects without any propositions, as when someone is thinking about their grandmother.
Reasoning is one of the most paradigmatic forms of thinking. It is the process of drawing conclusions from premises or evidence. Types of reasoning can be divided into deductive and non-deductive reasoning. Deductive reasoning is governed by certain rules of inference, which guarantee the truth of the conclusion if the premises are true. For example, given the premises "all men are mortal" and "Socrates is a man", it follows deductively that "Socrates is mortal". Non-deductive reasoning, also referred to as defeasible reasoning or non-monotonic reasoning, is still rationally compelling but the truth of the conclusion is not ensured by the truth of the premises. Induction is one form of non-deductive reasoning, for example, when one concludes that "the sun will rise tomorrow" based on one's experiences of all the previous days. Other forms of non-deductive reasoning include the inference to the best explanation and analogical reasoning.
Fallacies are faulty forms of thinking that go against the norms of correct reasoning. Formal fallacies concern faulty inferences found in deductive reasoning. Denying the antecedent is one type of formal fallacy, for example, "If Othello is a bachelor, then he is male. Othello is not a bachelor. Therefore, Othello is not male". Informal fallacies, on the other hand, apply to all types of reasoning. The source of their flaw is to be found in the content or the context of the argument. This is often caused by ambiguous or vague expressions in natural language, as in "Feathers are light. What is light cannot be dark. Therefore, feathers cannot be dark". An important aspect of fallacies is that they seem to be rationally compelling on the first look and thereby seduce people into accepting and committing them. Whether an act of reasoning constitutes a fallacy does not depend on whether the premises are true or false but on their relation to the conclusion and, in some cases, on the context.
Concept formation
Concepts are general notions that constitute the fundamental building blocks of thought. They are rules that govern how objects are sorted into different classes. A person can only think about a proposition if they possess the concepts involved in this proposition. For example, the proposition "wombats are animals" involves the concepts "wombat" and "animal". Someone who does not possess the concept "wombat" may still be able to read the sentence but cannot entertain the corresponding proposition. Concept formation is a form of thinking in which new concepts are acquired. It involves becoming familiar with the characteristic features shared by all instances of the corresponding type of entity and developing the ability to identify positive and negative cases. This process usually corresponds to learning the meaning of the word associated with the type in question. There are various theories concerning how concepts and concept possession are to be understood. The use of metaphor may aid in the processes of concept formation.
According to one popular view, concepts are to be understood in terms of abilities. On this view, two central aspects characterize concept possession: the ability to discriminate between positive and negative cases and the ability to draw inferences from this concept to related concepts. Concept formation corresponds to acquiring these abilities. It has been suggested that animals are also able to learn concepts to some extent, due to their ability to discriminate between different types of situations and to adjust their behavior accordingly.
Problem solving
In the case of problem solving, thinking aims at reaching a predefined goal by overcoming certain obstacles. This process often involves two different forms of thinking. On the one hand, divergent thinking aims at coming up with as many alternative solutions as possible. On the other hand, convergent thinking tries to narrow down the range of alternatives to the most promising candidates. Some researchers identify various steps in the process of problem solving. These steps include recognizing the problem, trying to understand its nature, identifying general criteria the solution should meet, deciding how these criteria should be prioritized, monitoring the progress, and evaluating the results.
An important distinction concerns the type of problem that is faced. For well-structured problems, it is easy to determine which steps need to be taken to solve them, but executing these steps may still be difficult. For ill-structured problems, on the other hand, it is not clear what steps need to be taken, i.e. there is no clear formula that would lead to success if followed correctly. In this case, the solution may sometimes come in a flash of insight in which the problem is suddenly seen in a new light. Another way to categorize different forms of problem solving is by distinguishing between algorithms and heuristics. An algorithm is a formal procedure in which each step is clearly defined. It guarantees success if applied correctly. The long multiplication usually taught in school is an example of an algorithm for solving the problem of multiplying big numbers. Heuristics, on the other hand, are informal procedures. They are rough rules of thumb that tend to bring the thinker closer to the solution, but success is not guaranteed in every case even if they are followed correctly. Examples of heuristics are working forward and working backward. These approaches involve planning one step at a time, either starting at the beginning and moving forward or starting at the end and moving backward. So when planning a trip, one could plan the different stages of the trip from origin to destination in the chronological order in which the trip will be realized, or in the reverse order.
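The contrast can be illustrated with the multiplication example. The procedure below is an algorithm in the stated sense: each step is fully specified, and correct execution guarantees a correct result, whereas a heuristic would carry no such guarantee. This is a minimal sketch of the school method, not the only way to formalize it.

```python
# Long multiplication as an algorithm: multiply a by each digit of b,
# shift the partial product, and add. Every step is determinate.

def long_multiply(a: int, b: int) -> int:
    result, shift = 0, 0
    while b > 0:
        digit = b % 10                        # next digit of b
        result += digit * a * (10 ** shift)   # shifted partial product
        b //= 10
        shift += 1
    return result

assert long_multiply(123, 456) == 123 * 456   # correct whenever followed
```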
Obstacles to problem solving can arise from the thinker's failure to take certain possibilities into account by fixating on one specific course of action. There are important differences between how novices and experts solve problems. For example, experts tend to allocate more time for conceptualizing the problem and work with more complex representations whereas novices tend to devote more time to executing putative solutions.
Deliberation and decision
Deliberation is an important form of practical thinking. It aims at formulating possible courses of action and assessing their value by considering the reasons for and against them. This involves foresight to anticipate what might happen. Based on this foresight, different courses of action can be formulated in order to influence what will happen. Decisions are an important part of deliberation. They are about comparing alternative courses of action and choosing the most favorable one. Decision theory is a formal model of how ideal rational agents would make decisions. It is based on the idea that they should always choose the alternative with the highest expected value. Each alternative can lead to various possible outcomes, each of which has a different value. The expected value of an alternative consists in the sum of the values of each outcome associated with it multiplied by the probability that this outcome occurs. According to decision theory, a decision is rational if the agent chooses the alternative associated with the highest expected value, as assessed from the agent's own perspective.
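The rule described here can be stated compactly. In generic notation (not drawn from any particular source), for an action a with possible outcomes o_1, ..., o_n, where P gives the probability of an outcome given the action and V its value:

```latex
\mathrm{EV}(a) = \sum_{i=1}^{n} P(o_i \mid a)\, V(o_i)
```

A decision is then rational, on this model, if the agent chooses the action for which EV(a) is highest.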
Various theorists emphasize the practical nature of thought, i.e. that thinking is usually guided by some kind of task it aims to solve. In this sense, thinking has been compared to trial-and-error seen in animal behavior when faced with a new problem. On this view, the important difference is that this process happens inwardly as a form of simulation. This process is often much more efficient since once the solution is found in thought, only the behavior corresponding to the found solution has to be outwardly carried out and not all the others.
Episodic memory and imagination
When thinking is understood in a wide sense, it includes both episodic memory and imagination. In episodic memory, events one experienced in the past are relived. It is a form of mental time travel in which the past experience is re-experienced. But this does not constitute an exact copy of the original experience since the episodic memory involves additional aspects and information not present in the original experience. This includes both a feeling of familiarity and chronological information about the past event in relation to the present. Memory aims at representing how things actually were in the past, in contrast to imagination, which presents objects without aiming to show how things actually are or were. Because of this missing link to actuality, more freedom is involved in most forms of imagination: its contents can be freely varied, changed, and recombined to create new arrangements never experienced before. Episodic memory and imagination have in common with other forms of thought that they can arise internally without any stimulation of the sensory organs. But they are still closer to sensation than more abstract forms of thought since they present sensory contents that could, at least in principle, also be perceived.
Unconscious thought
Conscious thought is the paradigmatic form of thinking and is often the focus of the corresponding research. But it has been argued that some forms of thought also happen on the unconscious level. Unconscious thought is thought that happens in the background without being experienced. It is therefore not observed directly. Instead, its existence is usually inferred by other means. For example, when someone is faced with an important decision or a difficult problem, they may not be able to solve it straight away. But then, at a later time, the solution may suddenly flash before them even though no conscious steps of thinking were taken towards this solution in the meantime. In such cases, the cognitive labor needed to arrive at a solution is often explained in terms of unconscious thoughts. The central idea is that a cognitive transition happened and we need to posit unconscious thoughts to be able to explain how it happened.
It has been argued that conscious and unconscious thoughts differ not just concerning their relation to experience but also concerning their capacities. According to unconscious thought theorists, for example, conscious thought excels at simple problems with few variables but is outperformed by unconscious thought when complex problems with many variables are involved. This is sometimes explained through the claim that the number of items one can consciously think about at the same time is rather limited whereas unconscious thought lacks such limitations. But other researchers have rejected the claim that unconscious thought is often superior to conscious thought. Other suggestions for the difference between the two forms of thinking include that conscious thought tends to follow formal logical laws while unconscious thought relies more on associative processing and that only conscious thinking is conceptually articulated and happens through the medium of language.
In various disciplines
Phenomenology
Phenomenology is the science of the structure and contents of experience. The term "cognitive phenomenology" refers to the experiential character of thinking or what it feels like to think. Some theorists claim that there is no distinctive cognitive phenomenology. On such a view, the experience of thinking is just one form of sensory experience. According to one version, thinking just involves hearing a voice internally. According to another, there is no experience of thinking apart from the indirect effects thinking has on sensory experience. A weaker version of such an approach allows that thinking may have a distinct phenomenology but contends that thinking still depends on sensory experience because it cannot occur on its own. On this view, sensory contents constitute the foundation from which thinking may arise.
An often-cited thought experiment in favor of the existence of a distinctive cognitive phenomenology involves two persons listening to a radio broadcast in French, one who understands French and the other who does not. The idea behind this example is that both listeners hear the same sounds and therefore have the same non-cognitive experience. In order to explain the difference, a distinctive cognitive phenomenology has to be posited: only the experience of the first person has this additional cognitive character since it is accompanied by a thought that corresponds to the meaning of what is said. Other arguments for the experience of thinking focus on the direct introspective access to thinking or on the thinker's knowledge of their own thoughts.
Phenomenologists are also concerned with the characteristic features of the experience of thinking. Making a judgment is one of the prototypical forms of cognitive phenomenology. It involves epistemic agency, in which a proposition is entertained, evidence for and against it is considered, and, based on this reasoning, the proposition is either affirmed or rejected. It is sometimes argued that the experience of truth is central to thinking, i.e. that thinking aims at representing how the world is. It shares this feature with perception but differs from it in how it represents the world: without the use of sensory contents.
One of the characteristic features often ascribed to thinking and judging is that they are predicative experiences, in contrast to the pre-predicative experience found in immediate perception. On such a view, various aspects of perceptual experience resemble judgments without being judgments in the strict sense. For example, the perceptual experience of the front of a house brings with it various expectations about aspects of the house not directly seen, like the size and shape of its other sides. This process is sometimes referred to as apperception. These expectations resemble judgments and can be wrong. This would be the case when it turns out upon walking around the "house" that it is no house at all but only a front facade of a house with nothing behind it. In this case, the perceptual expectations are frustrated and the perceiver is surprised. There is disagreement as to whether these pre-predicative aspects of regular perception should be understood as a form of cognitive phenomenology involving thinking. This issue is also important for understanding the relation between thought and language. The reason for this is that the pre-predicative expectations do not depend on language, which is sometimes taken as an example for non-linguistic thought. Various theorists have argued that pre-predicative experience is more basic or fundamental since predicative experience is in some sense built on top of it and therefore depends on it.
Another way in which phenomenologists have tried to distinguish the experience of thinking from other types of experiences is in relation to empty intentions in contrast to intuitive intentions. In this context, "intention" means that some kind of object is experienced. In intuitive intentions, the object is presented through sensory contents. Empty intentions, on the other hand, present their object in a more abstract manner without the help of sensory contents. So when perceiving a sunset, it is presented through sensory contents. The same sunset can also be presented non-intuitively when merely thinking about it without the help of sensory contents. In these cases, the same properties are ascribed to objects. The difference between these modes of presentation concerns not what properties are ascribed to the presented object but how the object is presented. Because of this commonality, it is possible for representations belonging to different modes to overlap or to diverge. For example, when searching for one's glasses one may think to oneself that one left them on the kitchen table. This empty intention of the glasses lying on the kitchen table is then intuitively fulfilled when one sees them lying there upon arriving in the kitchen. This way, a perception can confirm or refute a thought depending on whether the empty intentions are later fulfilled or not.
Metaphysics
The mind–body problem concerns the explanation of the relationship that exists between minds, or mental processes, and bodily states or processes. The main aim of philosophers working in this area is to determine the nature of the mind and mental states/processes, and how—or even if—minds are affected by and can affect the body.
Human perceptual experiences depend on stimuli which arrive at one's various sensory organs from the external world and these stimuli cause changes in one's mental state, ultimately causing one to feel a sensation, which may be pleasant or unpleasant. Someone's desire for a slice of pizza, for example, will tend to cause that person to move his or her body in a specific manner and in a specific direction to obtain what he or she wants. The question, then, is how it can be possible for conscious experiences to arise out of a lump of gray matter endowed with nothing but electrochemical properties. A related problem is to explain how someone's propositional attitudes (e.g. beliefs and desires) can cause that individual's neurons to fire and his muscles to contract in exactly the correct manner. These comprise some of the puzzles that have confronted epistemologists and philosophers of mind from at least the time of René Descartes.
The above reflects a classical, functional description of how we work as cognitive, thinking systems. However, the apparently irresolvable mind–body problem is said to be overcome, and bypassed, by the embodied cognition approach, with its roots in the work of Heidegger, Piaget, Vygotsky, Merleau-Ponty and the pragmatist John Dewey.
This approach states that the classical approach of separating the mind and analysing its processes is misguided: instead, we should see that the mind, actions of an embodied agent, and the environment it perceives and envisions, are all parts of a whole which determine each other. Therefore, functional analysis of the mind alone will always leave us with the mind–body problem which cannot be solved.
Psychology
Psychologists have concentrated on thinking as an intellectual exertion aimed at finding an answer to a question or the solution of a practical problem. Cognitive psychology is a branch of psychology that investigates internal mental processes such as problem solving, memory, and language, all of which are used in thinking. The school of thought arising from this approach is known as cognitivism, which is interested in how people mentally represent and process information. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who provided a theory of stages/phases that describes children's cognitive development.
Cognitive psychologists use psychophysical and experimental approaches to understand, diagnose, and solve problems, concerning themselves with the mental processes which mediate between stimulus and response. They study various aspects of thinking, including the psychology of reasoning and how people make decisions and choices, solve problems, and engage in creative discovery and imaginative thought. Cognitive theory contends that solutions to problems take the form either of algorithms (rules that are not necessarily understood but promise a solution) or of heuristics (rules that are understood but do not always guarantee solutions). Cognitive science differs from cognitive psychology in that the algorithms intended to simulate human behavior are implemented or implementable on a computer. In other instances, solutions may be found through insight, a sudden awareness of relationships.
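The algorithm–heuristic contrast can be made concrete with a small sketch, as in the Python example below. It is illustrative only and not drawn from the psychological literature; the word list and the suffix rule are invented assumptions:

```python
from itertools import permutations

# A tiny stand-in dictionary, invented for this illustration.
DICTIONARY = {"listen", "silent", "enlist"}

def solve_by_algorithm(letters):
    # Algorithm: exhaustively try every ordering of the letters. If a
    # dictionary word can be formed, this is guaranteed to find one,
    # whether or not the solver understands why the procedure works.
    for perm in permutations(letters):
        word = "".join(perm)
        if word in DICTIONARY:
            return word
    return None

def solve_by_heuristic(letters):
    # Heuristic: only consider candidates ending in "-ent", a common
    # English pattern. Easy to understand, but it carries no guarantee.
    for perm in permutations(letters):
        word = "".join(perm)
        if word.endswith("ent") and word in DICTIONARY:
            return word
    return None

print(solve_by_algorithm("nlsiet"))   # finds an anagram, e.g. "listen"
print(solve_by_heuristic("nlsiet"))   # finds "silent" here, but may miss in general
```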
In developmental psychology, Jean Piaget was a pioneer in the study of the development of thought from birth to maturity. In his theory of cognitive development, thought is based on actions on the environment. That is, Piaget suggests that the environment is understood through assimilation of objects into the available schemes of action, and that these schemes accommodate to the objects to the extent that they fall short of the objects' demands. As a result of this interplay between assimilation and accommodation, thought develops through a sequence of stages that differ qualitatively from each other in mode of representation and complexity of inference and understanding. That is, thought evolves from being based on perceptions and actions at the sensorimotor stage in the first two years of life to internal representations in early childhood. Subsequently, representations are gradually organized into logical structures which first operate on the concrete properties of reality, in the stage of concrete operations, and then operate on abstract principles that organize concrete properties, in the stage of formal operations. In recent years, the Piagetian conception of thought has been integrated with information-processing conceptions. Thus, thought is considered as the result of mechanisms that are responsible for the representation and processing of information. In this conception, speed of processing, cognitive control, and working memory are the main functions underlying thought. In the neo-Piagetian theories of cognitive development, the development of thought is considered to come from increasing speed of processing, enhanced cognitive control, and increasing working memory.
Positive psychology emphasizes the positive aspects of human psychology as equally important as the focus on mood disorders and other negative symptoms. In Character Strengths and Virtues, Peterson and Seligman list a series of positive characteristics. One person is not expected to have every strength, nor are they meant to encapsulate any characteristic entirely. The list encourages positive thought that builds on a person's strengths, rather than on how to "fix" their "symptoms".
Psychoanalysis
The "id", "ego" and "super-ego" are the three parts of the "psychic apparatus" defined in Sigmund Freud's structural model of the psyche; they are the three theoretical constructs in terms of whose activity and interaction mental life is described. According to this model, the uncoordinated instinctual trends are encompassed by the "id", the organized realistic part of the psyche is the "ego", and the critical, moralizing function is the "super-ego".
For psychoanalysis, the unconscious does not include all that is not conscious, but rather only what is actively repressed from conscious thought or what the person is averse to knowing consciously. In a sense this view places the self in relationship to its unconscious as an adversary, warring with itself to keep what is unconscious hidden. If a person feels pain, all he can think of is alleviating that pain; any of his desires, to get rid of pain or to enjoy something, dictate what the mind is to do. For Freud, the unconscious was a repository for socially unacceptable ideas, wishes or desires, traumatic memories, and painful emotions put out of mind by the mechanism of psychological repression. However, the contents did not necessarily have to be solely negative. In the psychoanalytic view, the unconscious is a force that can only be recognized by its effects: it expresses itself in the symptom.
The collective unconscious, sometimes known as collective subconscious, is a term of analytical psychology, coined by Carl Jung. It is a part of the unconscious mind, shared by a society, a people, or all humanity, in an interconnected system that is the product of all common experiences and contains such concepts as science, religion, and morality. While Freud did not distinguish between "individual psychology" and "collective psychology", Jung distinguished the collective unconscious from the personal subconscious particular to each human being. The collective unconscious is also known as "a reservoir of the experiences of our species".
In the "Definitions" chapter of Jung's seminal work Psychological Types, under the definition of "collective" Jung references representations collectives, a term coined by Lucien Lévy-Bruhl in his 1910 book How Natives Think. Jung says this is what he describes as the collective unconscious. Freud, on the other hand, did not accept the idea of a collective unconscious.
Related concepts and theories
Laws of thought
Traditionally, the term "laws of thought" refers to three fundamental laws of logic: the law of contradiction, the law of excluded middle, and the principle of identity. These laws by themselves are not sufficient as axioms of logic but they can be seen as important precursors to the modern axiomatization of logic. The law of contradiction states that for any proposition, it is impossible that both it and its negation are true: ¬(p ∧ ¬p). According to the law of excluded middle, for any proposition, either it or its opposite is true: p ∨ ¬p. The principle of identity asserts that any object is identical to itself: a = a. There are different conceptions of how the laws of thought are to be understood. The interpretations most relevant to thinking are to understand them as prescriptive laws of how one should think or as formal laws of propositions that are true only because of their form and independent of their content or context. Metaphysical interpretations, on the other hand, see them as expressing the nature of "being as such".
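For illustration, the three laws can be stated and verified in a proof assistant; the following Lean 4 sketch is only an aside, not part of the traditional discussion. Note that the law of excluded middle has to be supplied by the classical axiom Classical.em, which connects to the intuitionist dissent discussed below:

```lean
-- The three classical laws of thought, stated as Lean 4 theorems.

-- Law of contradiction: a proposition and its negation cannot both hold.
theorem law_of_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- Law of excluded middle: not provable in Lean's constructive core
-- (mirroring the intuitionist position); it is imported via the
-- classical axiom `Classical.em`.
theorem law_of_excluded_middle (p : Prop) : p ∨ ¬p :=
  Classical.em p

-- Principle of identity: every object is identical to itself.
theorem principle_of_identity {α : Type} (a : α) : a = a :=
  rfl
```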
While there is a very wide acceptance of these three laws among logicians, they are not universally accepted. Aristotle, for example, held that there are some cases in which the law of excluded middle is false. This concerns primarily uncertain future events. On his view, it is currently "not ... either true or false that there will be a naval battle tomorrow". Modern intuitionist logic also rejects the law of excluded middle. This rejection is based on the idea that mathematical truth depends on verification through a proof. The law fails for cases where no such proof is possible, which exist in every sufficiently strong formal system, according to Gödel's incompleteness theorems. Dialetheists, on the other hand, reject the law of contradiction by holding that some propositions are both true and false. One motivation of this position is to avoid certain paradoxes in classical logic and set theory, like the liar's paradox and Russell's paradox. One of its problems is to find a formulation that circumvents the principle of explosion, i.e. that anything follows from a contradiction.
Some formulations of the laws of thought include a fourth law: the principle of sufficient reason. It states that everything has a sufficient reason, ground, or cause. It is closely connected to the idea that everything is intelligible or can be explained in reference to its sufficient reason. According to this idea, there should always be a full explanation, at least in principle, to questions like why the sky is blue or why World War II happened. One problem for including this principle among the laws of thought is that it is a metaphysical principle, unlike the other three laws, which pertain primarily to logic.
Counterfactual thinking
Counterfactual thinking involves mental representations of non-actual situations and events, i.e. of what is "contrary to the facts". It is usually conditional: it aims at assessing what would be the case if a certain condition had obtained. In this sense, it tries to answer "What if"-questions. For example, thinking after an accident that one would be dead if one had not used the seatbelt is a form of counterfactual thinking: it assumes, contrary to the facts, that one had not used the seatbelt and tries to assess the result of this state of affairs. In this sense, counterfactual thinking is normally counterfactual only to a small degree, since just a few facts are changed, such as the fact about the seatbelt, while most other facts are kept in place: that one was driving, one's gender, the laws of physics, and so on. When understood in the widest sense, there are forms of counterfactual thinking that do not involve anything contrary to the facts at all. This is the case, for example, when one tries to anticipate what might happen in the future if an uncertain event occurs and this event actually occurs later and brings with it the anticipated consequences. In this wider sense, the term "subjunctive conditional" is sometimes used instead of "counterfactual conditional". But the paradigmatic cases of counterfactual thinking involve alternatives to past events.
Counterfactual thinking plays an important role since we evaluate the world around us not only by what actually happened but also by what could have happened. Humans have a greater tendency to engage in counterfactual thinking after something bad has happened because of an action the agent performed. In this sense, many regrets are associated with counterfactual thinking in which the agent contemplates how a better outcome could have been obtained if only they had acted differently. These cases are known as upward counterfactuals, in contrast to downward counterfactuals, in which the counterfactual scenario is worse than actuality. Upward counterfactual thinking is usually experienced as unpleasant, since it presents the actual circumstances in a bad light. This contrasts with the positive emotions associated with downward counterfactual thinking. But both forms are important since it is possible to learn from them and to adjust one's behavior accordingly to get better results in the future.
Thought experiments
Thought experiments involve thinking about imaginary situations, often with the aim of investigating the possible consequences of a change to the actual sequence of events. It is a controversial issue to what extent thought experiments should be understood as actual experiments. They are experiments in the sense that a certain situation is set up and one tries to learn from this situation by understanding what follows from it. They differ from regular experiments in that imagination is used to set up the situation and counterfactual reasoning is employed to evaluate what follows from it, instead of setting it up physically and observing the consequences through perception. Counterfactual thinking, therefore, plays a central role in thought experiments.
The Chinese room argument is a famous thought experiment proposed by John Searle. It involves a person sitting inside a closed-off room, tasked with responding to messages written in Chinese. This person does not know Chinese but has a giant rule book that specifies exactly how to reply to any possible message, similar to how a computer would react to messages. The core idea of this thought experiment is that neither the person nor the computer understands Chinese. This way, Searle aims to show that computers lack a mind capable of deeper forms of understanding despite acting intelligently.
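The rule-book setup can be sketched as a simple lookup table, as below. This is a toy illustration only: the two entries are invented, whereas Searle's imagined rule book covers every possible message:

```python
# A toy Chinese room: replies are produced by pure symbol lookup.
# The entries and phrasings here are invented for illustration.
RULE_BOOK = {
    "你好吗?": "我很好,谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样?": "天气很好。",    # "How is the weather?" -> "It is fine."
}

def room_reply(message):
    # The occupant matches the shapes of the symbols against the book and
    # copies out the paired reply; no understanding of Chinese is involved.
    return RULE_BOOK.get(message, "对不起,我不明白。")  # "Sorry, I do not understand."

print(room_reply("你好吗?"))  # prints the canned reply without comprehension
```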
Thought experiments are employed for various purposes, for example, for entertainment, education, or as arguments for or against theories. Most discussions focus on their use as arguments. This use is found in fields like philosophy, the natural sciences, and history. It is controversial since there is a lot of disagreement concerning the epistemic status of thought experiments, i.e. how reliable they are as evidence supporting or refuting a theory. Central to the rejection of this usage is the fact that they pretend to be a source of knowledge without the need to leave one's armchair in search of any new empirical data. Defenders of thought experiments usually contend that the intuitions underlying and guiding the thought experiments are, at least in some cases, reliable. But thought experiments can also fail if they are not properly supported by intuitions or if they go beyond what the intuitions support. In the latter sense, sometimes counter thought experiments are proposed that modify the original scenario in slight ways in order to show that initial intuitions cannot survive this change. Various taxonomies of thought experiments have been suggested. They can be distinguished, for example, by whether they are successful or not, by the discipline that uses them, by their role in a theory, or by whether they accept or modify the actual laws of physics.
Critical thinking
Critical thinking is a form of thinking that is reasonable, reflective, and focused on determining what to believe or how to act. It holds itself to various standards, like clarity and rationality. In this sense, it involves not just cognitive processes trying to solve the issue at hand but at the same time meta-cognitive processes ensuring that it lives up to its own standards. This includes assessing both that the reasoning itself is sound and that the evidence it rests on is reliable. This means that logic plays an important role in critical thinking. It concerns not just formal logic, but also informal logic, specifically to avoid various informal fallacies due to vague or ambiguous expressions in natural language. No generally accepted standard definition of "critical thinking" exists but there is significant overlap between the proposed definitions in their characterization of critical thinking as careful and goal-directed. According to some versions, only the thinker's own observations and experiments are accepted as evidence in critical thinking. Some restrict it to the formation of judgments but exclude action as its goal.
A concrete everyday example of critical thinking, due to John Dewey, involves observing foam bubbles moving in a direction that is contrary to one's initial expectations. The critical thinker tries to come up with various possible explanations of this behavior and then slightly modifies the original situation in order to determine which one is the right explanation. But not all forms of cognitively valuable processes involve critical thinking. Arriving at the correct solution to a problem by blindly following the steps of an algorithm does not qualify as critical thinking. The same is true if the solution is presented to the thinker in a sudden flash of insight and accepted straight away.
Critical thinking plays an important role in education: fostering the student's ability to think critically is often seen as an important educational goal. In this sense, it is important to convey not just a set of true beliefs to the student but also the ability to draw one's own conclusions and to question pre-existing beliefs. The abilities and dispositions learned this way may profit not just the individual but also society at large. Critics of the emphasis on critical thinking in education have argued that there is no universal form of correct thinking. Instead, they contend that different subject matters rely on different standards and education should focus on imparting these subject-specific skills instead of trying to teach universal methods of thinking. Other objections are based on the idea that critical thinking and the attitude underlying it involve various unjustified biases, like egocentrism, distanced objectivity, indifference, and an overemphasis of the theoretical in contrast to the practical.
Positive thinking
Positive thinking is an important topic in positive psychology. It involves focusing one's attention on the positive aspects of one's situation and thereby withdrawing one's attention from its negative sides. This is usually seen as a global outlook that applies especially to thinking but includes other mental processes, like feeling, as well. In this sense, it is closely related to optimism. It includes expecting positive things to happen in the future. This positive outlook makes it more likely for people to seek to attain new goals. It also increases the probability of continuing to strive towards pre-existing goals that seem difficult to reach instead of just giving up.
The effects of positive thinking are not yet thoroughly researched, but some studies suggest that there is a correlation between positive thinking and well-being. For example, students and pregnant women with a positive outlook tend to be better at dealing with stressful situations. This is sometimes explained by pointing out that stress is not inherent in stressful situations but depends on the agent's interpretation of the situation. Reduced stress may therefore be found in positive thinkers because they tend to see such situations in a more positive light. But the effects also include the practical domain in that positive thinkers tend to employ healthier coping strategies when faced with difficult situations. This affects, for example, the time needed to fully recover from surgeries and the tendency to resume physical exercise afterward.
But it has been argued that whether positive thinking actually leads to positive outcomes depends on various other factors. Without these factors, it may lead to negative results. For example, the tendency of optimists to keep striving in difficult situations can backfire if the course of events is outside the agent's control. Another danger associated with positive thinking is that it may remain only on the level of unrealistic fantasies and thereby fail to make a positive practical contribution to the agent's life. Pessimism, on the other hand, may have positive effects since it can mitigate disappointments by anticipating failures.
Positive thinking is a recurrent topic in the self-help literature. Here, often the claim is made that one can significantly improve one's life by trying to think positively, even if this means fostering beliefs that are contrary to evidence. Such claims and the effectiveness of the suggested methods are controversial and have been criticized due to their lack of scientific evidence. In the New Thought movement, positive thinking figures in the law of attraction, the pseudoscientific claim that positive thoughts can directly influence the external world by attracting positive outcomes.
See also
Animal cognition
Freethought
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more
Rethinking
References
Further reading
Bayne, Tim (21 September 2013), "Thoughts", New Scientist. 7-page feature article on the topic.
Fields, R. Douglas, "The Brain Learns in Unexpected Ways: Neuroscientists have discovered a set of unfamiliar cellular mechanisms for making fresh memories", Scientific American, vol. 322, no. 3 (March 2020), pp. 74–79. "Myelin, long considered inert insulation on axons, is now seen as making a contribution to learning by controlling the speed at which signals travel along neural wiring." (p. 79.)
Rajvanshi, Anil K. (2010), Nature of Human Thought.
Simon, Herbert, Models of Thought, Vol. I, 1979; Vol. II, 1989, Yale University Press.
External links
Concepts in epistemology
Concepts in metaphilosophy
Concepts in metaphysics
Concepts in the philosophy of mind
Mental content
Neuropsychological assessment
Psychological concepts
Sensory systems
Sources of knowledge
Unsolved problems in neuroscience
Epistemological realism
Epistemological realism is a philosophical position, a subcategory of objectivism, holding that what can be known about an object exists independently of one's mind. It is opposed to epistemological idealism.
Epistemological realism is related directly to the correspondence theory of truth, which claims that the world exists independently of our perceptions of it. Our sensory data then reflect or correspond to this independent world.
See also
Epistemic theories of truth
Epistemic optimism (in the philosophy of science)
Epistemology
Philosophical realism
References
Philosophical realism
Epistemological theories
Naturalized epistemology
Naturalized epistemology (a term coined by W. V. O. Quine) is a collection of philosophic views about the theory of knowledge that emphasize the role of natural scientific methods. This shared emphasis on scientific methods of studying knowledge shifts the focus of epistemology away from many traditional philosophical questions, and towards the empirical processes of knowledge acquisition. There are noteworthy distinctions within naturalized epistemology. Replacement naturalism maintains that we should abandon traditional epistemology and replace it with the methodologies of the natural sciences. The general thesis of cooperative naturalism is that traditional epistemology can benefit in its inquiry by using the knowledge we have gained from cognitive sciences. Substantive naturalism focuses on an asserted equality of facts of knowledge and natural facts.
Objections to naturalized epistemology have targeted features of the general project as well as characteristics of specific versions. Some objectors argue that natural scientific knowledge cannot be circularly grounded by the knowledge obtained through cognitive science, which is itself a natural science. This objection from circularity has been aimed specifically at strict replacement naturalism. There are similar challenges to substantive naturalism, which maintain that the substantive naturalists' thesis that all facts of knowledge are natural facts is not only circular but fails to accommodate certain facts. Several other objectors have found fault in the inability of naturalized methods to adequately address questions about what value forms of potential knowledge have or lack.
Forms of naturalism
Replacement naturalism
W. V. O. Quine's version of naturalized epistemology considers reasons for serious doubt about the fruitfulness of traditional philosophic study of scientific knowledge. These concerns are raised in light of the long-attested incapacity of philosophers to find a satisfactory answer to the problems of radical scepticism, and more particularly to David Hume's criticism of induction, but also because of the contemporaneous attempts and failures to reduce mathematics to pure logic by those in, or philosophically sympathetic to, the Vienna Circle. Quine concludes that studies of scientific knowledge concerned with meaning or truth fail to achieve the Cartesian goal of certainty. The failures in the reduction of mathematics to pure logic imply that scientific knowledge can at best be defined with the aid of less certain set-theoretic notions. Even if set theory's lacking the certainty of pure logic is deemed acceptable, the usefulness of constructing an encoding of scientific knowledge as logic and set theory is undermined by the inability to construct a useful translation from logic and set theory back to scientific knowledge. If no translation between scientific knowledge and the logical structures can be constructed that works both ways, then the properties of the purely logical and set-theoretic constructions do not usefully inform understanding of scientific knowledge.
On Quine's account, attempts to pursue the traditional project of finding the meanings and truths of science philosophically have failed on their own terms and failed to offer any advantage over the more direct methods of psychology. Quine rejects the analytic-synthetic distinction and emphasizes the holistic nature of our beliefs. Since traditional philosophic analysis of knowledge fails, those wishing to study knowledge ought to employ natural scientific methods. Scientific study of knowledge differs from philosophic study by focusing on how humans acquire knowledge rather than speculative analysis of knowledge. According to Quine, this appeal to science to ground the project of studying knowledge, which itself underlies science, should not be dismissed for its circularity since it is the best option available after ruling out traditional philosophic methods for their more serious flaws. This identification and tolerance of circularity is reflected elsewhere in Quine's works.
Cooperative naturalism
Cooperative naturalism is a version of naturalized epistemology which states that while there are evaluative questions to pursue, the empirical results from psychology concerning how individuals actually think and reason are essential and useful for making progress in these evaluative questions. This form of naturalism says that our psychological and biological limitations and abilities are relevant to the study of human knowledge. Empirical work is relevant to epistemology but only if epistemology is itself as broad as the study of human knowledge.
Substantive naturalism
Substantive naturalism is a form of naturalized epistemology that emphasizes how all epistemic facts are natural facts. What counts as a natural fact can be delimited in two main ways. The first is to count as natural all facts that science would verify. The second is to provide a list of paradigmatically natural items, from which it can be deduced what else may be included.
Criticism
Quine articulates the problem of circularity inherent to naturalized epistemology when it is treated as a replacement for traditional epistemology. If the goal of traditional epistemology is to validate or to provide the foundation for the natural sciences, then naturalized epistemology would be tasked with validating the natural sciences by means of those very sciences. That is, an empirical investigation into the criteria which are used to scientifically evaluate evidence must presuppose those very same criteria. However, Quine points out that these concerns with validation are merely a byproduct of traditional epistemology. Instead, the naturalized epistemologist should only be concerned with understanding the link between observation and science, even if that understanding makes use of the very science under investigation.
In order to understand the link between observation and science, Quine's naturalized epistemology must be able to identify and describe the process by which scientific knowledge is acquired. One form of this investigation is reliabilism, which requires that a belief be the product of some reliable method if it is to be considered knowledge. Since naturalized epistemology relies on empirical evidence, all epistemic facts which comprise this reliable method must be reducible to natural facts. That is, all facts related to the process of understanding must be expressible in terms of natural facts. If there are facts which cannot be expressed as natural facts, science would have no means of investigating them. In this vein, Roderick Chisholm argues that there are epistemic principles (or facts) which are necessary to knowledge acquisition, but may not be, themselves, natural facts. If Chisholm is correct, naturalized epistemology cannot account for these epistemic principles and, as a result, would be unable to wholly describe the process by which knowledge is obtained.
Beyond Quine's own concerns and potential discrepancies between epistemic and natural facts, Hilary Putnam argues that replacing traditional epistemology with naturalized epistemology would eliminate the normative. But without the normative, there is no "justification, rational acceptability [nor] warranted assertibility". Ultimately, there is no "true" since any method for arriving at the truth was abandoned with the normative. Notions which explain truth are intelligible only when the normative is presupposed. Moreover, for there to be "thinkers", there "must be some kind of truth"; otherwise, "our thoughts aren't really about anything [,...] there is no sense in which any thought is right or wrong". Without the normative to dictate how one should proceed or which methods should be employed, naturalized epistemology cannot determine the "right" criteria by which empirical evidence should be evaluated. But these are precisely the issues traditional epistemology has been tasked with. If naturalized epistemology cannot provide the means for addressing these issues, it cannot succeed in replacing traditional epistemology.
Jaegwon Kim, another critic of naturalized epistemology, further articulates the difficulty of removing the normative component. He notes that modern epistemology has been dominated by the concepts of justification and reliability. Kim argues that epistemology and knowledge are nearly eliminated in their common sense meanings without normative concepts such as these. These concepts are meant to engender the question "What conditions must a belief meet if we are justified in accepting it as true?". That is to say, what are the necessary criteria by which a particular belief can be declared as "true" (or, should it fail to meet these criteria, can we rightly infer its falsity)? This notion of truth rests solely on the conception and application of the criteria which are set forth in traditional and modern theories of epistemology.
Kim further explains how the notion of "justification" (alongside "belief" and "truth") is the defining characteristic of an epistemological study. To remove this aspect is to alter the very meaning and goal of epistemology, whereby we are no longer discussing the study and acquisition of knowledge. As Kim puts it, "If justification drops out of epistemology, knowledge itself drops out of epistemology." Justification is what makes knowledge valuable and normative; without it, what can rightly be called true or false? We are left with only descriptions of the processes by which we arrive at a belief. Quine is moving epistemology into the realm of psychology, where his main interest is the sensory input–output relationship of an individual. On Kim's view, this account cannot establish an affirmable statement that leads us to truth, since all statements without the normative are purely descriptive and so cannot amount to knowledge. Allowing any statement as scientifically valid while withholding "truth" makes Quine's theory difficult to accept on any epistemic theory that requires truth as the object of knowledge.
In response, Quine insists that critics are wrong to suggest that, given his naturalized epistemology, "the normative element, so characteristic of epistemology, goes by the board. Insofar as theoretical epistemology gets naturalized into a chapter of theoretical science, so normative epistemology gets naturalized into a chapter of engineering: the technology of anticipating sensory stimulation." Thus, "[t]he normative is naturalized, not dropped." There remains debate, however, about whether Quine's view can account for the normativity of epistemology.
As a result of these objections and others like them, most contemporary philosophers agree that replacement naturalized epistemology may be too strong a view (even Quine held more moderate views in later writings). However, these objections have helped shape rather than eliminate naturalized epistemology. One product of these objections is cooperative naturalism, which holds that empirical results are essential and useful to epistemology. That is, while traditional epistemology cannot be eliminated, neither can it succeed in its investigation of knowledge without empirical results from the natural sciences. In any case, Quinean replacement naturalism finds relatively few supporters.
References
Bibliography
Almeder, Robert (1998) Harmless Naturalism: The Limits of Science and the Nature of Philosophy, Peru, Illinois: Open Court.
BonJour, Laurence (1994) "Against Naturalized Epistemology," Midwest Studies in Philosophy, XIX: 283–300.
Chisholm, Roderick (1966) Theory of Knowledge, Englewood Cliffs, NJ: Prentice-Hall.
Chisholm, Roderick (1982) The Foundations of Knowing, Minneapolis: University of Minnesota Press.
Chisholm, Roderick (1989) Theory of Knowledge, 3rd ed., Englewood Cliffs, NJ: Prentice-Hall.
Feldman, Richard (1999), "Methodological Naturalism in Epistemology," in The Blackwell Guide to Epistemology, edited by John Greco and Ernest Sosa, Malden, Ma: Blackwell, pp. 170–186.
Foley, Richard (1994) "Quine and Naturalized Epistemology," Midwest Studies in Philosophy, XIX: 243–260.
Fumerton, Richard (1994) "Skepticism and Naturalistic Epistemology," Midwest Studies in Philosophy, XIX: 321–340.
Fumerton, Richard (1995) Metaepistemology and Skepticism, Lanham, MD: Rowman and Littlefield.
Gibbard, Allan (1990) Wise Choices, Apt Feelings, Cambridge: Harvard University Press.
Goldman, Alvin (1979) "What is Justified Belief?," in G. Pappas, ed., Justification and Knowledge: New Studies in Epistemology, Dordrecht: Reidel, 1–23.
Goldman, Alvin (1992), Liaisons: Philosophy Meets the Cognitive and Social Sciences, Cambridge: MIT Press.
Haack, Susan (1993) Evidence and Inquiry: Towards Reconstruction in Epistemology, Oxford: Blackwell.
Harman, Gilbert (1977) Thought, Princeton: Princeton University Press.
Kim, Jaegwon (1988) "What is Naturalized Epistemology?" Philosophical Perspectives 2, edited by James E. Tomberlin, Atascadero, CA: Ridgeview Publishing Co: 381–406.
Kitcher, Philip (1992) "The Naturalists Return," Philosophical Review, 101: 53–114.
Kornblith, Hilary (1994) Naturalizing Epistemology 2nd Edition, Cambridge: MIT Press.
Kornblith, Hilary (1999) "In Defense of a Naturalized Epistemology" in The Blackwell Guide to Epistemology, edited by John Greco and Ernest Sosa, Malden, Ma: Blackwell, pp. 158–169.
Kornblith, Hilary (1988) "How Internal Can You Get?," Synthese, 74: 313–327.
Lehrer, Keith (1997) Self-Trust: A study of Reason, Knowledge and Autonomy, Oxford: Clarendon Press.
Lycan, William (1988) Judgement and Justification, Cambridge: Cambridge University Press.
Maffie, James (1990) "Recent Work on Naturalizing Epistemology," American Philosophical Quarterly 27: 281–293.
Pollock, John (1986) Contemporary Theories of Knowledge, Totowa, NJ: Rowman and Littlefield.
Quine, W.V.O. (1969) Ontological Relativity and Other Essays, New York: Columbia University Press.
Quine, W.V.O. (1990) "Norms and Aims" in The Pursuit of Truth, Cambridge: Harvard University Press.
Steup, Matthias, An Introduction to Contemporary Epistemology, Prentice-Hall, 1996.
Stich, Stephen and Richard Nisbett (1980), "Justification and the Psychology of Human Reasoning," Philosophy of Science 47: 188–202.
Stich, Stephen (1990) The Fragmentation of Reason, Cambridge, MA: MIT Press.
Strawson, Peter (1952) Introduction to Logical Theory, New York: Wiley.
van Cleve, James (1985) "Epistemic Supervenience and the Circle of Belief" Monist 68: 90–104.
External links
The Routledge Encyclopedia of Philosophy: Epistemology
The Penguin Dictionary of Philosophy
Epistemological schools and traditions
Naturalism (philosophy)
Willard Van Orman Quine
Antiphilosophy
Antiphilosophy is an opposition to traditional philosophy. It may be characterized as anti-theoretical, critical of a priori justifications, and may see common philosophical problems as misconceptions that are to be dissolved. Common strategies may involve forms of relativism, skepticism, nihilism, or pluralism.
The term has been used as a denigrating word but is also used with more neutral or positive connotations. Boris Groys's 2012 book Introduction to Antiphilosophy discusses thinkers such as Kierkegaard, Shestov, Nietzsche, and Benjamin, characterizing their work as privileging life and action over thought.
Examples of antiphilosophical positions
Ethics
The antiphilosopher could argue that, with regard to ethics, there is only practical, ordinary reasoning, and that it is therefore wrong to superimpose, a priori and for philosophical reasons, overarching ideas of what is good. For example, it is wrong to assume across the board that only happiness matters, as in utilitarianism. This is not to claim, however, that a utilitarian-like argument may not be valid in some particular case.
Continuum hypothesis
Consider the continuum hypothesis, which states that there is no set with size strictly between the size of the natural numbers and the size of the real numbers. One idea is that the set-theoretic universe ought to be rich, with many sets, which leads to the continuum hypothesis being false. This richness argument, the antiphilosopher might argue, is purely philosophical and groundless, and should therefore be dismissed; the continuum hypothesis should instead be settled by mathematical arguments. In particular, it could be the case that the question is not mathematically meaningful or useful, and that the hypothesis is neither true nor false. It is then wrong to stipulate, a priori and for philosophical reasons, that the continuum hypothesis is true or false.
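For reference, the hypothesis can be stated symbolically; this standard formulation is added only for clarity, and the second, equivalent form assumes the axiom of choice:

```latex
% Continuum hypothesis: no set has cardinality strictly between that of
% the natural numbers and that of the real numbers.
\[
  \neg\,\exists S \,\bigl(\, \aleph_0 < |S| < 2^{\aleph_0} \,\bigr),
  \qquad \text{equivalently} \qquad 2^{\aleph_0} = \aleph_1 .
\]
```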
Scientism
Scientism, the doctrinal position that science is the only way to know reality, continually challenges the utility and validity of philosophical methods, thereby adopting an antiphilosophical position. Authors such as Sam Harris believe that science can, or eventually will, answer questions about morality and ethics, rendering philosophy useless. In line with Comte's law of three stages, such scientists conclude that philosophy is a discipline of plausible answers that fails by not verifying its postulates against physical reality; they must then conclude that science, whose categorical imperative is to answer only through responses that are accessible and universal to rational sense experience, is a stage of knowledge in line with material existence, if not the only one.
Antiphilosophies
Wittgenstein's metaphilosophy
The views of Ludwig Wittgenstein, specifically his metaphilosophy, could be said to be antiphilosophy. In The New York Times, Paul Horwich points to Wittgenstein's rejection of philosophy as traditionally and currently practiced and his "insistence that it can't give us the kind of knowledge generally regarded as its raison d'être".
Horwich goes on to conclude that, according to Wittgenstein, philosophy "must avoid theory-construction and instead be merely 'therapeutic,' confined to exposing the irrational assumptions on which theory-oriented investigations are based and the irrational conclusions to which they lead".
Moreover, these antiphilosophical views are central to Wittgenstein, Horwich argues.
Pyrrhonism
Pyrrhonism has been considered an antiphilosophy.
See also
Quietism – also takes a therapeutic approach to philosophy
Non-philosophy
Irrationalism
Notes
References
Further reading
Paul Horwich, Wittgenstein's Metaphilosophy, Oxford University Press, 2012.
Ludwig Wittgenstein, Philosophical Investigations, 1953.
Philosophical schools and traditions
Concepts in metaphilosophy
Skepticism
Anachronism
An anachronism (from the Greek ἀνά, 'against', and χρόνος, 'time') is a chronological inconsistency in some arrangement, especially a juxtaposition of people, events, objects, language terms and customs from different time periods. The most common type of anachronism is an object misplaced in time, but it may be a verbal expression, a technology, a philosophical idea, a musical style, a material, a plant or animal, a custom, or anything else associated with a particular period that is placed outside its proper temporal domain.
An anachronism may be either intentional or unintentional. Intentional anachronisms may be introduced into a literary or artistic work to help a contemporary audience engage more readily with a historical period. Anachronism can also be used intentionally for purposes of rhetoric, propaganda, comedy, or shock. Unintentional anachronisms may occur when a writer, artist, or performer is unaware of differences in technology, terminology and language, customs and attitudes, or even fashions between different historical periods and eras.
Types
The metachronism–prochronism contrast is nearly synonymous with the parachronism–anachronism contrast: in each pair, the first term postdates and the second predates.
Parachronism
A parachronism (from the Greek παρά, "on the side", and χρόνος, "time") postdates. It is anything that appears in a time period in which it is not normally found (though not sufficiently out of place as to be impossible).
This may be an object, idiomatic expression, technology, philosophical idea, musical style, material, custom, or anything else so closely bound to a particular time period as to seem strange when encountered in a later era. They may be objects or ideas that were once common but are now considered rare or inappropriate. They can take the form of obsolete technology or outdated fashion or idioms.
Prochronism
A prochronism (from the Greek πρό, "before", and χρόνος, "time") predates. It is an impossible anachronism which occurs when an object or idea had not yet been invented at the time the situation takes place, and therefore could not possibly have existed then. A prochronism may be an object not yet developed, a verbal expression that had not yet been coined, a philosophy not yet formulated, a breed of animal not yet evolved or bred, or use of a technology that had not yet been created.
Metachronism
A metachronism (from the Greek μετά, "after", and χρόνος, "time") postdates. It is the use of older cultural artifacts in modern settings which may seem inappropriate. For example, it could be considered metachronistic for a modern-day person to be depicted wearing a top hat or writing with a quill.
Politically motivated anachronism
Works of art and literature promoting a political, nationalist or revolutionary cause may use anachronism to depict an institution or custom as being more ancient than it actually is, or otherwise intentionally blur the distinctions between past and present. For example, the 19th-century Romanian painter Constantin Lecca depicts the peace agreement between Ioan Bogdan Voievod and Radu Voievod—two leaders in Romania's 16th-century history—with the flags of Moldavia (blue-red) and of Wallachia (yellow-blue) seen in the background. These flags date only from the 1830s: anachronism promotes legitimacy for the unification of Moldavia and Wallachia into the Kingdom of Romania at the time the painting was made. The Russian artist Vasily Vereshchagin, in his painting Suppression of the Indian Revolt by the English, depicts the aftermath of the Indian Rebellion of 1857, when mutineers were executed by being blown from guns. In order to make the argument that the method of execution would again be utilized by the British if another rebellion broke out in India, Vereshchagin depicted the British soldiers conducting the executions in late 19th-century uniforms.
Art and literature
Anachronism is used especially in works of imagination that rest on a historical basis. Anachronisms may be introduced in many ways: for example, in the disregard of the different modes of life and thought that characterize different periods, or in ignorance of the progress of the arts and sciences and other facts of history. They vary from glaring inconsistencies to scarcely perceptible misrepresentation. Anachronisms may be the unintentional result of ignorance, or may be a deliberate aesthetic choice.
Sir Walter Scott justified the use of anachronism in historical literature: "It is necessary, for exciting interest of any kind, that the subject assumed should be, as it were, translated into the manners as well as the language of the age we live in." However, as fashions, conventions and technologies move on, such attempts to use anachronisms to engage an audience may have quite the reverse effect, as the details in question are increasingly recognized as belonging neither to the historical era being represented, nor to the present, but to the intervening period in which the artwork was created. "Nothing becomes obsolete like a period vision of an older period", writes Anthony Grafton; "Hearing a mother in a historical movie of the 1940s call out 'Ludwig! Ludwig van Beethoven! Come in and practice your piano now!' we are jerked from our suspension of disbelief by what was intended as a means of reinforcing it, and plunged directly into the American bourgeois world of the filmmaker."
It is only since the beginning of the 19th century that anachronistic deviations from historical reality have jarred on a general audience, a shift in historical sensibility that C. S. Lewis also remarked upon.
Anachronisms abound in the works of Raphael and Shakespeare, as well as in those of less celebrated painters and playwrights of earlier times. Carol Meyers says that anachronisms in ancient texts can be used to better understand the stories by asking what the anachronism represents. Repeated anachronisms and historical errors can become an accepted part of popular culture, such as the belief that Roman legionaries wore leather armor.
Comical anachronism
Comedy fiction set in the past may use anachronism for humorous effect. Comedic anachronism can be used to make serious points about both historical and modern society, such as drawing parallels to political or social conventions.
Future anachronism
Even with careful research, science fiction writers risk anachronism as their works age because they cannot predict all political, social, and technological change.
For example, many books, television shows, radio productions and films nominally set in the mid-21st century or later refer to the Soviet Union, to Saint Petersburg in Russia as Leningrad, to the continuing struggle between the Eastern and Western Blocs and to divided Germany and divided Berlin. Star Trek has suffered from future anachronisms; instead of "retconning" these errors, the 2009 film retained them for consistency with older franchises.
Buildings or natural features, such as the World Trade Center in New York City, can become out of place once they disappear, with some works having been edited to remove the World Trade Center to avoid this situation.
Futuristic technology may appear alongside technology which would be obsolete by the time in which the story is set. For example, in the stories of Robert A. Heinlein, interplanetary space travel coexists with calculation using slide rules.
Language anachronism
Language anachronisms in novels and films are quite common, both intentional and unintentional. Intentional anachronisms inform the audience more readily about a film set in the past. In this regard, language and pronunciation change so fast that most modern people (even many scholars) would find it difficult, or even impossible, to understand a film with dialogue in 15th-century English; thus, audiences willingly accept characters speaking an updated language, and modern slang and figures of speech are often used in these films.
Unconscious anachronism
Unintentional anachronisms may occur even in what are intended as wholly objective and accurate records or representations of historic artifacts and artworks, because the perspectives of historical recorders are conditioned by the assumptions and practices of their own times, in a form of cultural bias. One example is the attribution of historically inaccurate beards to various medieval tomb effigies and figures in stained glass in records made by English antiquaries of the late 16th and early 17th centuries. Working in an age in which beards were in fashion and widespread, the antiquaries seem to have unconsciously projected the fashion back into an era in which they were rare.
In academia
In historical writing, the most common type of anachronism is the adoption of the political, social or cultural concerns and assumptions of one era to interpret or evaluate the events and actions of another. The anachronistic application of present-day perspectives to comment on the historical past is sometimes described as presentism. Empiricist historians, working in the traditions established by Leopold von Ranke in the 19th century, regard this as a great error, and a trap to be avoided. Arthur Marwick has argued that "a grasp of the fact that past societies are very different from our own, and ... very difficult to get to know" is an essential and fundamental skill of the professional historian; and that "anachronism is still one of the most obvious faults when the unqualified (those expert in other disciplines, perhaps) attempt to do history".
Detection of forgery
The ability to identify anachronisms may be employed as a critical and forensic tool to demonstrate the fraudulence of a document or artifact purporting to be from an earlier time. Anthony Grafton discusses, for example, the work of the 3rd-century philosopher Porphyry, of Isaac Casaubon (1559–1614), and of Richard Reitzenstein (1861–1931), all of whom succeeded in exposing literary forgeries and plagiarisms, such as those included in the "Hermetic Corpus", through – among other techniques – the recognition of anachronisms. The detection of anachronisms is an important element within the scholarly discipline of diplomatics, the critical analysis of the forms and language of documents, developed by the Maurist scholar Jean Mabillon (1632–1707) and his successors René-Prosper Tassin (1697–1777) and Charles-François Toustain (1700–1754). Writing at the beginning of the 19th century, the philosopher and reformer Jeremy Bentham likewise drew attention to the value of anachronisms as evidence of forgery.
Examples are:
The exposure by Lorenzo Valla in 1440 of the so-called Donation of Constantine, a decree purportedly issued by the Emperor Constantine the Great in either 315 or 317 AD, as a later forgery, depended to a considerable degree on the identification of anachronisms, such as references to the city of Constantinople (a name not in fact bestowed until 330 AD).
A large number of apparent anachronisms in the Book of Mormon have served to convince critics that the book was written in the 19th century, and not, as its adherents claim, in pre-Columbian America.
The use of 19th- and 20th-century anti-semitic terminology demonstrates that the purported "Franklin Prophecy" (attributed to Benjamin Franklin, who died in 1790) is a forgery.
The "William Lynch speech", an address, supposedly delivered in 1712, on the control of slaves in Virginia, is now considered to be a 20th-century forgery, partly on account of its use of anachronistic terms such as "program" and "refueling".
See also
Anachronisms in the Book of Mormon
Anatopism
Evolutionary anachronism
Invented traditions
List of stories set in a future now past
Retrofuturism
Skeuomorph
Society for Creative Anachronism
Steampunk
Tiffany Problem
Whig history
References
Bibliography
External links
Philosophy of religion
Philosophy of religion is "the philosophical examination of the central themes and concepts involved in religious traditions". Philosophical discussions on such topics date from ancient times, and appear in the earliest known texts concerning philosophy. The field involves many other branches of philosophy, including metaphysics, epistemology, logic, ethics, aesthetics, philosophy of language, and philosophy of science.
The philosophy of religion differs from religious philosophy in that it seeks to discuss questions regarding the nature of religion as a whole, rather than examining the problems brought forth by a particular belief-system. The philosophy of religion differs from theology in that it aims to examine religious concepts from an objective philosophical perspective rather than from the perspective of a specific religious tradition. The philosophy of religion also differs from religious studies in that it seeks to evaluate the truth of religious worldviews. It can be carried out dispassionately by those who identify as believers or non-believers.
Overview
Philosopher William L. Rowe characterized the philosophy of religion as "the critical examination of basic religious beliefs and concepts." Philosophy of religion covers alternative beliefs about God, gods, demons, spirits, or all of these, the varieties of religious experience, the interplay between science and religion, the nature and scope of good and evil, and religious treatments of birth, history, and death. The field also includes the ethical implications of religious commitments, the relation between faith, reason, experience and tradition, concepts of the miraculous, the sacred revelation, mysticism, power, and salvation.
The term philosophy of religion did not come into general use in the West until the nineteenth century, and most pre-modern and early modern philosophical works included a mixture of religious themes and non-religious philosophical questions. In Asia, examples include texts such as the Hindu Upanishads, the works of Daoism and Confucianism and Buddhist texts. Greek philosophies like Pythagoreanism and Stoicism included religious elements and theories about deities, and Medieval philosophy was strongly influenced by the three monotheistic Abrahamic religions. In the Western world, early modern philosophers such as Thomas Hobbes, John Locke, and George Berkeley discussed religious topics alongside secular philosophical issues.
The philosophy of religion has been distinguished from theology by pointing out that, for theology, "its critical reflections are based on religious convictions". Also, "theology is responsible to an authority that initiates its thinking, speaking, and witnessing ... [while] philosophy bases its arguments on the ground of timeless evidence."
Some aspects of philosophy of religion have classically been regarded as a part of metaphysics. In Aristotle's Metaphysics, the necessarily prior cause of eternal motion was an unmoved mover, who, like the object of desire, or of thought, inspires motion without itself being moved. Today, however, philosophers have adopted the term "philosophy of religion" for the subject, and typically it is regarded as a separate field of specialization, although it is also still treated by some, particularly Catholic philosophers, as a part of metaphysics.
Basic themes and problems
Ultimate reality
Different religions have different ideas about ultimate reality, its source or ground (or lack thereof), and about what counts as "Maximal Greatness". Paul Tillich's concept of 'Ultimate Concern' and Rudolf Otto's 'Idea of the Holy' are concepts which point to concerns about the ultimate or highest truth which most religious philosophies deal with in some way. One of the main differences among religions is whether the ultimate reality is a personal god or an impersonal reality.
In Western religions, various forms of theism are the most common conceptions, while in Eastern religions, there are theistic and also various non-theistic conceptions of the Ultimate. Theistic vs non-theistic is a common way of sorting the different types of religions.
There are also several philosophical positions with regard to the existence of God that one might take, including various forms of theism (such as monotheism and polytheism), agnosticism, and different forms of atheism.
Monotheism
Keith Yandell outlines roughly three kinds of historical monotheisms: Greek, Semitic and Hindu. Greek monotheism holds that the world has always existed and does not believe in creationism or divine providence, while Semitic monotheism believes the world was created by a God at a particular point in time and that this God acts in the world. Hindu monotheism teaches that the world is beginningless, but that God's act of creation sustains it.
The attempt to provide proofs or arguments for the existence of God is one aspect of what is known as natural theology or the natural theistic project. This strand of natural theology attempts to justify belief in God by independent grounds. Perhaps most of the philosophy of religion is predicated on natural theology's assumption that the existence of God can be justified or warranted on rational grounds. There has been considerable philosophical and theological debate about the kinds of proofs, justifications and arguments that are appropriate for this discourse.
Non-theistic conceptions
Eastern religions have included both theistic and other alternative positions about the ultimate nature of reality. One such view is Jainism, which holds a dualistic view that all that exists is matter and a multiplicity of souls (jiva), without depending on a supreme deity for their existence. There are also different Buddhist views, such as the Theravada Abhidharma view, which holds that the only ultimately existing things are transitory phenomenal events (dharmas) and their interdependent relations. Madhyamaka Buddhists such as Nagarjuna hold that ultimate reality is emptiness (shunyata) while the Yogacara holds that it is vijñapti (mental phenomena). In Indian philosophical discourses, monotheism was defended by Hindu philosophers (particularly the Nyaya school), while Buddhist thinkers argued against their conception of a creator god (Sanskrit: Ishvara).
The Hindu view of Advaita Vedanta, as defended by Adi Shankara, is a total non-dualism. Although Advaitins do believe in the usual Hindu gods, their view of ultimate reality is a radically monistic oneness (Brahman without qualities) and anything which appears (like persons and gods) is illusory (maya).
The various philosophical positions of Taoism can also be viewed as non-theistic about the ultimate reality (Tao). Taoist philosophers have conceived of different ways of describing the ultimate nature of things. For example, while the Taoist Xuanxue thinker Wang Bi argued that everything is "rooted" in Wu (non-being, nothingness), Guo Xiang rejected Wu as the ultimate source of things, instead arguing that the ultimate nature of the Tao is "spontaneous self-production" (zi sheng) and "spontaneous self-transformation" (zi hua).
Traditionally, Jains and Buddhists did not rule out the existence of limited deities or divine beings; they only rejected the idea of a single all-powerful creator God or First Cause posited by monotheists.
Knowledge and belief
All religious traditions make knowledge claims which they argue are central to religious practice and to the ultimate solution to the main problem of human life. These include epistemic, metaphysical and ethical claims.
Evidentialism is the position that may be characterized as "a belief is rationally justified only if there is sufficient evidence for it". Many theists and non-theists are evidentialists; for example, Aquinas and Bertrand Russell agree that belief in God is rational only if there is sufficient evidence, but disagree on whether such evidence exists. These arguments often stipulate that subjective religious experiences are not reasonable evidence and thus religious truths must be argued for on the basis of non-religious evidence. One of the strongest statements of evidentialism is that of William Kingdon Clifford, who wrote: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence". His view of evidentialism is usually read in tandem with William James's article "The Will to Believe" (1896), which argues against Clifford's principle. More recent supporters of evidentialism include Antony Flew ("The Presumption of Atheism", 1972) and Michael Scriven (Primary Philosophy, 1966). Both rely on the Ockhamist view that in the absence of evidence for X, belief in X is not justified. Many modern Thomists are also evidentialists in that they hold they can demonstrate there is evidence for the belief in God. Another move is to argue in a Bayesian way for the probability of a religious truth like God, rather than for total conclusive evidence.
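As a schematic illustration of this Bayesian move (a generic statement of Bayes' theorem, not any particular philosopher's formulation), the credibility of a hypothesis h, such as theism, in light of a piece of evidence e can be written as:

```latex
P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e \mid h)\,P(h) + P(e \mid \neg h)\,P(\neg h)}
```

On this approach, individual items of evidence raise or lower the probability of the hypothesis incrementally, so the question becomes whether the cumulative evidence makes the hypothesis more probable than not, rather than whether any single argument is conclusive.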
Some philosophers, however, argue that religious belief is warranted without evidence and hence are sometimes called non-evidentialists. They include fideists and reformed epistemologists. Alvin Plantinga and other reformed epistemologists are examples of philosophers who argue that religious beliefs are "properly basic beliefs" and that it is not irrational to hold them even though they are not supported by any evidence. The rationale here is that some beliefs we hold must be foundational and not be based on further rational beliefs. If this is not so, then we risk an infinite regress. This is qualified by the proviso that they can be defended against objections (this differentiates this view from fideism). A properly basic belief is a belief that one can reasonably hold without evidence, such as a memory, a basic sensation or a perception. Plantinga's argument is that belief in God is of this type because within every human mind there is a natural awareness of divinity.
William James in his essay "The Will to Believe" argues for a pragmatic conception of religious belief. For James, religious belief is justified if one is presented with a question which is rationally undecidable and if one is presented with genuine and live options which are relevant for the individual. For James, religious belief is defensible because of the pragmatic value it can bring to one's life, even if there is no rational evidence for it.
Some work in recent epistemology of religion goes beyond debates over evidentialism, fideism, and reformed epistemology to consider contemporary issues deriving from new ideas about knowledge-how and practical skill; how practical factors can affect whether one could know whether theism is true; from formal epistemology's use of probability theory; or from social epistemology (particularly the epistemology of testimony, or the epistemology of disagreement).
For example, an important topic in the epistemology of religion is that of religious disagreement, and the issue of what it means for intelligent individuals of comparable epistemic standing to disagree about religious issues. Religious disagreement has been seen as possibly posing first-order or higher-order problems for religious belief. A first-order problem concerns whether the evidence from disagreement bears directly on the truth of any religious proposition, while a higher-order problem concerns whether one has rationally assessed the first-order evidence. One example of a first-order problem is the argument from nonbelief. Higher-order discussions focus on whether religious disagreement with epistemic peers (those whose epistemic ability is equal to our own) requires us to adopt a skeptical or agnostic stance, or to reduce or revise our religious beliefs.
Faith and reason
While religions resort to rational arguments to attempt to establish their views, they also claim that religious belief is at least partially to be accepted through faith, confidence or trust in one's religious belief. There are different conceptions or models of faith, including:
The affective model: faith as a feeling of trust, a psychological state.
The special knowledge model: faith as revealing specific religious truths (defended by Reformed epistemology).
The belief model: faith as the theoretical conviction that a certain religious claim is true.
The trust model: faith as a fiducial commitment, such as trusting in God.
The practical doxastic venture model: faith as a commitment to believe in the trustworthiness of a religious truth or of God. On this view, trusting in God presupposes belief, so faith must include elements of both belief and trust.
The non-doxastic venture model: faith as practical commitment without actual belief (defended by figures such as Robert Audi, J. L. Schellenberg and Don Cupitt). On this view, one need not believe in literal religious claims about reality to have religious faith.
The hope model: faith as hoping.
There are also different positions on how faith relates to reason. One example is the belief that faith and reason are compatible and work together, which is the view of Thomas Aquinas and the orthodox view of Catholic natural theology. According to this view, reason establishes certain religious truths and faith (guided by reason) gives us access to truths about the divine which, according to Aquinas, "exceed all the ability of human reason."
Another position is fideism, the view that faith is "in some sense independent of, if not outright adversarial toward, reason." Modern philosophers such as Kierkegaard, William James, and Wittgenstein have been associated with this label. Kierkegaard in particular argued for the necessity of the religious individual to take a non-rational leap of faith to bridge the gulf between man and God. Wittgensteinian fideism, meanwhile, sees religious language games as incommensurate with scientific and metaphysical language games: they are autonomous and may only be judged by their own standards. The obvious criticism of this is that many religions clearly do put forth metaphysical claims.
Several contemporary New Atheist writers hostile to religion hold the related view that religious claims and scientific claims are opposed to each other, and that therefore religions are false.
The Protestant theologian Karl Barth (1886–1968) argued that religious believers have no need to prove their beliefs through reason and thus rejected the project of natural theology. According to Barth, human reason is corrupt and God is utterly different from his creatures, thus we can only rely on God's own revelation for religious knowledge. Barth's view has been termed Neo-orthodoxy. Similarly, D.Z. Phillips argues that God is not intelligible through reason or evidence because God is not an empirical object or a 'being among beings'.
As Brian Davies points out, the problem with positions like Barth's is that they do not help us in deciding between inconsistent and competing revelations of the different religions.
Science
Whether, and in what way, religious beliefs are compatible with science is another important topic in the philosophy of religion as well as in theology. The field draws on the historical study of their interactions and conflicts, such as the debates in the United States over the teaching of evolution and creationism. There are different models of interaction that have been discussed in the philosophical literature, including:
The conflict thesis, which sees science and religion in constant conflict, as during the reception of the theory of evolution and the current debate over creationism.
The independence model, on which the two have separate domains, or non-overlapping magisteria.
The dialogue model, on which there is some overlap between the fields: they remain separate but share some concepts and presuppositions.
The integration or unification model, which includes projects like natural theology and process theology.
The field also draws on the scientific study of religion, particularly by psychologists, sociologists and cognitive scientists. Various theories about religion have arisen from these disciplines. One example is the various evolutionary theories of religion, which see the phenomenon as either adaptive or a by-product. Another can be seen in the various theories put forth by the cognitive science of religion. Some have argued that evolutionary or cognitive theories undermine religious belief.
Religious experience
Closely related to knowledge and belief is how to interpret religious experiences and their potential for providing knowledge. Religious experiences have been recorded throughout all cultures and are widely diverse. These personal experiences tend to be highly important to individuals who undergo them. Discussions about religious experiences can be said to be informed in part by the question: "what sort of information about what there is might religious experience provide, and how could one tell?"
One could interpret these experiences either veridically, neutrally or as delusions. Both monotheistic and non-monotheistic religious thinkers and mystics have appealed to religious experiences as evidence for their claims about ultimate reality. Philosophers such as Richard Swinburne and William Alston have compared religious experiences to everyday perceptions, that is, both are noetic and have a perceptual object, and thus religious experiences could logically be veridical unless we have a good reason to disbelieve them. Other philosophers such as Eleonore Stump and Matthew Benton argue for an interpersonal epistemology on which one can experience and know God in a relational or personal sense.
According to Brian Davies, common objections against the veridical force of religious experiences include the fact that experience is frequently deceptive and that people who claim an experience of a god may be "mistakenly identifying an object of their experience", or be insane or hallucinating. However, he argues that we cannot infer, from the fact that our experiences are sometimes mistaken, hallucinatory or distorted, that all religious experiences are mistaken. Indeed, a drunken or hallucinating person could still perceive things correctly, so these objections cannot be said to necessarily disprove all religious experiences.
According to C. B. Martin, "there are no tests agreed upon to establish genuine experience of God and distinguish it decisively from the ungenuine", and therefore all that religious experiences can establish is the reality of these psychological states.
Naturalistic explanations for religious experiences are often seen as undermining their epistemic value. Explanations such as the fear of death, suggestion, infantile regression, sexual frustration, neurological anomalies ("it's all in the head") as well as the socio-political power that having such experiences might grant to a mystic have been put forward. More recently, some argued that religious experiences are caused by cognitive misattributions akin to hallucinations, although this was denied by others. A contrary position was taken by Bertrand Russell who compared the veridical value of religious experiences to the hallucinations of a drunk person: "From a scientific point of view, we can make no distinction between the man who eats little and sees heaven and the man who drinks much and sees snakes. Each is in an abnormal physical condition, and therefore has abnormal perceptions." However, as William L. Rowe notes:
The hidden assumption in Russell's argument is that bodily and mental states that interfere with reliable perceptions of the physical world also interfere with reliable perceptions of a spiritual world beyond the physical, if there is such a spiritual world to be perceived. Perhaps this assumption is reasonable, but it certainly is not obviously true.
In other words, as argued by C. D. Broad, one might need to be slightly "cracked", or at least appear to be mentally and physically abnormal, in order to perceive the supranormal spiritual world.
William James meanwhile takes a middle course between accepting mystical experiences as veridical and dismissing them as delusional. He argues that for the individual who experiences them, they are authoritative and they break down the authority of the rational mind. Not only that, but according to James, the mystic is justified in this. When it comes to the non-mystic, the outside observer, however, there is no reason to regard them as either veridical or delusive.
The study of religious experiences from the perspective of the field of phenomenology has also been a feature of the philosophy of religion. Key thinkers in this field include William Brede Kristensen and Gerard van der Leeuw.
Types
Just as there are different religions, there are different forms of religious experience. One could have "subject/content" experiences (such as a euphoric meditative state) and "subject/consciousness/object" experiences (such as the perception of having seen a god, i.e. theophany). Experiences of theophany are described in ancient Mediterranean religious works and myths and include the story of Semele, who died as a result of seeing Zeus, and the Biblical story of the Burning Bush. Indian texts like the Bhagavad Gita also contain theophanic events. The diversity (sometimes to the point of contradiction) of religious experiences has also been used as an argument against their veridical nature, and as evidence that they are a purely subjective psychological phenomenon.
In Western thought, religious experience (mainly a theistic one) has been described by the likes of Friedrich Schleiermacher, Rudolf Otto and William James. According to Schleiermacher, the distinguishing feature of a religious experience is that "one is overcome by the feeling of absolute dependence." Otto, meanwhile, argued that while this was an important element, the most basic feature of religious experiences is that they are numinous. He described this as "non-rational, non-sensory experience or feeling whose primary and immediate object is outside the self", as well as having the qualities of being mysterious, terrifying and fascinating.
Rowe meanwhile defined a religious experience as "an experience in which one senses the immediate presence of the divine." According to Rowe, religious experiences can be divided in the following manner:
Religious experiences in which one senses the presence of the divine as being distinct from oneself.
Mystical experiences, in which one senses one's own union with a divine presence. These take two forms:
The extrovertive way, which looks outward through the senses into the world around us and finds the divine reality there.
The introvertive way, which turns inward and finds the divine reality in the deepest part of the self.
Non-monotheistic religions meanwhile also report different experiences from theophany, such as non-dual experiences of oneness and deeply focused meditative states (termed samadhi in Indian religion) as well as experiences of enlightenment in Buddhism, liberation in Hinduism, and kevala in Jainism.
Another typology, offered by Chad Meister, differentiates between three major experiences:
Regenerative experiences, in which an individual feels reborn, transformed or changed radically, usually resulting in religious conversion.
Charismatic experiences, in which special gifts, abilities, or blessings are manifested (such as healing, visions, etc.)
Mystical experiences, which can be described using William James's four marks as being: ineffable, noetic, transient and passive.
Perennialism vs. constructivism
Another debate on this topic is whether all religious cultures share common core mystical experiences (perennialism) or whether these experiences are in some way socially and culturally constructed (constructivism or contextualism). According to Walter Stace, all cultures share mystical experiences of oneness with the external world, as well as introverted "Pure Conscious Events", which are empty of all concepts, thoughts and qualities except pure consciousness. Similarly, Ninian Smart argued that monistic experiences were universal. Perennialists tend to distinguish between the experience itself and its post-experience interpretation in order to make sense of the different views in world religions.
Some constructivists like Steven T. Katz meanwhile have argued against the common core thesis, and for either the view that every mystical experience contains at least some concepts (soft constructivism) or that they are strongly shaped and determined by one's religious ideas and culture (hard constructivism). In this view, the conceptual scheme of any mystic strongly shapes their experiences and because mystics from different religions have very different schemas, there cannot be any universal mystical experiences.
Religion and ethics
All religions argue for certain values and ideas of the moral good. Non-monotheistic Indian traditions like Buddhism and Advaita Vedanta find the highest good in nirvana or moksha, which brings release from suffering and the rounds of rebirth, with morality as a means to this end; for monotheistic traditions, by contrast, God is the source or ground of all morality and heaven is the highest human good. The world religions also offer different conceptions of the source of evil and suffering in the world, that is, what is wrong with human life and how to solve and free ourselves from these dilemmas. For example, for Christianity, sin is the source of human problems, while for Buddhism, it is craving and ignorance.
A general question which philosophy of religion asks is what is the relationship, if any, between morality and religion. Brian Davies outlines four possible theses:
Morality somehow requires religion. One example of this view is Kant's idea that morality should lead us to believe in a moral law, and thus to believe in an upholder of that law, that is, God.
Morality is somehow included in religion, "The basic idea here is that being moral is part of what being religious means."
Morality is pointless without religion, for one would have no reason to be moral without it.
Morality and religion are opposed to each other. In this view, belief in a God would mean one would do whatever that God commands, even if it goes against morality. The view that religion and morality are often opposed has been espoused by atheists like Lucretius and Bertrand Russell as well as by theologians like Kierkegaard who argued for a 'teleological suspension of the ethical'.
Monotheistic religions who seek to explain morality and its relationship to God must deal with what is termed the Euthyphro dilemma, famously stated in the Platonic dialogue "Euthyphro" as: "Is the pious (τὸ ὅσιον, i.e. what is morally good) loved by the gods because it is pious, or is it pious because it is loved by the gods?" Those who hold that what is moral is so because it is what God commands are defending a version of the Divine command theory.
Another important topic which is widely discussed in Abrahamic monotheistic religious philosophy is the problem of human free will and God's omniscience. God's omniscience could presumably include perfect knowledge of the future, leading to theological determinism and thus possibly conflicting with human free will. There are different positions on this, including libertarianism (free will is true) and predestination.
Miracles
Belief in miracles and supernatural events or occurrences is common among world religions. A miracle is an event which cannot be explained by rational or scientific means. The Resurrection of Jesus and the Miracles of Muhammad are examples of miracles claimed by religions.
Skepticism towards the supernatural can be found in early philosophical traditions like the Indian Carvaka school and Greco-Roman philosophers like Lucretius. David Hume, who defined a miracle as "a violation of the laws of nature", famously argued against miracles in Of Miracles, Section X of An Enquiry concerning Human Understanding (1748). For Hume, the probability that a miracle has not occurred is always greater than the probability that it has because "as a firm and unalterable experience has established these laws [of nature], the proof against a miracle, from the very nature of the fact, is as entire as any argument from experience can possibly be imagined" (Enquiry. X. p. 173). Hume does not argue that a miracle is impossible, only that it is unreasonable to believe in any testimony of a miracle's occurrence, for evidence for the regularity of natural laws is much stronger than human testimony (which is often in error).
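Hume's comparison of probabilities is often given a Bayesian reconstruction in the later literature (the following is one standard schematic rendering, not Hume's own notation). Writing M for the occurrence of a miracle and T for the testimony that it occurred, the testimony makes belief in the miracle reasonable only if the falsehood of the testimony would be more improbable than the miracle itself:

```latex
P(M \mid T) > P(\neg M \mid T)
\quad\Longleftrightarrow\quad
P(T \mid M)\,P(M) > P(T \mid \neg M)\,P(\neg M)
```

Since Hume takes the prior probability P(M) to be vanishingly small, the reliability of the testimony, the ratio of P(T | M) to P(T | ¬M), would have to be correspondingly enormous before belief became reasonable.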
According to Rowe, there are two weaknesses with Hume's argument. First, there could be other forms of indirect evidence for the occurrence of a miracle that do not include testimony of someone's direct experience of it. Secondly, Rowe argues that Hume overestimates "the weight that should be given to past experience in support of some principle thought to be a law of nature." For it is a common occurrence that currently accepted ideas of natural laws are revised in light of observed exceptions, yet Hume's argument would lead one to conclude that such exceptions never occur. Rowe adds that "It remains true, however, that a reasonable person will require quite strong evidence before believing that a law of nature has been violated. It is easy to believe the person who claimed to see water run downhill, but quite difficult to believe that someone saw water run uphill."
Another definition of a miracle is possible however, which is termed the Epistemic theory of miracles and was argued for by Spinoza and St. Augustine. This view rejects that a miracle is a transgression of natural laws, but is simply a transgression of our current understanding of natural law. In the Tractatus Theologico-Politicus, Spinoza writes: "miracles are only intelligible as in relation to human opinions, and merely mean events of which the natural cause cannot be explained by a reference to any ordinary occurrence, either by us, or at any rate, by the writer and narrator of the miracle" (Tractatus p. 84). Similarly, R.F. Holland has defined miracle in a naturalistic way in a widely cited paper. For Holland, a miracle need only be an extraordinary and beneficial coincidence interpreted religiously.
Brian Davies notes that even if we can establish that a miracle has occurred, it is hard to see what this is supposed to prove. For it is possible that they arise due to agencies which are unusual and powerful, but not divine.
Afterlife
World religions put forth various theories which affirm life after death and different kinds of postmortem existence. This is often tied to belief in an immortal individual soul or self (Sanskrit: atman) separate from the body which survives death, as defended by Plato, Descartes, monotheistic religions such as Christianity, and many Indian philosophers. This view is also a position on the mind–body problem, namely dualism. The view must then show not only that dualism is true and that souls exist, but also that souls survive death. As Kant famously argued, the mere existence of a soul does not prove its immortality, for one could conceive that a soul, even if it is totally simple, could still fade away or lose its intensity. H. H. Price is one modern philosopher who has speculated at length about what it would be like to be a disembodied soul after death.
One major issue with soul beliefs is that since personhood is closely tied to one's physical body, it seems difficult to make sense of a human being existing apart from their body. A further issue is with continuity of personal identity, that is, it is not easy to account for the claim that the person that exists after bodily death is the same person that existed before.
Bertrand Russell put forth the general scientific argument against the afterlife as follows:
Persons are part of the everyday world with which science is concerned, and the conditions which determine their existence are discoverable...we know that the brain is not immortal, and that the organized energy of a living body becomes, as it were, demobilized at death and therefore not available for collective action. All the evidence goes to show that what we regard as our mental life is bound up with brain structure and organized bodily energy. Therefore it is rational to suppose that mental life ceases when bodily life ceases. The argument is only one of probability, but it is as strong as those upon which most scientific conclusions are based.
Contra Russell, J. M. E. McTaggart argues that there is no scientific proof that the mind is dependent on the body in this particular way. As Rowe notes, the fact that the mind depends on the functions of the body while one is alive is not necessarily proof that the mind will cease functioning after death, just as a person trapped in a room, who depends on a window to see the outside world, might conceivably still see once the room is gone.
Buddhism is one religion which, while affirming postmortem existence (through rebirth), denies the existence of individual souls and instead affirms a deflationary view of personal identity, termed not-self (anatta).
While physicalism has generally been seen as hostile to notions of an afterlife, this need not be the case. Abrahamic religions like Christianity have traditionally held that life after death will include the element of bodily resurrection. One objection to this view is that it seems difficult to account for personal continuity; at best, a resurrected body is a replica of the person, not the same person. One response is the constitution view of persons, which says persons are constituted by their bodies and by a "first-person perspective", the capacity to think of oneself as oneself. In this view, what is resurrected is that first-person perspective, or both the person's body and that perspective. An objection to this view is that it seems difficult to differentiate one person's first-person perspective from another person's without reference to temporal and spatial relations. Peter van Inwagen, meanwhile, offers the following theory:
Perhaps at the moment of each man's death, God removes his corpse and replaces it with a simulacrum which is what is burned or rots. Or perhaps God is not quite so wholesale as this: perhaps He removes for "safekeeping" only the "core person"—the brain and central nervous system—or even some special part of it. These are details. (van Inwagen 1992: 245–46)
This view shows how some positions on the nature of the afterlife are closely tied to and sometimes completely depend upon theistic positions. This close connection between the two views was made by Kant, who argued that one can infer an afterlife from belief in a just God who rewards persons for their adherence to moral law.
Other discussions on the philosophy of the afterlife deal with phenomena such as near-death experiences, reincarnation research, and other parapsychological events, and hinge on whether naturalistic explanations for these phenomena are enough to explain them. Such discussions are associated with philosophers like William James, Henry Sidgwick, C. D. Broad, and H. H. Price.
Diversity and pluralism
The issue of how one is to understand religious diversity and the plurality of religious views and beliefs has been a central concern of the philosophy of religion.
There are various philosophical positions regarding how one is to make sense of religious diversity, including exclusivism, inclusivism, pluralism, relativism, atheism or antireligion and agnosticism.
Religious exclusivism is the claim that only one religion is true and that others are wrong. To say that a religion is exclusivistic can also mean that salvation or human freedom is only attainable by the followers of one's religion. This view tends to be the orthodox view of most monotheistic religions, such as Christianity and Islam, though liberal and modernist trends within them might differ. William L. Rowe outlines two problems with this view. The first is that, if it is true, a large portion of humanity is excluded from salvation, and it is hard to see how a loving God would desire this. The second is that once we become acquainted with the saintly figures and virtuous people in other religions, it can be difficult to see how we could say they are excluded from salvation just because they are not part of our religion.
A different view is inclusivism, the idea that "one's own tradition alone has the whole truth but that this truth is nevertheless partially reflected in other traditions." An inclusivist might maintain that their own religion is privileged while also holding that other religious adherents possess fundamental truths and even that they will be saved or liberated. The Jain view of Anekantavada ('many-sidedness') has been interpreted by some as a tolerant view which is an inclusive acceptance of the partial truth value of non-Jain religious ideas. As Paul Dundas notes, the Jains ultimately held the thesis that Jainism is the final truth, while other religions only contain partial truths. Other scholars such as Kristin Beise Kiblinger have also argued that some of the Buddhist traditions include inclusivist ideas and attitudes.
In the modern Western study of religion, the work of Ninian Smart has also been instrumental in representing a more diverse understanding of religion and religious pluralism. Smart's view is that there are genuine differences between religions.
Pluralism is the view that all religions are equally valid responses to the divine and that they are all valid paths to personal transformation. This approach is taken by John Hick, who has developed a pluralistic view which synthesizes components of various religious traditions. Hick promotes an idea of a noumenal sacred reality which different religions provide us access to. Hick defines his view as "the great world faiths embody different perceptions and conceptions of, and correspondingly different responses to, the Real or the Ultimate." For Hick, all religions are true because they all allow us to encounter the divine reality, even if they have different deities and conceptions of it. Rowe notes that a similar idea is proposed by Paul Tillich's concept of Being-itself.
The view of perennialism is that there is a single or core truth or experience which is shared by all religions even while they use different terms and language to express it. This view is espoused by the likes of Aldous Huxley, the thinkers of the Traditionalist School as well as Neo-Vedanta.
Yet another way of responding to the conflicting truth claims of religions is relativism. Joseph Runzo, one of its most prominent defenders, has argued for henofideism, which states that the truth of a religious worldview is relative to each community of adherents. Thus, while religions have incompatible views, each one is individually valid as it emerges from individual experiences of a plurality of phenomenal divine realities. According to Runzo, this view does not reduce the incompatible ideas and experiences of different religions to mere interpretations of the Real and thus preserves their individual dignity.
Another response to the diversity and plurality of religious beliefs and deities throughout human history is one of skepticism towards all of them (or even antireligion), seeing them as illusions or human creations which serve human psychological needs. Sigmund Freud was a famous proponent of this view, in various publications such as The Future of an Illusion (1927) and Civilization and Its Discontents (1930). According to Freud, "Religion is an illusion and it derives its strength from the fact that it falls in with our instinctual desires."
While one can be skeptical towards the claims of religion, one need not be hostile towards religion. Don Cupitt is one example of someone who, while disbelieving in the metaphysical and cosmological claims of his religion, holds that one can practice it with a "non-realist" perspective which sees religious claims as human inventions and myths to live by.
Religious language
The question of religious language, and in what sense it can be said to be meaningful, has been a central issue of the philosophy of religion since the work of the Vienna Circle, a group of philosophers who, influenced by Wittgenstein, put forth the theory of logical positivism. Their view was that religious language, such as any talk of God, cannot be verified empirically and thus is ultimately meaningless. This position has also been termed theological noncognitivism. A similar view can be seen in David Hume's An Enquiry Concerning Human Understanding, where he famously wrote that any work which did not include either (1) abstract reasoning on quantity or number or (2) reasoning concerning matter of fact and existence was "nothing but sophistry and illusion".
In a similar vein, Antony Flew questioned the validity of religious statements because they do not seem to be falsifiable, that is, religious claims do not seem to allow any counter-evidence to count against them and thus seem to be lacking in content. While such arguments were popular in the 1950s and 60s, the verification principle and falsifiability as a criterion for meaning are no longer as widely held. The main problem with verificationism is that it seems to be self-refuting, for it is a claim which does not appear to be supported by its own criterion.
As noted by Brian Davies, when talking about God and religious truths, religious traditions tend to resort to metaphor, negation and analogy. The via negativa has been defended by thinkers such as Maimonides who denied that positive statements about God were helpful and wrote: "you will come nearer to the knowledge and comprehension of God by the negative attributes." Similar approaches based on negation can be seen in the Hindu doctrine of Neti neti and the Buddhist philosophy of Madhyamaka.
Wittgenstein's theory of language games also shows how one can use analogical religious language to describe God or religious truths, even if the words one is using do not in this case carry their everyday sense: when we say God is wise, we do not mean he is wise in the same sense that a person is wise, yet it can still make sense to talk in this manner. However, as Patrick Sherry notes, the fact that this sort of language may make sense does not mean that one is warranted in ascribing these terms to God, for there must be some connection between the relevant criteria we use in ascribing these terms to conventional objects or subjects and to God. As Chad Meister notes, though, for Wittgenstein a religion's language game need not reflect some literal picture of reality (as a picture theory of meaning would hold) but is useful simply because of its ability to "reflect the practices and forms of life of the various religious adherents." Following Wittgenstein, philosophers of religion like Norman Malcolm, B. R. Tilghman, and D. Z. Phillips have argued that instead of seeing religious language as referring to some objective reality, we should instead see it as referring to forms of life. This approach is generally termed non-realist.
Against this view, realists respond that non-realism subverts religious belief and the intelligibility of religious practice. It is hard to see for example, how one can pray to a God without believing that they really exist. Realists also argue that non-realism provides no normative way to choose between competing religions.
Analytic philosophy of religion
In Analytic Philosophy of Religion, James Franklin Harris noted that
As with the study of ethics, early analytic philosophy tended to avoid the study of philosophy of religion, largely dismissing (in line with the logical positivists' view) the subject as part of metaphysics and therefore meaningless. The collapse of logical positivism renewed interest in philosophy of religion, prompting philosophers like William Alston, John Mackie, Alvin Plantinga, Robert Merrihew Adams, Richard Swinburne, and Antony Flew not only to introduce new problems, but to re-open classical topics such as the nature of miracles, theistic arguments, the problem of evil, the rationality of belief in God, concepts of the nature of God, and many more.
Plantinga, Mackie and Flew debated the logical validity of the free will defense as a way to solve the problem of evil. Alston, grappling with the consequences of analytic philosophy of language, worked on the nature of religious language. Adams worked on the relationship of faith and morality. Analytic epistemology and metaphysics have formed the basis for a number of philosophically sophisticated theistic arguments, such as those of reformed epistemologists like Plantinga.
Analytic philosophy of religion has also been preoccupied with Ludwig Wittgenstein, as well as his interpretation of Søren Kierkegaard's philosophy of religion. Using first-hand remarks (which would later be published in Philosophical Investigations, Culture and Value, and other works), philosophers such as Peter Winch and Norman Malcolm developed what has come to be known as contemplative philosophy, a Wittgensteinian school of thought rooted in the "Swansea tradition" and which includes Wittgensteinians such as Rush Rhees, Peter Winch and D. Z. Phillips, among others. The name "contemplative philosophy" was first coined by D. Z. Phillips in Philosophy's Cool Place, which rests on an interpretation of a passage from Wittgenstein's "Culture and Value". This interpretation was first labeled, "Wittgensteinian Fideism", by Kai Nielsen but those who consider themselves Wittgensteinians in the Swansea tradition have relentlessly and repeatedly rejected this construal as a caricature of Wittgenstein's considered position; this is especially true of D. Z. Phillips. Responding to this interpretation, Kai Nielsen and D.Z. Phillips became two of the most prominent philosophers on Wittgenstein's philosophy of religion.
See also
List of philosophers of religion
Comparative theology
Conceptions of God
Definition of religion
Issues in Science and Religion
Lectures on the Philosophy of Religion by Hegel
Nontheistic religion
Religious naturalism
Religious studies
Philosophical theology
Notes
References
Further reading
Al-Nawawi, Forty Hadiths and Commentary, Arabic Virtual Translation Center, 2010. (Philosophy of religion from an Islamic point of view.)
The London Philosophy Study Guide offers many suggestions on what to read, depending on the student's familiarity with the subject: Philosophy of Religion
William L. Rowe, William J. Wainwright, Philosophy of Religion: Selected Readings, Third Ed. (Florida: Harcourt Brace & Company, 1998)
Religious Studies is an international journal for the philosophy of religion. It is available online and in print and has a fully searchable online archive dating back to Issue 1 in 1965. It currently publishes four issues per year.
Shults, F. LeRon, 2019, "Computer Modeling in Philosophy of Religion", Open Philosophy, (special issue on computer modeling in philosophy) 2(1): 108–125. doi:10.1515/opphil-2019-0011.
Shokhin, Vladimir K., "The Pioneering Appearances of Philosophy of Religion in Europe: François Para du Phanjas on the Nature of Religion", Open Theology 2015, 1: 97-106.
Yandell, Keith E., Philosophy of Religion: A Contemporary Introduction, Routledge, 2002.
External links
An introduction to the Philosophy of Religion by Paul Newall
Philosophy of Religion – useful annotated index of religious philosophy topics
Philosophy of Religion .Info – introductory articles on philosophical arguments for and against theism
The Australasian Philosophy of Religion Association
Introductory Articles Into the Philosophy of Religion from University of Notre Dame
Hume on Miracles, commentary by Rev Dr Wally Shaw
Religion
Religion and government
Religion and politics
Religious studies
Inquiry
An inquiry (also spelled as enquiry in British English) is any process that has the aim of augmenting knowledge, resolving doubt, or solving a problem. A theory of inquiry is an account of the various types of inquiry and a treatment of the ways that each type of inquiry achieves its aim.
Inquiry theories
Deduction
When three terms are so related to one another that the last is wholly contained in the middle and the middle is wholly contained in or excluded from the first, the extremes must admit of perfect syllogism. By 'middle term' I mean that which both is contained in another and contains another in itself, and which is the middle by its position also; and by 'extremes' (a) that which is contained in another, and (b) that in which another is contained. For if A is predicated of all B, and B of all C, A must necessarily be predicated of all C. ... I call this kind of figure the First. (Aristotle, Prior Analytics, 1.4)
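In modern notation (one standard first-order rendering, not Aristotle's own symbolism), this first-figure syllogism, traditionally called Barbara, can be written:

```latex
\forall x\,(Bx \to Ax),\qquad \forall x\,(Cx \to Bx)\ \vdash\ \forall x\,(Cx \to Ax)
```

Here B is the middle term: it is contained in A and itself contains C, exactly as Aristotle's description of the first figure requires.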
Induction
Inductive reasoning consists in establishing a relation between one extreme term and the middle term by means of the other extreme; for example, if B is the middle term of A and C, in proving by means of C that A applies to B; for this is how we effect inductions. (Aristotle, Prior Analytics, 2.23)
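On a common reading of this passage (an interpretive gloss, not Aristotle's own notation), induction infers the rule "every B is A" from the instances C, and it is deductively valid only on the added assumption that the instances exhaust the class, that is, that B converts with C:

```latex
\forall x\,(Cx \to Ax),\quad \forall x\,(Cx \to Bx),\quad \forall x\,(Bx \to Cx)\ \vdash\ \forall x\,(Bx \to Ax)
```

Dropping the convertibility premise leaves the familiar, fallible form of induction: a generalization from observed cases that carries no deductive guarantee.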
Abduction
The locus classicus for the study of abductive reasoning is found in Aristotle's Prior Analytics, Book 2, Chapt. 25. It begins this way:
We have Reduction (απαγωγη, abduction):
When it is obvious that the first term applies to the middle, but that the middle applies to the last term is not obvious, yet is nevertheless more probable or not less probable than the conclusion;
Or if there are not many intermediate terms between the last and the middle;
For in all such cases the effect is to bring us nearer to knowledge.
By way of explanation, Aristotle supplies two very instructive examples, one for each of the two varieties of abductive inference steps that he has just described in the abstract:
For example, let A stand for "that which can be taught", B for "knowledge", and C for "morality". Then that knowledge can be taught is evident; but whether virtue is knowledge is not clear. Then if BC is not less probable or is more probable than AC, we have reduction; for we are nearer to knowledge for having introduced an additional term, whereas before we had no knowledge that AC is true.
Or again we have reduction if there are not many intermediate terms between B and C; for in this case too we are brought nearer to knowledge. For example, suppose that D is "to square", E "rectilinear figure", and F "circle". Assuming that between E and F there is only one intermediate term — that the circle becomes equal to a rectilinear figure by means of lunules — we should approximate to knowledge. (Aristotle, "Prior Analytics", 2.25, with minor alterations)
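Schematically, in the arrow notation used later in this article (where X → Y abbreviates "Y applies to all X"), Aristotle's first variety of reduction can be put as:

```latex
B \to A \ \text{(evident)},\qquad C \to B \ \text{(plausible)}\ \leadsto\ C \to A \ \text{(nearer to knowledge)}
```

With A as "teachable", B as "knowledge" and C as "virtue": knowledge is evidently teachable, virtue is plausibly knowledge, and so the conjecture that virtue is teachable is brought nearer to knowledge.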
Aristotle's latter variety of abductive reasoning, though it will take some explaining in the sequel, is well worth our contemplation, since it hints already at streams of inquiry that course well beyond the syllogistic source from which they spring, and into regions that Peirce will explore more broadly and deeply.
Inquiry in the pragmatic paradigm
In the pragmatic philosophies of Charles Sanders Peirce, William James, John Dewey, and others, inquiry is closely associated with the normative science of logic. In its inception, the pragmatic model or theory of inquiry was extracted by Peirce from its raw materials in classical logic, with a little bit of help from Kant, and refined in parallel with the early development of symbolic logic by Boole, De Morgan, and Peirce himself to address problems about the nature and conduct of scientific reasoning. Borrowing a brace of concepts from Aristotle, Peirce examined three fundamental modes of reasoning that play a role in inquiry, commonly known as abductive, deductive, and inductive inference.
In rough terms, abduction is what we use to generate a likely hypothesis or an initial diagnosis in response to a phenomenon of interest or a problem of concern, while deduction is used to clarify, to derive, and to explicate the relevant consequences of the selected hypothesis, and induction is used to test the sum of the predictions against the sum of the data. It needs to be observed that the classical and pragmatic treatments of the types of reasoning, dividing the generic territory of inference as they do into three special parts, arrive at a different characterization of the environs of reason than do those accounts that count only two.
These three processes typically operate in a cyclic fashion, systematically operating to reduce the uncertainties and the difficulties that initiated the inquiry in question, and in this way, to the extent that inquiry is successful, leading to an increase in knowledge or in skills.
In the pragmatic way of thinking everything has a purpose, and the purpose of each thing is the first thing we should try to note about it. The purpose of inquiry is to reduce doubt and lead to a state of belief, which a person in that state will usually call knowledge or certainty. As they contribute to the end of inquiry, we should appreciate that the three kinds of inference describe a cycle that can be understood only as a whole, and none of the three makes complete sense in isolation from the others. For instance, the purpose of abduction is to generate guesses of a kind that deduction can explicate and that induction can evaluate. This places a mild but meaningful constraint on the production of hypotheses, since it is not just any wild guess at explanation that submits itself to reason and bows out when defeated in a match with reality. In a similar fashion, each of the other types of inference realizes its purpose only in accord with its proper role in the whole cycle of inquiry. No matter how much it may be necessary to study these processes in abstraction from each other, the integrity of inquiry places strong limitations on the effective modularity of its principal components.
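A minimal sketch of this cycle in Python may make the division of labor concrete (an illustrative toy of this presentation, not Peirce's own formalism; the rule base and all function names are invented for the example):

```python
# Toy model of the abduction-deduction-induction cycle of inquiry.
# Background knowledge as (antecedent, consequent) implication pairs.
rules = [
    ("bulb_burned_out", "lamp_is_dark"),
    ("power_is_out", "lamp_is_dark"),
    ("power_is_out", "clock_is_blank"),
]

def abduce(fact, rules):
    """Abduction: guess antecedents that would explain the surprising fact."""
    return [a for (a, c) in rules if c == fact]

def deduce(hypothesis, rules):
    """Deduction: explicate the testable consequences of a hypothesis."""
    return [c for (a, c) in rules if a == hypothesis]

def induce(predictions, observations):
    """Induction: check the derived predictions against observation."""
    return all(p in observations for p in predictions)

# The clock is running, so "clock_is_blank" is not among the observations.
observations = {"lamp_is_dark"}
for hypothesis in abduce("lamp_is_dark", rules):
    if induce(deduce(hypothesis, rules), observations):
        print(hypothesis, "survives testing")   # only 'bulb_burned_out' survives
```

Abduction proposes both candidate hypotheses, deduction derives what each predicts, and induction eliminates the one whose predictions fail, which illustrates why none of the three phases makes sense in isolation.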
In Logic: The Theory of Inquiry, John Dewey defined inquiry as "the controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole". Dewey and Peirce's conception of inquiry extended beyond a system of thinking and incorporated the social nature of inquiry. These ideas are summarized in the notion of a community of inquiry.
Art and science of inquiry
For our present purposes, the first feature to note in distinguishing the three principal modes of reasoning from each other is whether each of them is exact or approximate in character. In this light, deduction is the only one of the three types of reasoning that can be made exact, in essence, always deriving true conclusions from true premises, while abduction and induction are unavoidably approximate in their modes of operation, involving elements of fallible judgment in practice and inescapable error in their application.
The reason for this is that deduction, in the ideal limit, can be rendered a purely internal process of the reasoning agent, while the other two modes of reasoning essentially demand a constant interaction with the outside world, a source of phenomena and problems that will no doubt continue to exceed the capacities of any finite resource, human or machine, to master. Situated in this larger reality, approximations can be judged appropriate only in relation to their context of use and can be judged fitting only with regard to a purpose in view.
A parallel distinction that is often made in this connection is to call deduction a demonstrative form of inference, while abduction and induction are classed as non-demonstrative forms of reasoning. Strictly speaking, the latter two modes of reasoning are not properly called inferences at all. They are more like controlled associations of words or ideas that just happen to be successful often enough to be preserved as useful heuristic strategies in the repertoire of the agent. But non-demonstrative ways of thinking are inherently subject to error, and must be constantly checked out and corrected as needed in practice.
In classical terminology, forms of judgment that require attention to the context and the purpose of the judgment are said to involve an element of "art", in a sense that is judged to distinguish them from "science", and in their renderings as expressive judgments to implicate arbiters in styles of rhetoric, as contrasted with logic.
In a figurative sense, this means that only deductive logic can be reduced to an exact theoretical science, while the practice of any empirical science will always remain to some degree an art.
Limits to inquiry
C. S. Peirce argued that inquiry reaches a logical limit, or "approximate[s] indefinitely toward that limit", which he regarded as "truth".
Jewish rabbinical writers used the text of Deuteronomy 4:32 in the Hebrew Bible, "For ask now concerning the days that are past, which were before you, since the day that God created man on the earth, and ask from one end of heaven to the other, whether any great thing like this has happened, or anything like it has been heard", to impose ethical limits on inquiry, forbidding inquiry into the work of creation in the presence of two people, reading the words "for ask now of the days past" to indicate that one may inquire, but not two. The Rabbis reasoned that the words "since the day that God created man upon the earth" in this verse taught that one must not inquire concerning the time before creation, but that the words "the days past that were before you" meant that one may inquire about the six days of creation. They further reasoned that the words "from the one end of heaven to the other" indicated that one must not inquire about what is beyond the universe, what is above and what is below, what is before and what is after.
Zeroth order inquiry
Many aspects of inquiry can be recognized and usefully studied in very basic logical settings, even simpler than the level of syllogism, for example, in the realm of reasoning that is variously known as Boolean algebra, propositional calculus, sentential calculus, or zeroth-order logic. By way of approaching the learning curve on the gentlest availing slope, we may well begin at the level of zeroth-order inquiry, in effect, taking the syllogistic approach to inquiry only so far as the propositional or sentential aspects of the associated reasoning processes are concerned. One of the bonuses of doing this in the context of Peirce's logical work is that it provides us with doubly instructive exercises in the use of his logical graphs, taken at the level of his so-called "alpha graphs".
In the case of propositional calculus or sentential logic, deduction comes down to applications of the transitive law for conditional implications and the approximate forms of inference hang on the properties that derive from these. In describing the various types of inference the following employs a few old "terms of art" from classical logic that are still of use in treating these kinds of simple problems in reasoning.
Deduction takes a Case, the minor premise X → Y,
and combines it with a Rule, the major premise Y → Z,
to arrive at a Fact, the demonstrative conclusion X → Z.

Induction takes a Case of the form X → Y
and matches it with a Fact of the form X → Z
to infer a Rule of the form Y → Z.

Abduction takes a Fact of the form X → Z
and matches it with a Rule of the form Y → Z
to infer a Case of the form X → Y.
For ease of reference, Figure 1 and the Legend beneath it summarize the classical terminology for the three types of inference and the relationships among them.
In its original usage a statement of Fact has to do with a deed done or a record made, that is, a type of event that is openly observable and not riddled with speculation as to its very occurrence. In contrast, a statement of Case may refer to a hidden or a hypothetical cause, that is, a type of event that is not immediately observable to all concerned. Obviously, the distinction is a rough one and the question of which mode applies can depend on the points of view that different observers adopt over time. Finally, a statement of a Rule is called that because it states a regularity or a regulation that governs a whole class of situations, and not because of its syntactic form. So far in this discussion, all three types of constraint are expressed in the form of conditional propositions, but this is not a fixed requirement. In practice, these modes of statement are distinguished by the roles that they play within an argument, not by their style of expression. When the time comes to branch out from the syllogistic framework, we will find that propositional constraints can be discovered and represented in arbitrary syntactic forms.
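Because the three patterns above operate purely on the syntactic form of conditionals, they are straightforward to mechanize. The following sketch is ours, not Peirce's: the tuple representation and the function names deduce, induce, and abduce are assumptions made for illustration. An implication X → Y is represented as the pair (X, Y), and only the deductive conclusion is demonstrative; the other two functions merely propose conclusions for later testing.

# A minimal sketch (an illustration, not Peirce's own notation) of the
# three types of inference over propositional implications.
# An implication X -> Y is represented as the pair (X, Y).

def deduce(case, rule):
    """Deduction: from Case X -> Y and Rule Y -> Z, conclude Fact X -> Z."""
    (x, y1), (y2, z) = case, rule
    if y1 != y2:
        raise ValueError("middle terms do not match")
    return (x, z)

def induce(case, fact):
    """Induction: from Case X -> Y and Fact X -> Z, hazard Rule Y -> Z."""
    (x1, y), (x2, z) = case, fact
    if x1 != x2:
        raise ValueError("minor terms do not match")
    return (y, z)

def abduce(fact, rule):
    """Abduction: from Fact X -> Z and Rule Y -> Z, guess Case X -> Y."""
    (x, z1), (y, z2) = fact, rule
    if z1 != z2:
        raise ValueError("major terms do not match")
    return (x, y)

# Only deduce returns a demonstrative conclusion; the inductive and
# abductive "conclusions" are plausible guesses subject to later testing.
assert deduce(("X", "Y"), ("Y", "Z")) == ("X", "Z")
assert induce(("X", "Y"), ("X", "Z")) == ("Y", "Z")
assert abduce(("X", "Z"), ("Y", "Z")) == ("X", "Y")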
Example of inquiry
Examples of inquiry that illustrate the full cycle of its abductive, deductive, and inductive phases, and that are both concrete and simple enough to be suitable for a first (or zeroth) exposition, are somewhat rare in Peirce's writings, so let us draw one from the work of fellow pragmatician John Dewey, analyzing it according to the model of zeroth-order inquiry developed above.
A man is walking on a warm day. The sky was clear the last time he observed it; but presently he notes, while occupied primarily with other things, that the air is cooler. It occurs to him that it is probably going to rain; looking up, he sees a dark cloud between him and the sun, and he then quickens his steps. What, if anything, in such a situation can be called thought? Neither the act of walking nor the noting of the cold is a thought. Walking is one direction of activity; looking and noting are other modes of activity. The likelihood that it will rain is, however, something suggested. The pedestrian feels the cold; he thinks of clouds and a coming shower. (John Dewey, How We Think, 1910, pp. 6-7).
Once over quickly
Let's first give Dewey's example of inquiry in everyday life the quick once over, hitting just the high points of its analysis into Peirce's three kinds of reasoning.
Abductive phase
In Dewey's "Rainy Day" or "Sign of Rain" story, we find our peripatetic hero presented with a surprising Fact:
Fact: C → A, In the Current situation the Air is cool.
Responding to an intellectual reflex of puzzlement about the situation, his resource of common knowledge about the world is impelled to seize on an approximate Rule:
Rule: B → A, Just Before it rains, the Air is cool.
This Rule can be recognized as having a potential relevance to the situation because it matches the surprising Fact, C → A, in its consequential feature A.
All of this suggests that the present Case may be one in which it is just about to rain:
Case: C → B, The Current situation is just Before it rains.
The whole mental performance, however automatic and semi-conscious it may be, that leads up from a problematic Fact and a previously settled knowledge base of Rules to the plausible suggestion of a Case description, is what we are calling an abductive inference.
Deductive phase
The next phase of inquiry uses deductive inference to expand the implied consequences of the abductive hypothesis, with the aim of testing its truth. For this purpose, the inquirer needs to think of other things that would follow from the consequence of his precipitate explanation. Thus, he now reflects on the Case just assumed:
Case: C → B, The Current situation is just Before it rains.
He looks up to scan the sky, perhaps in a random search for further information, but since the sky is a logical place to look for details of an imminent rainstorm, symbolized in our story by the letter B, we may safely suppose that our reasoner has already detached the consequence of the abduced Case, C → B, and has begun to expand on its further implications. So let us imagine that our up-looker has a more deliberate purpose in mind, and that his search for additional data is driven by the new-found, determinate Rule:
Rule: B → D, Just Before it rains, Dark clouds appear.
Contemplating the assumed Case in combination with this new Rule leads him by an immediate deduction to predict an additional Fact:
Fact: C → D, In the Current situation Dark clouds appear.
The reconstructed picture of reasoning assembled in this second phase of inquiry is true to the pattern of deductive inference.
Inductive phase
Whatever the case, our subject observes a Dark cloud, just as he would expect on the basis of the new hypothesis. The explanation of imminent rain removes the discrepancy between observations and expectations and thereby reduces the shock of surprise that made this process of inquiry necessary.
Looking more closely
Seeding hypotheses
Figure 4 gives a graphical illustration of Dewey's example of inquiry, isolating for the purposes of the present analysis the first two steps in the more extended proceedings that go to make up the whole inquiry.
o-----------------------------------------------------------o
| |
| A D |
| o o |
| \ * * / |
| \ * * / |
| \ * * / |
| \ * * / |
| \ * * / |
| \ R u l e R u l e / |
| \ * * / |
| \ * * / |
| \ * * / |
| \ * B * / |
| F a c t o F a c t |
| \ * / |
| \ * / |
| \ * / |
| \ * / |
| \ C a s e / |
| \ * / |
| \ * / |
| \ * / |
| \ * / |
| \ * / |
| \*/ |
| o |
| C |
| |
| A = the Air is cool |
| B = just Before it rains |
| C = the Current situation |
| D = a Dark cloud appears |
| |
| A is a major term |
| B is a middle term |
| C is a minor term |
| D is a major term, associated with A |
| |
o-----------------------------------------------------------o
Figure 4. Dewey's 'Rainy Day' Inquiry
In this analysis of the first steps of Inquiry, we have a complex or a mixed form of inference that can be seen as taking place in two steps:
The first step is an Abduction that abstracts a Case from the consideration of a Fact and a Rule.
Fact: C → A, In the Current situation the Air is cool.
Rule: B → A, Just Before it rains, the Air is cool.
Case: C → B, The Current situation is just Before it rains.
The final step is a Deduction that admits this Case to another Rule and so arrives at a novel Fact.
Case: C → B, The Current situation is just Before it rains.
Rule: B → D, Just Before it rains, a Dark cloud will appear.
Fact: C → D, In the Current situation, a Dark cloud will appear.
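The same two steps can be traced in a few lines of code. The following is a hypothetical sketch using the letters of the story; the variable names are ours.

# Tracing the 'Rainy Day' analysis with implications as
# (antecedent, consequent) pairs. Letters as in Figure 4:
# A = the Air is cool, B = just Before it rains,
# C = the Current situation, D = a Dark cloud appears.

fact = ("C", "A")    # surprising Fact: C -> A
rule1 = ("B", "A")   # Rule from the knowledge base: B -> A

# Abduction: the Rule matches the Fact in its consequent A,
# suggesting the Case C -> B.
assert fact[1] == rule1[1]
case = (fact[0], rule1[0])         # ("C", "B")

rule2 = ("B", "D")   # further Rule: B -> D

# Deduction: composing the Case with the new Rule predicts the Fact C -> D.
assert case[1] == rule2[0]
predicted = (case[0], rule2[1])    # ("C", "D")
print("Case:", case, "Predicted Fact:", predicted)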
This is nowhere near a complete analysis of the Rainy Day inquiry, even insofar as it might be carried out within the constraints of the syllogistic framework, and it covers only the first two steps of the relevant inquiry process, but maybe it will do for a start.
One other thing needs to be noticed here, the formal duality between this expansion phase of inquiry and the argument from analogy. This can be seen most clearly in the propositional lattice diagrams shown in Figures 3 and 4, where analogy exhibits a rough "A" shape and the first two steps of inquiry exhibit a rough "V" shape, respectively. Since we find ourselves repeatedly referring to this expansion phase of inquiry as a unit, let's give it a name that suggests its duality with analogy—"catalogy" will do for the moment. This usage is apt enough if one thinks of a catalogue entry for an item as a text that lists its salient features. Notice that analogy has to do with the examples of a given quality, while catalogy has to do with the qualities of a given example. Peirce noted similar forms of duality in many of his early writings, leading to the consummate treatment in his 1867 paper "On a New List of Categories" (CP 1.545-559, W 2, 49-59).
Weeding hypotheses
In order to comprehend the bearing of inductive reasoning on the closing phases of inquiry there are a couple of observations that we need to make:
First, we need to recognize that smaller inquiries are typically woven into larger inquiries, whether we view the whole pattern of inquiry as carried on by a single agent or by a complex community.
Further, we need to consider the different ways in which the particular instances of inquiry can be related to ongoing inquiries at larger scales. Three modes of inductive interaction between the micro-inquiries and the macro-inquiries that are salient here can be described under the headings of the "Learning", the "Transfer", and the "Testing" of rules.
Analogy of experience
Throughout inquiry the reasoner makes use of rules that have to be transported across intervals of experience, from the masses of experience where they are learned to the moments of experience where they are applied. Inductive reasoning is involved in the learning and the transfer of these rules, both in accumulating a knowledge base and in carrying it through the times between acquisition and application.
Learning. The principal way that induction contributes to an ongoing inquiry is through the learning of rules, that is, by creating each of the rules that goes into the knowledge base, or ever gets used along the way.
Transfer. The continuing way that induction contributes to an ongoing inquiry is through the exploit of analogy, a two-step combination of induction and deduction that serves to transfer rules from one context to another.
Testing. Finally, every inquiry that makes use of a knowledge base constitutes a "field test" of its accumulated contents. If the knowledge base fails to serve any live inquiry in a satisfactory manner, then there is a prima facie reason to reconsider and possibly to amend some of its rules.
Let's now consider how these principles of learning, transfer, and testing apply to John Dewey's "Sign of Rain" example.
Learning
Rules in a knowledge base, as far as their effective content goes, can be obtained by any mode of inference.
For example, a rule like:
Rule: B → A, Just Before it rains, the Air is cool,
is usually induced from a consideration of many past events, in a manner that can be rationally reconstructed as follows:
Case: C → B, In Certain events, it is just Before it rains,
Fact: C → A, In Certain events, the Air is cool,
------------------------------------------------------------------------------------------
Rule: B → A, Just Before it rains, the Air is cool.
However, the very same proposition could also be abduced as an explanation of a singular occurrence or deduced as a conclusion of a presumptive theory.
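As a quick sketch (ours, with assumed variable names), the inductive step simply reads off the Rule once the minor terms match:

# Rational reconstruction of learning the Rule B -> A by induction,
# with implications as (antecedent, consequent) pairs.
case = ("C", "B")    # In Certain events, it is just Before it rains.
fact = ("C", "A")    # In Certain events, the Air is cool.
assert case[0] == fact[0]   # the minor terms match
rule = (case[1], fact[1])   # induced Rule: B -> A
print(rule)                 # ('B', 'A')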
Transfer
What is it that gives a distinctively inductive character to the acquisition of a knowledge base? It is evidently the "analogy of experience" that underlies its useful application. Whenever we find ourselves prefacing an argument with the phrase "If past experience is any guide..." then we can be sure that this principle has come into play. We are invoking an analogy between past experience, considered as a totality, and present experience, considered as a point of application. What we mean in practice is this: "If past experience is a fair sample of possible experience, then the knowledge gained in it applies to present experience". This is the mechanism that allows a knowledge base to be carried across gulfs of experience that are indifferent to the effective contents of its rules.
Here are the details of how this notion of transfer works out in the case of the "Sign of Rain" example:
Let K(pres) be a portion of the reasoner's knowledge base that is logically equivalent to the conjunction of two rules, as follows:
K(pres) = (B → A) and (B → D).
K(pres) is the present knowledge base, expressed in the form of a logical constraint on the present universe of discourse.
It is convenient to have the option of expressing all logical statements in terms of their logical models, that is, in terms of the primitive circumstances or the elements of experience over which they hold true.
Let E(past) be the chosen set of experiences, or the circumstances that we have in mind when we refer to "past experience".
Let E(poss) be the collective set of experiences, or the projective total of possible circumstances.
Let E(pres) be the present experience, or the circumstances that are present to the reasoner at the current moment.
If we think of the knowledge base K(pres) as referring to the "regime of experience" over which it is valid, then all of these sets of models can be compared by the simple relations of set inclusion or logical implication.
Figure 5 schematizes this way of viewing the "analogy of experience".
o-----------------------------------------------------------o
| |
| K(pres) |
| o |
| /|\ |
| / | \ |
| / | \ |
| / | \ |
| / Rule \ |
| / | \ |
| / | \ |
| / | \ |
| / E(poss) \ |
| Fact / o \ Fact |
| / * * \ |
| / * * \ |
| / * * \ |
| / * * \ |
| / * * \ |
| / * Case Case * \ |
| / * * \ |
| / * * \ |
| /* *\ |
| o<<<---------------<<<---------------<<<o |
| E(past) Analogy Morphism E(pres) |
| More Known Less Known |
| |
o-----------------------------------------------------------o
Figure 5. Analogy of Experience
In these terms, the "analogy of experience" proceeds by inducing a Rule about the validity of a current knowledge base and then deducing a Fact, its applicability to a current experience, as in the following sequence:
Inductive Phase:
Given Case: E(past) → E(poss), Chosen events fairly sample Collective events.
Given Fact: E(past) → K(pres), Chosen events support the Knowledge regime.
-----------------------------------------------------------------------------------------------------------------------------
Induce Rule: E(poss) → K(pres), Collective events support the Knowledge regime.
Deductive Phase:
Given Case: E(pres) → E(poss), Current events fairly sample Collective events.
Given Rule: E(poss) → K(pres), Collective events support the Knowledge regime.
--------------------------------------------------------------------------------------------------------------------------------
Deduce Fact: E(pres) → K(pres), Current events support the Knowledge regime.
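Read extensionally, both phases reduce to subset checks among classes of models. The following toy universe is our own assumption, made only to illustrate the pattern; in it, an implication P → Q is identified with set inclusion between the models of P and the models of Q.

# A toy model of the analogy of experience. Each proposition is
# identified with the set of circumstances (models) over which it
# holds true, so P -> Q becomes set inclusion.

E_past = {"e1", "e2", "e3"}        # E(past): chosen past experiences
E_poss = {"e1", "e2", "e3", "e4"}  # E(poss): all possible circumstances
E_pres = {"e4"}                    # E(pres): the present experience
K_pres = {"e1", "e2", "e3", "e4"}  # K(pres): models satisfying the rules

def implies(p, q):
    """P -> Q, read extensionally: every model of P is a model of Q."""
    return p <= q

# Inductive phase: from E(past) -> E(poss) and E(past) -> K(pres),
# hazard the Rule E(poss) -> K(pres). In this toy universe the Rule
# happens to hold; in general, induction only makes it plausible.
assert implies(E_past, E_poss) and implies(E_past, K_pres)
rule_holds = implies(E_poss, K_pres)

# Deductive phase: from E(pres) -> E(poss) and the induced Rule,
# conclude E(pres) -> K(pres): the knowledge base applies now.
if rule_holds and implies(E_pres, E_poss):
    assert implies(E_pres, K_pres)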
Testing
If the observer looks up and does not see dark clouds, or if he runs for shelter but it does not rain, then there is fresh occasion to question the utility or the validity of his knowledge base. But we must leave our foulweather friend for now and defer the logical analysis of this testing phase to another occasion.
See also
Charles Sanders Peirce bibliography
Community of inquiry
C. West Churchman
Curiosity
Empirical limits in science
Information entropy
Information theory
Inquisitive learning
Instrumental and intrinsic value
Logic of information
Models of scientific inquiry
Pragmatic information
Pragmatic theory of truth
Pragmaticism
Research
Research question
Uncertainty
Notes
Citations
Bibliography
Angluin, Dana (1989), "Learning with Hints", pp. 167–181 in David Haussler and Leonard Pitt (eds.), Proceedings of the 1988 Workshop on Computational Learning Theory, MIT, 3–5 August 1988, Morgan Kaufmann, San Mateo, CA, 1989.
Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181–531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
Awbrey, Jon, and Awbrey, Susan (1995), "Interpretation as Action: The Risk of Inquiry", Inquiry: Critical Thinking Across the Disciplines 15, 40–52. Eprint.
Delaney, C.F. (1993), Science, Knowledge, and Mind: A Study in the Philosophy of C.S. Peirce, University of Notre Dame Press, Notre Dame, IN.
Dewey, John (1910), How We Think, D.C. Heath, Lexington, MA, 1910. Reprinted, Prometheus Books, Buffalo, NY, 1991.
Dewey, John (1938), Logic: The Theory of Inquiry, Henry Holt and Company, New York, NY, 1938. Reprinted as pp. 1–527 in John Dewey, The Later Works, 1925–1953, Volume 12: 1938, Jo Ann Boydston (ed.), Kathleen Poulos (text. ed.), Ernest Nagel (intro.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1986.
Haack, Susan (1993), Evidence and Inquiry: Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford, UK.
Hanson, Norwood Russell (1958), Patterns of Discovery, An Inquiry into the Conceptual Foundations of Science, Cambridge University Press, Cambridge, UK.
Hendricks, Vincent F. (2005), Thought 2 Talk: A Crash Course in Reflection and Expression, Automatic Press / VIP, New York, NY.
Maxwell, Nicholas (2007), From Knowledge to Wisdom, Pentire Press, London.
Maxwell, Nicholas (2017), In Praise of Natural Philosophy: A Revolution for Thought and Life, McGill-Queen's University Press, Montreal.
Misak, Cheryl J. (1991), Truth and the End of Inquiry, A Peircean Account of Truth, Oxford University Press, Oxford, UK.
Peirce, C.S., (1931–1935, 1958), Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA. Cited as CP volume.paragraph.
Stalnaker, Robert C. (1984), Inquiry, MIT Press, Cambridge, MA.
External links
Philosophical logic
Philosophy of science
Evaluation methods
Critical thinking skills
Wikipedia articles with ASCII art
Universalism

Universalism is the philosophical and theological concept that some ideas have universal application or applicability.
A belief in one fundamental truth is another important tenet in universalism. The living truth is seen as more far-reaching than the national, cultural, or religious boundaries or interpretations of that one truth. A community that calls itself universalist may emphasize the universal principles of most religions, and accept others in an inclusive manner.
In the modern context, Universalism can also mean the Western pursuit of unification of all human beings across geographic and other boundaries under Western values, or the application of genuinely universal or universalist constructs, such as human rights or international law.
Universalism has had an influence on modern-day Hinduism, in turn influencing modern Western spirituality.
Christian universalism refers to the idea that every human will eventually receive salvation in a religious or spiritual sense, a concept also referred to as universal reconciliation.
Philosophy
Universality
In philosophy, universality is the notion that universal facts can be discovered and is therefore understood as being in opposition to relativism and nominalism.
Moral universalism
Moral universalism (also called moral objectivism or universal morality) is the meta-ethical position that some system of ethics applies universally. That system is inclusive of all individuals, regardless of culture, race, sex, religion, nationality, sexual orientation, or any other distinguishing feature. Moral universalism is opposed to moral nihilism and moral relativism. However, not all forms of moral universalism are absolutist, nor do they necessarily value monism. Many forms of universalism, such as utilitarianism, are non-absolutist. Other forms such as those theorized by Isaiah Berlin, may value pluralist ideals.
Religion
Baháʼí Faith
In the teachings of the Baháʼí Faith, a single God has sent all the historic founders of the world religions in a process of progressive revelation. As a result, the major world religions are seen as divine in origin and are continuous in their purpose. In this view, there is unity among the founders of world religions, but each revelation brings a more advanced set of teachings in human history and none are syncretic. In addition, the Baháʼí teachings acknowledge that in every country and every people God has always revealed the divine purpose via messengers and prophets, masters and sages since time immemorial.
Within this universal view, the unity of humanity is one of the central teachings of the Baháʼí Faith. The Baháʼí teachings state that since all humans have been created in the image of God, God does not make any distinction between people with regard to race, colour or religion. Thus, because all humans have been created equal, they all require equal opportunities and treatment. Hence the Baháʼí view promotes the unity of humanity, and that people's vision should be world-embracing and that people should love the whole world rather than just their nation.
The teaching, however, does not equate unity with uniformity; instead the Baháʼí writings advocate the principle of unity in diversity where the variety in the human race is valued. Operating on a worldwide basis this cooperative view of the peoples and nations of the planet culminates in a vision of the practicality of the progression in world affairs towards, and the inevitability of, world peace.
Buddhism
The term Universalism has been applied to different aspects of Buddhist thought by different modern authors.
The idea of universal salvation is key to the Mahayana school of Buddhism. A common feature of Mahayana Buddhism is the idea that all living beings have Buddha nature and thus all beings can aspire to become bodhisattvas, beings who are on the path to Buddhahood. This capacity is seen as something that all beings in the universe have. This idea has been termed "bodhisattva universalism" by the Buddhist studies scholar Jan Nattier.
The idea of universal Buddha nature has been interpreted in various ways in Buddhism, from the idea that all living beings have Buddha nature and thus can become Buddhas to the idea that because all beings have Buddha nature, all beings will definitely become Buddhas. Some forms of East Asian Mahayana Buddhism even extended the Buddha nature theory to plants and insentient phenomena. Some thinkers (such as Kukai) even promote the idea that the entire universe is the Buddha's body.
The Lotus Sutra, an influential Mahayana scripture, is often seen as promoting the universality of Buddhahood, the Buddha's teaching as well as the equality of all living beings. Mahayana Buddhism also promotes a universal compassion towards all sentient beings and sees all beings as equally deserving of compassion. The doctrine of the One Vehicle (which states that all Buddhist paths lead to Buddhahood) is also often seen as a universalist doctrine.
Adherents of Pure Land Buddhism point to Amitabha Buddha as a Universal Savior. According to the Pure Land Sutras (scriptures), before becoming a Buddha, Amitabha vowed that he would save all beings, and according to some Pure Land authors, all beings will eventually be saved through the work of Amida Buddha. As such, Pure Land Buddhism is often seen as an expression of a Buddhist universalism that compares to Christian universalism. This comparison has also been commented on by Christian theologians like Karl Barth.
Chinese Buddhism developed a form of Buddhist universalism which saw Confucianism, Daoism and Buddhism as different aspects of a single universal truth.
In Western Buddhism, the term Universalism may also refer to a nonsectarian and eclectic form of Buddhism which emphasizes ecumenism among the different Buddhist schools. American clergyman Julius A. Goldwater was one Buddhist figure who promoted a modern kind of Buddhist Universalism. For Goldwater, Buddhism transcends local contexts and culture, and his practice grew increasingly eclectic over time. Goldwater established the nonsectarian Buddhist Brotherhood of America, which focused on ecumenical and nonsectarian Buddhism while also drawing on Protestant vocabulary and ideas.
The desire to develop a more universalist and nonsectarian form of Buddhism was also shared by some modernist Japanese Buddhist authors, including the influential D.T. Suzuki.
Christianity
The fundamental idea of Christian universalism is universal reconciliation – that all humans will ultimately receive salvation and be reconciled to God. They will eventually enter God's kingdom in Heaven, through the grace and works of the Lord Jesus Christ. Christian universalists hold that an everlasting hell does not exist (though most believe there is a temporary hell of some kind), and that unending torment was not what Jesus taught. They point to historical evidence showing that many early fathers of the church were universalists and attribute the origin of the idea of hell as eternal punishment to mistranslation. They also appeal to many texts of Scripture to argue that the concept of eternal hell is not biblically or historically supported either in Judaism or early Christianity.
Universalists cite numerous biblical passages which reference the salvation of all beings (such as Jesus' words in John 12:31-32, and Paul's words in Romans 5:18-19). In addition, they argue that an eternal hell is both unjust and contrary to the nature and attributes of a loving God.
The beliefs of Christian universalism are generally compatible with the essentials of Christianity, as they do not contradict any of the central affirmations summarized in the Nicene Creed. More specifically, universalists often emphasize the following teachings:
God is the loving Parent of all people (see Love of God).
Jesus Christ reveals the nature and character of God, and is the spiritual leader of humankind.
Humankind is created with an immortal soul, which death cannot end, or with a mortal soul that shall be resurrected and preserved by God; in either case, a soul which God will not wholly destroy.
Sin has negative consequences for the sinner either in this life or the afterlife. All of God's punishments for sin are corrective and remedial. None of such punishments will last forever, or result in the permanent destruction of a soul. Some Christian universalists believe in the idea of a Purgatorial Hell, or a temporary place of purification that some must undergo before their entrance into Heaven.
In 1899 the Universalist General Convention, later called the Universalist Church of America, adopted the Five Principles: the belief in God, in Jesus Christ, in the immortality of the human soul, in the reality of sin, and in universal reconciliation.
History
Universalist writers such as George T. Knight have claimed that Universalism was a widely held view among theologians in Early Christianity. These included such important figures as the Alexandrian scholar Origen, as well as Clement of Alexandria, a Christian theologian. Origen and Clement both included the existence of a non-eternal Hell in their teachings. Hell was remedial, in that it was a place one went to purge one's sins before entering into Heaven.
Between 1648 and 1697, the English activist Gerrard Winstanley, the writer Richard Coppin, and the dissenter Jane Leade each taught that God would grant all human beings salvation. The same teachings were later spread throughout 18th-century France and America by George de Benneville. People who taught this doctrine in America would later become known as the Universalist Church of America. The first Universalist Church in America was founded by the minister John Murray.
The Greek term apocatastasis came to be related by some to the beliefs of Christian universalism, but central to the doctrine was the restitution, or restoration of all sinful beings to God, and to His state of blessedness. In early Patristics, usage of the term is distinct.
Universalist theology
Universalist theology is grounded in history, scripture, and assumptions about the nature of God. That All Shall Be Saved (2019) by Orthodox Christian theologian David Bentley Hart contains arguments from all three areas, but with a focus on arguments from the nature of God. Thomas Whittemore wrote the book 100 Scriptural Proofs that Jesus Christ Will Save All Mankind, quoting both Old and New Testament verses which support the Universalist viewpoint.
Some Bible verses that he cites, and that are cited by other Christian universalists, are:
Luke 3:6: "And all people will see God's salvation." (NIV)
John 17:2: "since thou hast given him power over all flesh, to give eternal life to all whom thou hast given him." (RSV)
1 Corinthians 15:22: "For as in Adam all die, so also in Christ shall all be made alive." (ESV)
2 Peter 3:9: "The Lord is not slow to fulfill his promise as some count slowness, but is patient toward you, not wishing that any should perish, but that all should reach repentance." (ESV)
1 Timothy 2:3–6: "This is good, and pleases God our Savior, who wants all men to be saved and to come to a knowledge of the truth. For there is one God and one mediator between God and men, the man Christ Jesus, who gave himself as a ransom for all men—the testimony given in its proper time." (NIV)
1 John 2:2: "He is the atoning sacrifice for our sins, and not only for ours but also for the sins of the whole world." (NIV)
1 Timothy 4:10: "For to this end we toil and strive, because we have our hope set on the living God, who is the Savior of all people, especially of those who believe." (ESV)
Romans 5:18: "Then as one man's trespass led to condemnation for all men, so one man's act of righteousness leads to acquittal and life for all men." (RSV)
Romans 11:32: "For God has bound all men over to disobedience so that he may have mercy on them all." (NIV)
Questions of Biblical Translation
Christian universalists point towards the mistranslations of the Greek word αιών (literally "age," but often assumed to mean "eternity") and its adjectival form αἰώνιος (usually assumed to mean "eternal" or "everlasting"), as giving rise to the idea of an endless hell and the idea that some people will never be saved. For example, Revelation 14:11 says "the smoke of their torment goes up εἰς αἰῶνας αἰώνων", which most literally means "unto ages of ages" but is often paraphrased in translations as "forever and ever."
This Greek word is the origin of the modern English word eon, which refers to a period of time or an epoch/age.
The 19th century theologian Marvin Vincent wrote about the word aion, and the supposed connotations of "eternal" or "temporal":
Aion, transliterated aeon, is a period of longer or shorter duration, having a beginning and an end, and complete in itself. [...] Neither the noun nor the adjective, in themselves, carry the sense of endless or everlasting.

A number of scholars have argued that, in some cases, the adjective may not indicate duration at all, but may instead have a qualitative meaning. For instance, Dr. David Bentley Hart translates Matthew 25:46 as "And these will go to the chastening of that Age, but the just to the life of that Age." In this reading, Jesus is not necessarily indicating how long the life and punishment last, but instead what kind of life and punishment they are: they are "of the age [to come]" rather than being earthly life or punishment. Dr. Thomas Talbott writes:

[The writers of the New Testament] therefore came to employ the term aiōnios as an eschatological term, one that functioned as a handy reference to the realities of the age to come. In that way they managed to combine the more literal sense of "that which pertains to an age" with the more religious sense of "that which manifests the presence of God in a special way."

Dr. Ken Vincent writes that "When it (aion) was translated into the Latin Vulgate, 'aion' became 'aeternam', which means 'eternal'." Likewise, Dr. Ilaria Ramelli explains:

The mistranslation and misinterpretation of αἰώνιος as "eternal" (already in Latin, where both αἰώνιος and ἀΐδιος are rendered aeternus and their fundamental semantic difference is blurred) certainly contributed a great deal to the rise of the doctrine of "eternal damnation" and of the "eternity of hell."

Among the English translations that do not render αἰώνιος as "eternal" or "everlasting" are Young's Literal Translation ("age-during"), the Weymouth New Testament ("of the ages"), the Concordant Literal Version ("eonian"), Rotherham's Emphasized Bible ("age-abiding"), Hart's New Testament ("of that Age"), and more.
Catholicism
The Catholic church believes that God judges everyone based only on their moral acts, that no one should be subject to human misery, that everyone is equal in dignity yet distinct in individuality before God, that no one should be discriminated against because of their sin or concupiscence, and that apart from coercion God exhausts every means to save mankind from evil: original holiness being intended for everyone, the irrevocable Old Testament covenants, each religion being a share in the truth, elements of sanctification in non-Catholic Christian communities, the good people of every religion and nation, everyone being called to baptism and confession, and Purgatory, suffrages, and indulgences for the dead.

The church believes that everyone is predestined to Heaven, that no one is predestined to Hell, that everyone is redeemed by Christ's Passion, that no one is excluded from the church except by sin, and that everyone can either love God by loving others unto going to Heaven or reject God by sin unto going to Hell.

The church believes that God's predestination takes everything into account, and that his providence brings out of evil a greater good, as evidenced, the church believes, by the Passion of Christ being all at once predestined by God, foretold in Scripture, necessitated by original sin, authored by everyone who sins, caused by Christ's executioners, and freely planned and undergone by Christ.

The church believes that everyone who goes to Heaven joins the church, and that from the beginning God intended Israel to be the beginning of the church, wherein God would unite all persons to each other and to God. The church believes that Heaven and Hell are eternal.
Hinduism
Author David Frawley says that Hinduism has a "background universalism" and its teachings contain a "universal relevance." Hinduism is also naturally religiously pluralistic. A well-known Rig Vedic hymn says: "Truth is One, though the sages know it variously." Similarly, in the Bhagavad Gītā (4:11), God, manifesting as an incarnation, states: "As people approach me, so I receive them. All paths lead to me." The Hindu religion has no theological difficulties in accepting degrees of truth in other religions. Hinduism emphasizes that everyone actually worships the same God, whether one knows it or not.
While Hinduism has an openness and tolerance towards other religions, it also has a wide range of diversity within it. There are considered to be six orthodox Hindu schools of philosophy/theology, called darshanas, as well as multiple unorthodox or "heterodox" traditions.
Hindu universalism
Hindu universalism, also called Neo-Vedanta and neo-Hinduism, is a modern interpretation of Hinduism which developed in response to western colonialism and orientalism. It denotes the ideology that all religions are true and therefore worthy of toleration and respect.
It is a modern interpretation that aims to present Hinduism as a "homogenized ideal of Hinduism" with Advaita Vedanta as its central doctrine. For example, it presents that:
Hinduism embraces universalism by conceiving the whole world as a single family that deifies the one truth, and therefore it accepts all forms of beliefs and dismisses labels of distinct religions which would imply a division of identity.
This modernised re-interpretation has become a broad current in Indian culture, extending far beyond the Dashanami Sampradaya, the Advaita Vedanta Sampradaya founded by Adi Shankara. An early exponent of Hindu Universalism was Ram Mohan Roy, who established the Brahmo Samaj. Hindu Universalism was popularised in the 20th century in both India and the West by Vivekananda and Sarvepalli Radhakrishnan. Veneration for all other religions was articulated by Gandhi:
Western orientalists played an important role in this popularisation, regarding Vedanta to be the "central theology of Hinduism". Oriental scholarship portrayed Hinduism as a "single world religion", and denigrated the heterogeneity of Hindu beliefs and practices as 'distortions' of the basic teachings of Vedanta.
Islam
Islam recognizes to a certain extent the validity of the Abrahamic religions, the Quran identifying Jews, Christians, and "Sabi'un" (usually taken as a reference to the Mandaeans) as "people of the Book" (ahl al-kitab). Later Islamic theologians expanded this definition to include Zoroastrians, and later even Hindus, as the early Islamic empire brought many people professing these religions under its dominion, but the Qur'an explicitly identifies only Jews, Christians, and Sabians as People of the Book. The relation between Islam and universalism has assumed crucial importance in the context of political Islam or Islamism, particularly in reference to Sayyid Qutb, a leading member of the Muslim Brotherhood movement and one of the key contemporary philosophers of Islam.
There are several views within Islam with respect to Universalism. According to the most inclusive teachings all peoples of the book have a chance of salvation. For example, Surah 2:62 states:
However, the most exclusive teachings disagree. For example, Surah 9:5 states:
The interpretation of all of these passages is hotly contested amongst various schools of thought and branches of Islam, as is the doctrine of abrogation (naskh), which is used to determine which verses take precedence, based on reconstructed chronology, with later verses superseding earlier ones. The ahadith also play a major role in this, and different schools of thought assign different weightings and rulings of authenticity to different hadith, with the four schools of Sunni thought accepting the Six Authentic Collections, generally along with the Muwatta Imam Malik. Depending on the level of acceptance or rejection of certain traditions, the interpretation of the Qur'an can vary immensely, from the Qur'anists, who reject the ahadith, to the ahl al-hadith, who hold the entirety of the traditional collections in great reverence.
Some Islamic scholars view the world as bipartite, consisting of the House of Islam, that is, where people live under the Sharia, and the House of War, that is, where the people do not live under Sharia, which must be proselytized using whatever resources are available, including, in some traditionalist and conservative interpretations, the use of violence, as holy struggle in the path of God, to either convert its inhabitants to Islam or to rule them under the Sharia (cf. dhimmi).
Judaism
Judaism teaches that God chose the Jewish people to be in a unique covenant with God, and one of their beliefs is that Jewish people were charged by the Torah with a specific mission—to be a light unto the nations, and to exemplify the covenant with God as described in the Torah to other nations. This view does not preclude a belief that God also has a relationship with other peoples—rather, Judaism holds that God had entered into a covenant with all humanity as Noachides, and that Jews and non-Jews alike have a relationship with God, as well as being universal in the sense that it is open to all mankind.
Modern Jews such as Emmanuel Levinas advocate a universalist mindset that is performed through particularist behavior. An online organization, the Jewish Spiritual Leaders Institute, founded and led by Steven Blane, who calls himself an "American Jewish Universalist Rabbi", believes in a more inclusive version of Jewish Universalism, stating that "God equally chose all nations to be lights unto the world, and we have much to learn and share with each other. We can only accomplish Tikkun Olam by our unconditional acceptance of each other's peaceful doctrines."
Manichaeism
Manichaeism, like Christian Gnosticism and Zurvanism, was arguably in some ways inherently universalist. Yet in other respects, it was quite contrary to universalistic principles, holding instead to an eternal dualism.
Sikhism
In Sikhism, all the religions of the world are compared to rivers flowing into a single ocean. Although the Sikh gurus did not agree with the practices of fasting, idolatry, and pilgrimage during their times, they stressed that all religions should be tolerated. The Sikh scripture, the Guru Granth Sahib, contains not just the writings of the Sikh gurus themselves, but also the writings of several Hindu and Muslim saints, known as the Bhagats.
The very first word of the Sikh scripture is "Ik", followed by "Omkar". This literally means that there is only one god, and that one is wholesome, inclusive of the whole universe. It further goes on to state that all of creation, and all energy is part of this primordial being. As such, it is described in scripture over and over again, that all that occurs is part of the divine will, and as such, has to be accepted. It occurs for a reason, even if it is beyond the grasp of one person to understand.
Although Sikhism does not teach that men are created as an image of God, it states that the essence of the One is to be found throughout all of its creation. As was said by Yogi Bhajan, the man who is credited with having brought Sikhism to the West:
The first Sikh Guru, Guru Nanak, himself said:
By this, Guru Nanak meant that there is no real "religion" in God's eyes. Unlike many of the major world religions, Sikhism does not have missionaries; instead, it believes men have the freedom to find their own path to salvation.
Unitarian Universalism
Unitarian Universalism (UU) is a theologically liberal religion characterized by a "free and responsible search for truth and meaning". Unitarian Universalists do not share a creed; rather, they are unified by their shared search for spiritual growth and by the understanding that an individual's theology is a result of that search and not a result of obedience to an authoritarian requirement. Unitarian Universalists draw from all major world religions and many different theological sources and have a wide range of beliefs and practices.
While having its origins in Christianity, UU is no longer a Christian church. As of 2006, fewer than 20% of Unitarian Universalists identified themselves as Christian. Contemporary Unitarian Universalism espouses a pluralist approach to religious belief, whereby members may describe themselves as humanist, agnostic, deist, atheist, pagan, Christian, monotheist, pantheist, polytheist, or assume no label at all.
The Unitarian Universalist Association (UUA) was formed in 1961, a consolidation of the American Unitarian Association, established in 1825, and the Universalist Church of America, established in 1866. It is headquartered in Boston, and mainly serves churches in the United States. The Canadian Unitarian Council became an independent body in 2002.
Zoroastrianism
Some varieties of Zoroastrianism (such as Zurvanism) are universalistic in application to all races, but not necessarily universalist in the sense of universal salvation.
Views of the Latter Day Saint Movement
See also
Ananda Marga
Christianity:
Liberal Catholic Church
Primitive Baptist Universalist
Religious Society of Friends
Schwarzenau Brethren
Swedenborgianism (The New Church)
Trinitarian Universalism
Church Universal and Triumphant
Comparative religion
Ecumenism
Hypothetical universalism
George MacDonald
Mahatma Gandhi Foundation
Omnism
Perennial philosophy
Post-theism
Religious liberalism
Religious pluralism
Spannian universalism
Subud
Universal basic income
Universal basic services
Universal health care
Universal suffrage
Universal Sufism
References
Sources
Further reading
External links
Catholic Encyclopedia article on Universalists as a Protestant denomination
Catholic Encyclopedia article on Apocatastasis/apokatastatis
TentMaker website – Many free books and articles on Universalism
Christian theological movements
Religious philosophical concepts
Philosophical theories
Religious pluralism
Religious terminology
Existential crisis

Existential crises are inner conflicts characterized by the impression that life lacks meaning and by confusion about one's personal identity. They are accompanied by anxiety and stress, often to such a degree that they disturb one's normal functioning in everyday life and lead to depression. The negative attitude towards meaning that characterizes them reflects the philosophical movement of existentialism. The components of existential crises can be divided into emotional, cognitive, and behavioral aspects. Emotional components refer to feelings such as emotional pain, despair, helplessness, guilt, anxiety, or loneliness. Cognitive components encompass the problem of meaninglessness, the loss of personal values or spiritual faith, and thinking about death. Behavioral components include addictions as well as anti-social and compulsive behavior.
Existential crises may occur at different stages in life: the teenage crisis, the quarter-life crisis, the mid-life crisis, and the later-life crisis. Earlier crises tend to be forward-looking: the individual is anxious and confused about which path in life to follow regarding education, career, personal identity, and social relationships. Later crises tend to be backward-looking. Often triggered by the impression that one is past one's peak in life, they are usually characterized by guilt, regret, and a fear of death. If an earlier existential crisis was properly resolved, it is easier for the individual to resolve or avoid later crises. Not everyone experiences existential crises in their life.
The problem of meaninglessness plays a central role in all of these types. It can arise in relation to cosmic meaning, which concerns the meaning of life at large or why we are here. Another form concerns personal secular meaning, in which the individual tries to discover purpose and value mainly for their own life. A crisis may be resolved by finding a source of meaning, such as altruism, dedication to a religious or political cause, or the development of one's own potential. Other approaches include adopting a new system of meaning, learning to accept meaninglessness, cognitive behavioral therapy, and the practice of social perspective-taking.
Negative consequences of existential crisis include anxiety and bad relationships on the personal level as well as a high divorce rate and decreased productivity on the social level. Some questionnaires, such as the Purpose in Life Test, measure whether someone is currently undergoing an existential crisis. Outside its main use in psychology and psychotherapy, the term "existential crisis" refers to a threat to the existence of something.
Definition
In psychology and psychotherapy, the term "existential crisis" refers to a form of inner conflict. It is characterized by the impression that life lacks meaning and is accompanied by various negative experiences, such as stress, anxiety, despair, and depression. This often happens to such a degree that it disturbs one's normal functioning in everyday life. The inner nature of this conflict sets existential crises apart from other types of crises that are mainly due to outward circumstances, like social or financial crises. Outward circumstances may still play a role in triggering or exacerbating an existential crisis, but the core conflict happens on an inner level. The most common approach to resolving an existential crisis consists in addressing this inner conflict and finding new sources of meaning in life.
The core issue responsible for the inner conflict is the impression that the individual's desire to lead a meaningful life is thwarted by an apparent lack of meaning, a predicament often compounded by confusion about what meaning really is and by constant self-questioning. In this sense, existential crises are crises of meaning. This is often understood through the lens of the philosophical movement known as existentialism. One important aspect of many forms of existentialism is that the individual seeks to live in a meaningful way but finds themselves in a meaningless and indifferent world. The exact term "existential crisis" is not commonly found in the traditional existentialist literature in philosophy. But various closely related technical terms are discussed, such as existential dread, existential vacuum, existential despair, existential neurosis, existential sickness, anxiety, and alienation.
Different authors focus in their definitions of existential crisis on different aspects. Some argue that existential crises are at their core crises of identity. On this view, they arise from a confusion about the question "Who am I?" and their goal is to achieve some form of clarity about oneself and one's position in the world. As identity crises, they involve intensive self-analysis, often in the form of exploring different ways of looking at oneself. They constitute a personal confrontation with certain key aspects of the human condition, like existence, death, freedom, and responsibility. In this sense, the person questions the very foundations of their life. Others emphasize the confrontation with human limitations, such as death and lack of control. Some stress the spiritual nature of existential crises by pointing out how outwardly successful people may still be severely affected by them if they lack the corresponding spiritual development.
The term "existential crisis" is most commonly used in the context of psychology and psychotherapy. But it can also be employed in a more literal sense as a crisis of existence to express that the existence of something is threatened. In this sense, a country, a company, or a social institution faces an existential crisis if political tensions, military threats , high debt, or social changes may have as a result that the corresponding entity ceases to exist.
Components
Existential crises are usually seen as complex phenomena that can be understood as consisting of various components. Some approaches distinguish three types of components belonging to the fields of emotion, cognition, and behavior. Emotional aspects correspond to what it feels like to have an existential crisis. It is usually associated with emotional pain, despair, helplessness, guilt, anxiety, and loneliness. On the cognitive side, the affected are often confronted with a loss of meaning and purpose together with the realization of one's own end. Behaviorally, existential crises may express themselves in addictions and anti-social behavior, sometimes paired with ritualistic behavior, loss of relationships, and degradation of one's health. While manifestations of these three components can usually be identified in every case of an existential crisis, there are often significant differences in how they manifest. Nonetheless, it has been suggested that these components can be used to give a more unified definition of existential crises.
Emotional
On the emotional level, existential crises are associated with unpleasant experiences, such as fear, anxiety, panic, and despair. They can be categorized as a form of emotional pain whereby people lose trust and hope. This pain often manifests in the form of despair and helplessness. The despair may be caused by being unable to find meaning in life, which is associated both with a lack of motivation and the absence of inner joy. The impression of helplessness arises from being unable to find a practical response to deal with the crisis and the associated despair. This helplessness concerns specifically a form of emotional vulnerability: the individual is not just subject to a wide range of negative emotions, but these emotions often seem to be outside the person's control. This feeling of vulnerability and lack of control can itself produce further negative impressions and may lead to a form of panic or a state of deep mourning.
But on the other hand, there is also often an impression in the affected that they are in some sense responsible for their predicament. This is the case, for example, if the loss of meaning is associated with bad choices in the past for which the individual feels guilty. But it can also take the form of a more abstract type of bad conscience as existential guilt. In this case, the agent carries a vague sense of guilt that is free-floating in the sense that it is not tied to any specific wrongdoing by the agent. Especially in existential crises in the later parts of one's life, this guilt is often accompanied by a fear of death. But just as in the case of guilt, this fear may also take a more abstract form as an unspecific anxiety associated with a sense of deficiency and meaninglessness.
As crises of identity, existential crises often lead to a disturbed sense of personal integrity. This can be provoked by the apparent meaninglessness of one's life together with a general lack of motivation. Central to the sense of personal integrity are close relationships with oneself, others, and the world. The absence of meaning usually has a negative impact on these relationships. As a lack of a clear purpose, it threatens one's personal integrity and can lead to insecurity, alienation, and self-abandonment. The negative impact on one's relationships with others is often experienced as a form of loneliness.
Depending on the person and the crisis they are suffering, some of these emotional aspects may be more or less pronounced. While they are all experienced as unpleasant, they often carry within them various positive potentials as well that can push the person in the direction of positive personal development. Through the experience of loneliness, for example, the person may achieve a better understanding of the substance and importance of relationships.
Cognitive
The main cognitive aspect of existential crises is the loss of meaning and purpose. In this context, the term "meaninglessness" refers to the general impression that there is no higher significance, direction, or purpose in our actions or in the world at large. It is associated with the question of why one is doing what one is doing and why one should continue. It is a central topic in existentialist psychotherapy, which has as one of its main goals to help the patient find a proper response to this meaninglessness. In Viktor Frankl's logotherapy, for example, the term existential vacuum is used to describe this state of mind. Many forms of existentialist psychotherapy aim to resolve existential crises by assisting the patient in rediscovering meaning in their life. Closely related to meaninglessness is the loss of personal values. This means that things that seemed valuable to the individual before, like the relation to a specific person or success in their career, may now appear insignificant or pointless to them. If the crisis is resolved, it can lead to the discovery of new values.
Another aspect of the cognitive component of many existential crises concerns the attitude to one's personal end, i.e. the realization that one will die one day. While this is not new information as an abstract insight, it takes on a more personal and concrete nature when one sees oneself confronted with this fact as a concrete reality one has to face. This aspect is of particular relevance for existential crises occurring later in life or when the crisis was triggered by the loss of a loved one or by the onset of a terminal disease. For many, the issue of their own death is associated with anxiety. But it has also been argued that the contemplation of one's death may act as a key to resolving an existential crisis. The reason for this is that the realization that one's time is limited can act as a source of meaning by making the remaining time more valuable and by making it easier to discern the bigger issues that matter in contrast to smaller everyday issues that can act as distractions. Important factors for dealing with imminent death include one's religious outlook, one's self-esteem, and social integration as well as one's future prospects.
Behavioral
Existential crises can have various effects on the individual's behavior. They often lead a person to isolate themselves and engage less in social interactions. For example, communication with one's housemates may be limited to very brief responses like a simple "yes" or "no" in order to avoid a more extended exchange, or the individual may cut back on forms of contact that are not strictly necessary. This can result in a long-term deterioration and loss of one's relationships. In some cases, existential crises may also express themselves in overtly anti-social behavior, like hostility or aggression. These negative impulses can also be directed at the person themselves, leading to self-injury and, in the worst case, suicide.
Addictive behavior is also seen in people going through an existential crisis. Some turn to drugs in order to lessen the impact of the negative experiences, whereas others hope that the non-ordinary experiences induced by drugs will teach them how to cope with the existential crisis. While this type of behavior can succeed in providing short-term relief from the effects of the existential crisis, it has been argued that it is usually maladaptive and fails in the long term. In this way, the crisis may even be exacerbated further. For the affected, it is often difficult to distinguish the need for pleasure and power from the need for meaning, thereby leading them on a wrong track in their efforts to resolve the crisis. The addictions themselves or the stress associated with existential crises can result in various health problems, ranging from high blood pressure to long-term organ damage and an increased likelihood of cancer.
Existential crises may also be accompanied by ritualistic behavior. In some cases, this can have positive effects by helping the affected transition to a new outlook on life. But it might also take the form of compulsive behavior that acts more as a distraction than as a step towards a solution. Another positive behavioral aspect concerns the tendency to seek therapy. This tendency reflects the affected person's awareness of the gravity of the problem and their desire to resolve it.
Types
Different types of existential crises are often distinguished based on the time in one's life when they occur. This approach rests on the idea that, depending on one's stage in life, individuals are faced with different issues connected to meaning and purpose. If these issues are not properly resolved, they lead to different types of crises. The stages are usually tied to rough age groups, but this correspondence is not always accurate since different people of the same age group may find themselves in different life situations and different stages of development. Being aware of these differences is central to properly assessing the issue at the core of a specific crisis and finding a corresponding response to resolve it.
The most well-known existential crisis is the mid-life crisis and a lot of research is directed specifically at this type of crisis. But researchers have additionally discovered various other existential crises belonging to different types. There is no general agreement about their exact number and periodization. Because of this, the categorizations of different theorists do not always coincide but they have significant overlaps. One categorization distinguishes between the early teenage crisis, the sophomore crisis, the adult crisis, the mid-life crisis, and the later-life crisis. Another focuses only on the sophomore crisis, the adult crisis, and the later-life crisis but defines them in wider terms. The sophomore crisis and the adult crisis are often treated together as forms of the quarter-life crisis.
There is wide agreement that the earlier crises tend to be more forward-looking and are characterized by anxiety and confusion about the path in life one wants to follow. The later crises, on the other hand, are more backward-looking, often in the form of guilt and regrets, while also concerned with the problem of one's own mortality.
These different crises can affect each other in various ways. For example, if an earlier crisis was not properly resolved, later crises may impose additional difficulties for the affected. But even if an earlier crisis was fully resolved, this does not guarantee that later crises will be successfully resolved or avoided altogether.
Another approach distinguishes existential crises based on their intensity. Some theorists use the terms existential vacuum and existential neurosis to refer to different degrees of existential crisis. On this view, an existential vacuum is a rather common phenomenon characterized by the frequent recurrence of subjective states like boredom, apathy, and emptiness. Some people experience this only in their free time but are otherwise not troubled by it. The term "Sunday neurosis" is often used in this context. An existential vacuum becomes an existential neurosis if it is paired with overt clinical neurotic symptoms, such as depression or alcoholism.
Teenage
The early teenage crisis involves the transition from childhood to adulthood and is centered around the issue of developing one's individuality and independence. This concerns specifically the relation to one's family and often leads to spending more time with one's peers instead. Various rebellious and anti-social behaviors sometimes seen in this developmental stage, like stealing or trespassing, may be interpreted as attempts to achieve independence. It can also give rise to a new type of conformity concerning, for example, how the teenager dresses or behaves. This conformity tends to be not in relation to one's family or public standards but to one's peer group or adored celebrities. But this may be seen as a temporary step in order to distance oneself from previously accepted standards, with later steps emphasizing one's independence also from one's peer group and celebrity influences. A central factor for resolving the early teenage crisis is that meaning and purpose are found in one's new identity, since independence without them can result in the feeling of being lost and may lead to depression. Another factor pertains to the role of the parents. By looking for signs of depression, they may become aware that a teenager is going through a crisis. Examples include changes in appetite, sleeping more or less than usual, a rapid decline in grades, increased social withdrawal and isolation, and becoming easily irritated. If parents regularly talk to their teenagers and ask them questions, it is more likely that they detect the presence of a crisis.
Quarter-life, sophomore, and adult
The term "quarter-life crisis" is often used to refer to existential crises occurring in early adulthood, i.e. roughly during the ages between 18 and 30. Some authors distinguish between two separate crises that may occur at this stage in life: the sophomore crisis and the adult crisis. The sophomore crisis affects primarily people in their late teenage years or their early 20s. It is also referred to as "sophomore slump", specifically when it affects students. It is the first time that serious questions about the meaning of life and one's role in the world are formulated. At this stage, these questions have a direct practical relation to one's future. They apply to what paths one wants to choose in life, like which career to focus on and how to form successful relationships. At the center of the sophomore crisis is the anxiety over one's future, i.e. how to lead one's life and how to best develop and employ one's abilities. Existential crisis often specifically affect high achievers who fear that they do not reach their highest potential since they lack a secure plan for the future. To solve them, it is necessary to find meaningful answers to these questions. Such answers may result in practical commitments and can inform later life decisions. Some people who have already made their career choices at a very early age may never experience a sophomore crisis. But such decisions can lead to problems later on since they are usually mainly informed by the outlook of one's social environment and less by the introspective insight into one's individual preferences. If there turns out to be a big discrepancy between the two, it can provoke a more severe form of the sophomore crisis later on. James Marcia defines this early commitment without sufficient exploration as identity foreclosure.
The adult crisis usually starts in the mid- to late 20s. The issues faced in it overlap to some extent with the ones in the sophomore crisis, but they tend to be more complex issues of identity. As such, they also circle around one's career and one's path in life. But they tend to take more details into account, like one's choice of religion, one's political outlook, or one's sexuality. Resolving the adult crisis means having a good idea of who one is as a person and being comfortable with this idea. It is usually associated with reaching full adulthood, having completed school, working full-time, having left one's home, and being financially independent. Being unable to resolve the adult crisis may result in disorientation, a lack of confidence in one's personal identity, and depression.
Mid-life
Among the different types of existential crises, the mid-life crisis is the one most widely discussed. It often sets in around the age of 40 and can be triggered by the impression that one's personal growth is obstructed. This may be combined with the sense that there is a significant distance between one's achievements and one's aspirations. In contrast to the earlier existential crises, it also involves a backward-looking component: previous choices in life are questioned and their meaning for one's achievements is assessed. This may lead to regrets and dissatisfaction with one's life choices on various topics, such as career, partner, children, social status, or missed opportunities. The tendency to look backward is often connected to the impression that one is past one's peak period in life.
Sometimes five intermediary stages are distinguished: accommodation, separation, liminality, reintegration, and individuation. In these stages, the individual first adapts to changed external demands, then addresses the distance between their innate motives and the external persona, next rejects their previously adaptive persona, later adopts their new persona, and lastly becomes aware of the external consequences associated with these changes.
Mid-life crises can be triggered by specific events such as losing a job, forced unemployment, extramarital affairs, separation, death of a loved one, or health problems. In this sense, the mid-life crisis can be understood as a period of transition or reevaluation in which the individual tries to adapt to their changed situation in life, both in response to the particular triggering event and to the more general changes that come with age.
Various symptoms are associated with mid-life crises, such as stress, boredom, self-doubt, compulsivity, changes in libido and sexual preferences, rumination, and insecurity. In public discourse, the mid-life crisis is primarily associated with men, often in direct relation to their career. But it affects women just as much. An additional factor for them is the limited time left in their reproductive period or the onset of menopause. Between 8 and 25 percent of Americans over the age of thirty-five have experienced a mid-life crisis.
Both the severity and the length of the mid-life crisis are often affected by whether and how well the earlier crises were resolved. People who managed to resolve earlier crises well tend to feel more fulfilled with their life choices, which is also reflected in how meaningful those choices appear when they look back on them. But this does not ensure that the choices still appear meaningful from one's current perspective.
Later-life
The later-life crisis often occurs around one's late 60s. It may be triggered by events such as retirement, the death of a loved one, serious illness, or imminent death. At its core is a backward-looking reflection on how one led one's life and the choices one made. This reflection is usually motivated by a desire to have lived a valuable and meaningful life paired with an uncertainty of one's success. A contemplation of one's past wrongdoings may also be motivated by a desire to find a way to make up for them while one still can. It can also express itself in a more theoretical form as trying to assess whether one's life made a positive impact on one's more immediate environment or the world at large. This is often associated with the desire to leave a positive and influential legacy behind.
Because of its backward-looking nature, there may be less one can do to truly resolve the crisis. This is true especially for people who arrive at a negative assessment of their life. An additional impeding factor in contrast to earlier crises is that individuals are often unable to find the energy and youthfulness necessary to make meaningful changes to their lives. Some suggest that developing an acceptance of the reality of death may help in the process. Other suggestions focus less on outright resolving the crisis but more on avoiding or minimizing its negative impact. Recommendations to this end include looking after one's physical, economic, and emotional well-being as well as developing and maintaining a social network of support. The best way to avoid the crisis as much as possible may be to ensure that one's earlier crises in life are resolved.
Meaninglessness
Most theorists see meaninglessness as the central issue around which existential crises revolve. In this sense, they may be understood as crises of meaning. The issue of meaning and meaninglessness concerns various closely related questions. Understood in the widest sense, it involves the global questions of the meaning of life in general, why we are here, or for what purpose we live. Answers to this question traditionally take the form of religious explanations, for example, that the world was created by God according to His purpose and that each thing is meaningful because it plays a role for this higher purpose. This is sometimes termed cosmic meaning in contrast to the secular personal meaning an individual seeks when asking in what way their particular life is meaningful or valuable. In this personal sense, it is often connected with a practical confusion about how one should live one's life or why one should continue doing what one does. This can express itself in the feeling that one has nothing to live for or to hope for. Sometimes this is even interpreted in the sense that there is no right and wrong or good and evil. While it may be more and more difficult in the contemporary secular world to find cosmic meaning, it has been argued that to resolve the problem of meaninglessness, it is sufficient for the individual to find a secular personal meaning to hold onto.
The issue of meaninglessness becomes a problem because humans seem to have a strong desire or need for meaning. This expresses itself both emotionally and practically since goals and ideals are needed to structure one's life. The other side of the problem is given in the fact that there seems to be no such meaning or that the world is at its bottom contingent and could have existed in a very different way or not at all. The world's contingency and indifference to human affairs are often referred to as the absurd in the existentialist literature. The problem can be summarized through the question "How does a being who needs meaning find meaning in a universe that has no meaning?". Various practitioners of existential psychotherapy have affirmed that the loss of meaning plays a role for the majority of people requiring psychotherapy and is the central issue for a significant number of them. But this loss has its most characteristic expression in existential crises.
Various factors affect whether life is experienced as meaningful, such as social relationships, religion, and thoughts about the past or future. Judgments of meaning are quite subjective. They are a form of global assessment since they take one's life as a whole into consideration. It is sometimes argued that the problem of a loss of meaning is particularly associated with modern society. This is often based on the idea that people tended to be more grounded in their immediate social environment, their profession, and their religion in premodern times.
Sources of meaning
It is usually held that humans have a need for meaning. This need may be satisfied by finding an accessible source of meaning. Religious faith can be a source of meaning and many studies demonstrate that it is associated with self-reported meaning in life. Another important source of meaning is due to one's social relationships. Lacking or losing a source of meaning, on the other hand, often leads to an existential crisis. In some cases, this change is clearly linked to a specific source of meaning that becomes inaccessible. For example, a religious person confronted with the vast extent of death and suffering may find their faith in a benevolent, omnipotent God shattered and thereby lose the ability to find meaning in life. For others, a concrete threat of imminent death, for example, due to the disruption of the social order, can have a similar effect. If the individual is unable to assimilate, reinterpret, or ignore this type of threatening information, the loss of their primary source of meaning may force them to reevaluate their system of meaning in life from the ground up. In this case, the person is entering an existential crisis, which can bring with it the need to question what other sources of meaning are accessible to them or whether there is meaning at all. Many different sources of meaning are discussed in the academic literature. Discovering such a source for oneself is often key to resolving an existential crisis. The sources discussed in the literature can be divided into altruism, dedication to a cause, creativity, hedonism, self-actualization, and finding the right attitude.
Altruism refers to the practice or attitude based on the desire to benefit others. Altruists aim to make the world a better place than they found it. This can happen in various ways. On a small scale, one may try to be kinder to the people in one's immediate social environment. It can include the effort to become aware of their problems and try to help them, directly or indirectly. But the altruistic attitude may also express itself in a less personal form towards strangers, for example, by donating money to charities. Effective altruism is an example of a contemporary movement promoting altruism and providing concrete advice on how to live altruistically. It has been argued that altruism can be a strong source of meaning in one's life. This is also reflected in the fact that altruists tend to enjoy higher levels of well-being as well as increased physical and mental health.
Dedicating oneself to a cause can act as a closely related source of meaning. In many cases the two overlap, as when altruism is the primary motivation behind the cause. But this is not always the case since the fascination with a cause may not be explicitly linked to the desire to benefit others. It consists in devoting oneself fully to producing something greater than oneself. A diverse set of causes can be followed this way, ranging from religious goals, political movements, and social institutions to scientific or philosophical ventures. Such causes provide meaning to one's life to the extent that one participates in the meaningfulness of the cause by working towards it and realizing it.
Creativity refers to the activity of creating something new and exciting. It can act as a source of meaning even if it is not obvious that the creation serves a specific purpose. This aspect is especially relevant in the field of art, where it is sometimes claimed that the work of art does not need an external justification since it is "its own excuse for being". It has been argued that for many great artists, their keener vision of the existential dilemma of the human condition was the cause of their creative efforts. These efforts in turn may have served them as a form of therapy. But creativity is not limited to art. It can be found and practiced in many different fields, both on a big and a small scale, such as in science, cooking, gardening, writing, regular work, or romantic relationships.
The hedonistic approach can also constitute a source of meaning. It is based on the idea that a life enjoyed to the fullest extent is meaningful even if it lacks any higher overarching purpose. For this perspective to be plausible, it matters that hedonism is not understood in a vulgar sense, i.e. as the pursuit of sensory pleasures characterized by a disregard of the long-term consequences. While such a lifestyle may be satisfying in certain respects, a more refined form of hedonism that includes other forms of pleasure and considers their long-term consequences is more commonly recommended in the academic literature. This wider sense also includes more subtle pleasures such as looking at fine art or engaging in a stimulating intellectual conversation. In this way, life can be meaningful to the individual if it is seen as a gift evoking a sense of astonishment at its miracle and a general appreciation of it.
According to the perspective of self-actualization, each human carries within themselves a potential of what they may become. The purpose of life then is to develop oneself to realize this potential and successfully doing so increases the individual's well-being and sense of meaningfulness. In this sense, just like an acorn has the potential to become an oak, so an infant has the potential to become a fully actualized adult with various virtues and skills based on their inborn talents. The process of self-actualization is sometimes understood in terms of a hierarchy: certain lower potentials have to be actualized before the actualization of higher potentials becomes possible.
Most of the approaches mentioned so far have clear practical implications in that they affect how the individual interacts with the world. The attitudinal approach, on the other hand, identifies different sources of meaning based only on taking the right attitude towards life. This concerns specifically negative situations in which one is faced with a fate that one cannot change. In existential crises, this often expresses itself in the feeling of helplessness. The idea is that in such situations one can still find meaning based on taking a virtuous or admirable attitude towards one's suffering, for example, by remaining courageous.
Whether a certain source of meaning is accessible differs from person to person. It may also depend on the stage in life one finds oneself in, similar to how different stages are often associated with different types of existential crises. It has been argued, for example, that the concern with oneself and one's own well-being found in self-actualization and hedonism tends to be associated more with earlier stages in life. The concern with others or the world at large found in altruism and the dedication to a cause, on the other hand, is more likely found in later stages in life, for example, when an older generation aims to pass on their knowledge and improve the lives of a younger generation.
Consequences, clinical manifestation, and measurement
Going through an existential crisis is associated with a variety of consequences, both for the affected individual and their social environment. On the personal level, the immediate effects are usually negative since experiencing an existential crisis is connected to stress, anxiety, and the formation of bad relationships. If existential crises are not resolved, this can lead all the way to depression. On the social level, they contribute to high divorce rates and to an increased number of people being unable to make significant positive contributions to society, for example, due to a lack of drive resulting from depression. But if resolved properly, they can also have positive effects by pushing the affected to address the underlying issue. Individuals may thereby find new sources of meaning, develop as persons, and improve their way of life. In the sophomore crisis, for example, this can happen by planning ahead and thereby making more conscious choices in how to lead one's life.
Being aware of the symptoms and consequences of existential crises on the personal level is important for psychotherapists so they can arrive at an accurate diagnosis. But this is not always easy since the symptoms usually differ from person to person. In this sense, the lack of meaning at the core of existential crises can express itself in several different ways. Some may, as a result, become overly adventurous and zealous. In their attempt to wrest themselves free from meaninglessness, they are desperate to indiscriminately dedicate themselves to any cause. They might do so without much concern for the concrete content of the cause or for their personal safety. It has been argued that this type of behavior is present in some hardcore activists. This may be understood as a form of defense mechanism in which the individual engages fanatically in activities in response to a deep sense of purposelessness. It can also express itself in a related but less dramatic way as compulsive activity. This may take various forms, such as workaholism or the obsessive pursuit of prestige or material acquisitions. This is sometimes referred to as false centering or inauthenticity since the activity is pursued more as a distraction and less because it is in itself fulfilling to the agent. It can provide a temporary alleviation by helping the individual drain their energy and thus distract them from the threat of meaninglessness.
Another response consists in an overt declaration of nihilism characterized by a pervasive tendency to discredit activities purported by others to have meaning. Such an individual may, for example, dismiss altruism out of hand as a disingenuous form of selfishness or see all leaders as motivated by their lust for power rather than inspired by a grand vision. In some more extreme forms of crisis, the individual's behavior may show severe forms of aimlessness and apathy, often accompanied by depression. Being unable to find good reasons for making an effort, such a person remains inactive for extended periods of time, such as staying in bed all day. If they engage in a behavior, they may do so indiscriminately without much concern for what they are doing.
Indirect factors for determining the severity of an existential crisis include job satisfaction and the quality of one's relationships. For example, physical violence or constant fighting in a relationship may be interpreted as external signs of a serious existential crisis. Various empirical studies have shown that a lack of a sense of meaning in life is associated with psychopathology. Having a positive sense of meaning, on the other hand, is associated with deeply held religious beliefs, having a clear life goal, and having dedicated oneself to a cause.
Measurement
Different suggestions have been made concerning how to measure whether someone has an existential crisis, to what degree it is present, and which approach to resolving it might be promising. These methods can help therapists and counselors to understand both whether their client is going through an existential crisis and, if so, how severe their crisis is. But they can also be used by theorists in order to identify how existential crises correlate with other phenomena, such as depression, gender, or poverty.
One way to assess this is through questionnaires focusing on topics like the meaning of life, such as the Purpose in Life Test and the Life Regard Index. The Purpose in Life Test is widely used and consists of 20 items rated on a seven-point scale, such as "In life I have: (1) no goals or aims at all ... (7) very clear goals and aims" or "With regard to death, I am (1) unprepared and frightened ... (7) prepared and unafraid".
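As a minimal illustration of how such an instrument is scored, the sketch below totals hypothetical responses to a 20-item, seven-point questionnaire in the style of the Purpose in Life Test; the ratings shown are invented for illustration, and no clinical cutoffs are assumed.

    # Hypothetical scoring sketch for a 20-item, seven-point questionnaire
    # modeled on the Purpose in Life Test; the responses are invented.
    responses = [5, 6, 4, 7, 5, 3, 6, 5, 4, 6, 5, 7, 4, 5, 6, 5, 4, 6, 5, 5]

    # Sanity check: exactly 20 items, each rated on the 1-7 scale.
    assert len(responses) == 20 and all(1 <= r <= 7 for r in responses)

    total = sum(responses)  # 20 items rated 1-7, so the total spans 20 to 140
    print(f"Purpose in Life total: {total} (possible range 20-140)")

Higher totals indicate a stronger self-reported sense of purpose, while lower totals suggest the kind of existential vacuum discussed above.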
Resolution
Since existential crises can have a crippling effect on people, it is important to find ways to resolve them. Different forms of resolution have been proposed. The right approach often depends on the type of crisis experienced. Many approaches emphasize the importance of developing a new stage of intellectual functioning in order to resolve the inner conflict. But others focus more on external changes. For example, crises related to one's sexual identity and one's level of independence may be resolved by finding a partner matching one's character and preferences. Positive indicators of marital success include having similar interests, engaging in common activities, and having a similar level of education. Crises centering around one's professional path may also be approached more externally by finding the right type of career. In this respect, important factors include that the career matches both one's interests and one's skills to avoid a job that is unfulfilling, lacks engagement, or is overwhelming.
But the more common approach aims at changing one's intellectual functioning and inner attitude. Existential psychotherapists, for example, usually try to resolve existential crises by helping the patient to rediscover meaning in their life. Sometimes this takes the form of finding a spiritual or religious purpose in life, such as dedicating oneself to an ideal or discovering God. Other approaches focus less on the idea of discovering meaning and more on the idea of creating meaning. This is based on the idea that meaning is not something independent of the agent out there but something that has to be created and maintained. However, there are also types of existentialist psychotherapy that accept the idea that the world is meaningless and try to develop the best way of coping with this fact. The different approaches to resolving the issue of meaninglessness are sometimes divided into a leap of faith, the reasoned approach, and nihilism. Another classification categorizes possible resolutions as isolation, anchoring, distraction, and sublimation. Methods from cognitive behavior therapy have also been used to treat existential crises by bringing about a change in the individual's intellectual functioning.
Leap of faith, reasoned approach, and nihilism
Since existential crises circle around the idea of being unable to find meaning in life, various resolutions focus on specifically this aspect. Sometimes three different forms of this approach are distinguished. On the one hand, the individual may perform a leap of faith and affirm a new system of meaning without a previous in-depth understanding of how secure it is as a source of meaning. Another method consists in carefully considering all the relevant factors and thereby rebuilding and justifying a new system of meaning. A third approach goes against these two by denying that there is actual meaning. It consists in accepting the meaninglessness of life and learning how to deal with it without the illusion of meaning.
A leap of faith implies committing oneself to something one does not fully understand. In the case of existential crises, the commitment involves the faith that life is meaningful even though the believer lacks a reasoned justification. This leap is motivated by the strong desire that life be meaningful and is triggered as a response to the threat posed to the fulfillment of this desire by the existential crisis. For those for whom this is psychologically possible, it may be the fastest way to bypass an existential crisis. This option may be more available to people oriented toward intuitive processing and less to people who favor a more rational approach, since it has less need for thorough reflection and introspection. It has been argued that the meaning acquired through a leap of faith may be more robust than in other cases. One reason for this is that since it is not based on empirical evidence for it, it is also less vulnerable to empirical evidence against it. Another reason concerns the flexibility of intuition to selectively disregard threatening information and to focus instead on validating cues.
More rationally inclined persons tend to focus more on a careful evaluation of the sources of meaning based on solid justification through empirical evidence. If successful, this approach has the advantage of providing the individual with a concrete and realistic understanding of how their life is meaningful. It can also constitute a very robust source of meaning if it is based on solid empirical evidence and thorough understanding. The system of meaning arrived at may be very idiosyncratic by being based on the individual's values, preferences, and experiences. On a practical level, it often leads to a more efficient realization of this meaning since the individual can focus more exclusively on this factor. If someone determines that family life is their main source of meaning, for example, they may focus more intensely on this aspect and take a less involved stance towards other areas in life, such as success at work. In comparison to the leap of faith, this approach offers more room for personal growth due to the cognitive labor in the form of reflection and introspection involved in it and the self-knowledge resulting from this process. One of the drawbacks of this approach is that it can take a considerable amount of time to complete and rid oneself of the negative psychological consequences. If successful, the foundations arrived at this way may provide a solid basis to withstand future existential crises. But success is not certain and even after a prolonged search, the individual might still be unable to identify a significant source of meaning in their life.
If the search for meaning fails by either route, there is still another approach to resolving the issue of meaninglessness in existential crises: to find a way to accept that life is meaningless. This position is usually referred to as nihilism. One can distinguish a local and a global version of this approach, depending on whether the denial of meaningfulness is only directed at a certain area of life or at life as a whole. It becomes necessary if the individual arrives at the justifiable conclusion that life is, after all, meaningless. This conclusion may be intolerable initially, since humans seem to have a strong desire to lead a meaningful life, sometimes referred to as the will to meaning. Some theorists, such as Viktor Frankl, see this desire even as the primary motivation of all individuals. One difficulty with this negative stance towards meaning is that it seems to provide very little practical guidance on how to live one's life. So even if individuals have resolved their existential crises this way, they may still lack an answer to the question of what they should do with their life. Positive aspects of this stance include that it can lead to a heightened sense of freedom by being unbound from any predetermined purpose. It also exemplifies the virtue of truthfulness by being able to acknowledge an inconvenient truth instead of escaping into the convenient illusion of meaningfulness.
Isolation, anchoring, distraction, and sublimation
According to Peter Wessel Zapffe, life is essentially meaningless but this does not mean that we are automatically doomed to unresolvable existential crises. Instead, he identifies four ways of dealing with this fact without falling into an existential depression: isolation, anchoring, distraction, and sublimation. Isolation involves a dismissal of destructive thoughts and feelings from consciousness. Physicians and medical students, for example, may adopt a detached and technical stance in order to better deal with the tragic and disgusting aspects of their vocation. Anchoring involves a dedication to certain values and practical commitments that give the individual a sense of assurance. This often happens collectively, for example, through devotion to a common religion, but it can also happen individually. Distraction is a more temporary form of withdrawing one's attention from the meaninglessness of certain life situations that do not provide any significant contributions to the construction of our self. Sublimation is the rarest of these mechanisms. Its essential characteristic setting it apart from the other mechanisms is that it uses the pain of living and transforms it into a work of art or another creative expression.
Cognitive behavioral therapy and social perspective-taking
Some approaches from the field of cognitive behavioral therapy adjust and employ treatments for depression to resolve existential crises. One fundamental idea in cognitive behavioral therapy is that various psychological problems arise due to inaccurate core beliefs about oneself, such as beliefs that one is worthless, helpless, or incompetent. These problematic core beliefs may lie dormant for extended periods. But when activated by certain life events, they may express themselves in the form of recurrent negative and damaging thoughts. This can lead, among other things, to depression. Cognitive behavioral therapy then consists in raising the awareness of the affected person in regard to these toxic thought patterns and the underlying core beliefs while training them to change. This can happen by focusing on one's immediate present, being goal-oriented, role-playing, or behavioral experiments.
A closely related method employs the practice of social perspective-taking. Social perspective-taking involves the ability to assess one's situation and character from the point of view of a different individual. This enables individuals to step outside their own immediate perspective while taking into consideration how others see them, and thus to reach a more comprehensive perspective.
Unresolved crises
Existential crises sometimes pass even if the underlying issue is not resolved. This may happen, for example, if the issue is pushed into the background by other concerns and thus remains present only in a masked or dormant state. But even in this state, it may have unconscious effects on how people lead their life, like career choices. It can also increase the likelihood of suffering another existential crisis later on in life and might make resolving these later crises more difficult. It has been argued that many existential crises in contemporary society are not resolved. The reason for this may be a lack of clear awareness of the nature, importance, and possible treatments of existential crises.
Cultural context
In the 19th century, Thomas Carlyle wrote of how the loss of faith in God results in an existential crisis which he called the "Centre of Indifference", wherein the world appears cold and unfeeling and the individual considers himself to be without worth. Søren Kierkegaard considered that angst and existential despair would appear when an inherited or borrowed world-view (often of a collective nature) proved unable to handle unexpected and extreme life-experiences. Friedrich Nietzsche extended these views to suggest that the death of God—the loss of collective faith in religion and traditional morality—created a more widespread existential crisis for the philosophically aware.
Existential crisis has indeed been seen as the inevitable accompaniment of modernism (1890–1945). Whereas Émile Durkheim saw individual crises as the by-product of social pathology and a (partial) lack of collective norms, others have seen existentialism as arising more broadly from the modernist crisis of the loss of meaning throughout the modern world (M. Hardt and K. Weeks, The Jameson Reader, 2000, p. 265). Its twin answers were either a religion revivified by the experience of anomie (as with Martin Buber), or an individualistic existentialism based on facing directly the absurd contingency of human fate within a meaningless and alien universe, as with Sartre and Camus.
Irvin Yalom, an emeritus professor of psychiatry at Stanford University, has made fundamental contributions to the field of existential psychotherapy. Rollo May is another of the founders of this approach.
Fredric Jameson has suggested that postmodernism, with its saturation of social space by a visual consumer culture, has replaced the modernist angst of the traditional subject, and with it the existential crisis of old, by a new social pathology of flattened affect and a fragmented subject.
Historical context
Existential crises are often seen as a phenomenon associated specifically with modern society. One important factor in this context is that various sources of meaning, such as religion or being grounded in one's local culture and immediate social environment, are less important in the contemporary context.
Another factor in modern society is that individuals are faced with a daunting number of decisions to make and alternatives to choose from, often without any clear guidelines on how to make these choices. The difficulty of finding the best alternative and the importance of doing so are often a cause of anxiety and may lead to an existential crisis. For example, it was very common for a long time in history for a son to simply follow his father's profession. In contrast to this, the modern schooling system presents students with different areas of study and interest, thereby opening a wide range of career opportunities to them. The problem brought about by this increased freedom is sometimes referred to as the agony of choice. This increased difficulty is discussed by Barry Schwartz under the heading of the paradox of choice: the costs, time, and energy needed to make a well-informed choice grow with the number of alternatives available.
See also
Absurdism
Why there is anything at all
Antinatalism
"Dark Night of the Soul"
Depersonalization
Duḥkha
Ego death
Limit situation
Scholarly approaches to mysticism
Positive disintegration
The Sickness unto Death
Spiritual crisis
Further reading
J. Watson, Caring Science as Sacred Science 2005. Chapter 4: "Existential Crisis in Science and Human Sciences".
T.M. Cousineau, A. Seibring, M.T. Barnard, P-673 Making meaning of infertility: Existential crisis or personal transformation? Fertility and Sterility, 2006.
Sanders, Marc, Existential Depression: How to Recognize and Cure Life-Related Sadness in Gifted People, 2013.
External links
Alan Watts on meaningless life, and its resolution
Moral absolutism | Moral absolutism, commonly known as black-and-white morality, is an ethical view that most, if not all actions are intrinsically right or wrong, regardless of context or consequence.
Comparison with other ethical theories
Moral absolutism is not the same as moral universalism. Universalism holds merely that what is right or wrong is independent of custom or opinion (as opposed to moral relativism), but not necessarily that what is right or wrong is independent of context or consequences (as in absolutism). Louis Pojman gives the following definitions to distinguish the two positions of moral absolutism and objectivism:
Moral absolutism: There is at least one principle that ought never to be violated.
Moral objectivism: There is a fact of the matter as to whether any given action is morally permissible or impermissible: a fact of the matter that does not depend solely on social custom or individual acceptance.
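The contrast between the two definitions can be made sharper with a schematic first-order rendering; this formalization is an illustrative gloss, not Pojman's own wording. Absolutism claims that some principle holds without exception:

    \exists p \; \forall c \;\; \mathrm{Wrong}\big(\mathrm{violate}(p),\, c\big)

i.e. there is at least one principle p such that violating p counts as wrong in every context c, whatever the consequences. Objectivism claims only that for every action and context there is a determinate fact about permissibility, and that this fact does not depend solely on social custom or individual acceptance; it leaves open that the verdict may vary with the context. On these definitions, absolutism can be read as a stronger, exceptionless special case within the objectivist picture.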
Ethical theories which place strong emphasis on rights and duty, such as the deontological ethics of Immanuel Kant, are often forms of moral absolutism, as are many religious moral codes.
Religion
One can adhere to moral absolutism in a strictly secular context, exemplified by the many variations of deontological moral rationalism. However, many religions, especially ones which define divine commandments, also adhere to moral absolutist positions. Therefore, to followers of such religions, the moral system is absolute, perfect and unchanging. Some secular philosophies, borrowing from religion, also take a morally absolutist position, asserting that the absolute laws of morality are inherent in the nature of people, the nature of life in general, or the Universe itself. For example, someone who absolutely believes in non-violence considers it wrong to use violence even in self-defense.
Catholic philosopher Thomas Aquinas never explicitly addresses the Euthyphro dilemma, but draws a distinction between what is good or evil in itself and what is good or evil because of God's commands, with unchangeable moral standards forming the bulk of natural law. Thus he contends that not even God can change the Ten Commandments, adding, however, that God can change what individuals deserve in particular cases, in what might look like special dispensations to murder or steal.
Formal science | Formal science is a branch of science studying disciplines concerned with abstract structures described by formal systems, such as logic, mathematics, statistics, theoretical computer science, artificial intelligence, information theory, game theory, systems theory, decision theory and theoretical linguistics. Whereas the natural sciences and social sciences seek to characterize physical systems and social systems, respectively, using empirical methods, the formal sciences use language tools concerned with characterizing abstract structures described by formal systems. The formal sciences aid the natural and social sciences by providing information about the structures used to describe the physical world, and what inferences may be made about them.
Branches
Logic (also a branch of philosophy)
Mathematics
Statistics
Systems science
Data science
Information science
Computer science
Cryptography
Differences from other sciences
Because of their non-empirical nature, formal sciences are construed by outlining a set of axioms and definitions from which other statements (theorems) are deduced. For this reason, in Rudolf Carnap's logical-positivist conception of the epistemology of science, theories belonging to formal sciences are understood to contain no synthetic statements, instead containing only analytic statements.
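As a minimal sketch of this deductive character, the following Lean snippet treats two hypotheses as the "axioms" of a toy formal system and deduces a theorem from them by inference alone, with no empirical input; the proposition names are arbitrary placeholders.

    -- p and q are arbitrary propositions of a toy formal system.
    -- From the "axioms" hp : p and hpq : p → q, the theorem q is
    -- deduced purely analytically, with no empirical observation.
    theorem derived (p q : Prop) (hp : p) (hpq : p → q) : q :=
      hpq hp

The proof term hpq hp is a single application of modus ponens; in Carnap's terms, the resulting statement is analytic rather than synthetic.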
See also
Philosophy
Science
Rationalism
Abstract structure
Abstraction in mathematics
Abstraction in computer science
Cognitive science
Formalism (philosophy of mathematics)
Formal grammar
Formal language
Formal method
Formal system
Form and content
Mathematical model
Mathematical sciences
Mathematics Subject Classification
Semiotics
Theory of forms
Further reading
Mario Bunge (1985). Philosophy of Science and Technology. Springer.
Mario Bunge (1998). Philosophy of Science. Rev. ed. of: Scientific research. Berlin, New York: Springer-Verlag, 1967.
C. West Churchman (1940). Elements of Logic and Formal Science, J.B. Lippincott Co., New York.
James Franklin (1994). The formal sciences discover the philosophers' stone. In: Studies in History and Philosophy of Science. Vol. 25, No. 4, pp. 513–533.
Stephen Leacock (1906). Elements of Political Science. Houghton, Mifflin Co, 417 pp.
Bernt P. Stigum (1990). Toward a Formal Science of Economics. MIT Press
Marcus Tomalin (2006), Linguistics and the Formal Sciences. Cambridge University Press
William L. Twining (1997). Law in Context: Enlarging a Discipline. 365 pp.
External links
Interdisciplinary conferences — Foundations of the Formal Sciences
Pramana | Pramana (; IAST: Pramāṇa) literally means "proof" and "means of knowledge". In Indian philosophies, pramana are the means which can lead to knowledge, and serve as one of the core concepts in Indian epistemology. It has been one of the key, much debated fields of study in Hinduism, Buddhism and Jainism since ancient times. It is a theory of knowledge, and encompasses one or more reliable and valid means by which human beings gain accurate, true knowledge. The focus of pramana is how correct knowledge can be acquired, how one knows, how one does not know, and to what extent knowledge pertinent about someone or something can be acquired.
While the number of pramanas varies widely from system to system, many ancient and medieval Indian texts identify six pramanas as correct means of accurate knowledge and to truths: Three central pramanas which are almost universally accepted are perception (Sanskrit: pratyakṣa), inference (anumāna), and "word", meaning the testimony of past or present reliable experts (Śabda); and more contentious ones, which are comparison and analogy (upamāna), postulation, derivation from circumstances (arthāpatti), and non-perception, negative/cognitive proof (anupalabdhi). Each of these are further categorized in terms of conditionality, completeness, confidence and possibility of error, by each school of Indian philosophies.
The various schools of Indian philosophies vary on how many of these six pramanas are epistemically reliable and valid means to knowledge. For example, the Carvaka school of the Śramaṇa tradition holds that only one (perception) is a reliable source of knowledge, Buddhism holds two (perception, inference) are valid means, Jainism holds three (perception, inference and testimony), while Mimamsa and Advaita Vedanta schools of Hinduism hold that all six pramanas are useful and can be reliable means to knowledge. The various schools of Indian philosophy have debated whether one of the six forms of pramana can be derived from another and the relative uniqueness of each. For example, Buddhism considers Buddha and other "valid persons", "valid scriptures" and "valid minds" as indisputable, but that such testimony is a form of perception and inference pramanas.
The science and study of pramanas is called Nyaya.
Etymology
Pramāṇa literally means "proof" and is also a concept and field of Indian philosophy. The concept is derived from the Sanskrit roots, pra (प्र), a preposition meaning "outward" or "forth", and mā (मा) which means "measurement". Pramā means "correct notion, true knowledge, basis, foundation, understand", with pramāṇa being a further nominalization of the word. Thus, the concept Pramāṇa implies that which is a "means of acquiring prama or certain, correct, true knowledge".
Pramāṇa forms one part of a trio of concepts, which describe the ancient Indian view on how knowledge is gained. The other two concepts are knower and knowable, each discussed in how they influence the knowledge, by their own characteristic and the process of knowing. The two are called Pramātŗ (प्रमातृ, the subject, the knower) and Prameya (प्रमेय, the object, the knowable).
The term Pramana is commonly found in various schools of Hinduism. In Buddhist literature, Pramana is referred to as Pramāṇavāda. Pramana is also related to the Indian concept of Yukti (युक्ति), which means the active application of epistemology or of what one already knows: innovation, clever expedients or connections, methodological or reasoning tricks, joining together, application of contrivance, means, method, novelty or device to more efficiently achieve a purpose. Yukti and Pramana are discussed together in some Indian texts, with Yukti described as the active process of gaining knowledge in contrast to the passive process of gaining knowledge through observation or perception. The texts on Pramana, particularly by the Samkhya, Yoga, Mimamsa and Advaita Vedanta schools of Hinduism, include in their meaning and scope "theories of errors". These texts explore why human beings make errors and reach incorrect knowledge, how one can know if one is wrong, and, if so, how one can discover whether one's epistemic method was flawed or one's conclusion (truth) was flawed, in order to revise oneself and reach correct knowledge.
Hinduism
Six pramanas
Hinduism identifies six pramanas as correct means of accurate knowledge and to truths: Pratyakṣa (evidence/ perception), Anumāna (inference), Upamāna (comparison and analogy), Arthāpatti (postulation, derivation from circumstances), Anupalabdhi (non-perception, negative/cognitive proof) and Śabda (word, testimony of past or present reliable experts).
In verse 1.2.1 of the Taittirīya Āraṇyaka (c. 9th–6th centuries BCE), "four means of attaining correct knowledge" are listed: smṛti ("scripture, tradition"), pratyakṣa ("perception"), aitihya ("expert testimony, historical tradition"), and anumāna ("inference").
In some texts, such as those by Vedvyasa, ten pramanas are discussed, while Krtakoti discusses eight epistemically reliable means to correct knowledge. The most widely discussed pramanas are:
Pratyakṣa
Pratyakṣa (प्रत्यक्ष) means perception. It is of two types in Hindu texts: external and internal. External perception is described as that arising from the interaction of five senses and worldly objects, while internal perception is described by this school as that of inner sense, the mind. According to Matt Stefan, the distinction is between direct perception (anubhava) and remembered perception (smriti).
The ancient and medieval Indian texts identify four requirements for correct perception:
Indriyarthasannikarsa (direct experience by one's sensory organ(s) with the object, whatever is being studied);
Avyapadesya (non-verbal; correct perception is not through hearsay, according to ancient Indian scholars, where one's sensory organ relies on accepting or rejecting someone else's perception);
Avyabhicara (does not wander; correct perception does not change, nor is it the result of deception because one's sensory organ or means of observation is drifting, defective, suspect);
Vyavasayatmaka (definite; correct perception excludes judgments of doubt, either because of one's failure to observe all the details, or because one is mixing inference with observation and observing what one wants to observe, or not observing what one does not want to observe).
Some ancient scholars proposed "unusual perception" as pramana and called it internal perception, a proposal contested by other Indian scholars. The internal perception concepts included pratibha (intuition), samanyalaksanapratyaksa (a form of induction from perceived specifics to a universal), and jnanalaksanapratyaksa (a form of perception of prior processes and previous states of a 'topic of study' by observing its current state). Further, some schools of Hinduism considered and refined rules of accepting uncertain knowledge from Pratyakṣa-pranama, so as to contrast nirnaya (definite judgment, conclusion) from anadhyavasaya (indefinite judgment).
Anumāna
Anumāna (अनुमान) means 'inference' in Sanskrit, though it is often used to mean 'guess' in modern Indian languages. In the context of classical philosophy, it is described as reaching a new conclusion and truth from one or more observations and previous truths by applying reason. Observing smoke and inferring fire is an example of Anumana. In all but one school of Hindu philosophy, this is a valid and useful means to knowledge. The method of inference is explained by Indian texts as consisting of three parts: pratijna (hypothesis), hetu (a reason), and drshtanta (examples). The hypothesis must further be broken down into two parts, state the ancient Indian scholars: sadhya (the idea which needs to be proven or disproven) and paksha (the object on which the sadhya is predicated). The inference is conditionally true if sapaksha (positive examples as evidence) are present, and if vipaksha (negative examples as counter-evidence) are absent. For rigor, the Indian philosophies also state further epistemic steps. For example, they demand Vyapti—the requirement that the hetu (reason) must necessarily and separately account for the inference in "all" cases, in both sapaksha and vipaksha. A conditionally proven hypothesis is called a nigamana (conclusion).
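As a rough sketch only, the checks just described can be modeled in a few lines of code for the stock smoke-and-fire example: vyapti is tested against the known positive and negative cases (sapaksha and vipaksha), and the conclusion (nigamana) is drawn only if the hetu is also observed in the paksha. The case data and names below are illustrative inventions, not drawn from any classical text beyond the traditional example.

    # Hypothetical sketch of the anumana schema; data invented for illustration.
    def vyapti_holds(cases, hetu, sadhya):
        """Invariable concomitance: wherever the hetu (reason) is observed,
        the sadhya (property to be proven) must also be observed."""
        return all(sadhya in props for props in cases.values() if hetu in props)

    # Known cases: a sapaksha (positive example) and a vipaksha (counter-example).
    cases = {
        "kitchen": {"smoke", "fire"},  # sapaksha: smoke occurs with fire
        "lake": {"mist"},              # vipaksha: neither smoke nor fire
    }

    paksha = {"smoke"}  # the hill under discussion, where smoke is observed

    if vyapti_holds(cases, "smoke", "fire") and "smoke" in paksha:
        print("nigamana: the hill has fire")  # conditionally true conclusion

Any recorded case exhibiting smoke without fire would falsify the vyapti and block the conclusion, mirroring the role of vipaksha as counter-evidence.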
Upamāna
Upamāna (उपमान) means comparison and analogy. Some Hindu schools consider it as a proper means of knowledge. Upamana, states Lochtefeld, may be explained with the example of a traveller who has never visited lands or islands with endemic population of wildlife. He or she is told, by someone who has been there, that in those lands you see an animal that sort of looks like a cow, grazes like cow but is different from a cow in such and such way. Such use of analogy and comparison is, state the Indian epistemologists, a valid means of conditional knowledge, as it helps the traveller identify the new animal later. The subject of comparison is formally called upameyam, the object of comparison is called upamanam, while the attribute(s) are identified as samanya. Thus, explains Monier Williams, if a boy says "her face is like the moon in charmingness", "her face" is upameyam, the moon is upamanam, and charmingness is samanya. The 7th-century text Bhaṭṭikāvya in verses 10.28 through 10.63 discusses many types of comparisons and analogies, identifying when this epistemic method is more useful and reliable, and when it is not. In various ancient and medieval texts of Hinduism, 32 types of Upamāna and their value in epistemology are debated.
Arthāpatti
Arthāpatti (अर्थापत्ति) means postulation, derivation from circumstances. In contemporary logic, this pramana is similar to circumstantial implication. As example, if a person left in a boat on river earlier, and the time is now past the expected time of arrival, then the circumstances support the truth postulate that the person has arrived. Many Indian scholars considered this pramana as invalid or at best weak, because the boat may have gotten delayed or diverted. However, in cases such as deriving the time of a future sunrise or sunset, this method was asserted by the proponents to be reliable. Another common example for arthapatti in ancient Hindu texts is, that if "Devadatta is fat" and "Devadatta does not eat in day", then the following must be true: "Devadatta eats in the night". This form of postulation and deriving from circumstances is, claim the Indian scholars, a means to discovery, proper insight and knowledge. The Hindu schools that accept this means of knowledge state that this method is a valid means to conditional knowledge and truths about a subject and object in original premises or different premises. The schools that do not accept this method, state that postulation, extrapolation and circumstantial implication is either derivable from other pramanas or flawed means to correct knowledge, instead one must rely on direct perception or proper inference.
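The Devadatta example can be written out as a circumstantial implication in modern notation; the rendering below is an illustrative gloss, with the background premise (that a fat person must eat at some time) made explicit as the assumption that arthapatti is taken to supply. Letting F stand for "Devadatta is fat", D for "Devadatta eats by day", and N for "Devadatta eats at night":

    F,\; \lnot D,\; \big(F \rightarrow (D \lor N)\big) \;\vdash\; N

From F and the background premise one obtains D or N, and since D is denied, the disjunctive syllogism yields N. The schools that reject arthapatti as a separate pramana argue, in effect, that reasoning of exactly this shape is already covered by anumana (inference).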
Anupalabdhi
Anupalabdhi (अनुपलब्धि) means non-perception, negative/cognitive proof. Anupalabdhi pramana suggests that knowing a negative, such as "there is no jug in this room", is a form of valid knowledge. If something can be observed, inferred or proven as non-existent or impossible, then one knows more than one did without such means. In the two schools of Hinduism that consider Anupalabdhi epistemically valuable, a valid conclusion is either a sadrupa (positive) or asadrupa (negative) relation—both correct and valuable. Like the other pramanas, Indian scholars refined Anupalabdhi into four types: non-perception of the cause, non-perception of the effect, non-perception of the object, and non-perception of contradiction. Only two schools of Hinduism accepted and developed the concept of "non-perception" as a pramana. The schools that endorsed Anupalabdhi affirmed it as valid and useful when the other five pramanas fail in one's pursuit of knowledge and truth.
Abhava (अभाव) means non-existence. Some scholars consider Anupalabdhi to be the same as Abhava, while others consider them different. Abhava-pramana has been discussed in ancient Hindu texts in the context of Padārtha (पदार्थ, referent of a term). A Padartha is defined as that which is simultaneously Astitva (existent), Jneyatva (knowable) and Abhidheyatva (nameable). Specific examples of padartha, states Bartley, include dravya (substance), guna (quality), karma (activity/motion), samanya/jati (universal/class property), samavaya (inherence) and vishesha (individuality). Abhava is then explained as "referents of negative expression" in contrast to "referents of positive expression" in Padartha. An absence, state the ancient scholars, is also "existent, knowable and nameable"; they give the examples of negative numbers, silence as a form of testimony, the asatkaryavada theory of causation, and the analysis of deficit as real and valuable. Abhava was further refined into four types by the schools of Hinduism that accepted it as a useful method of epistemology: dhvamsa (termination of what existed), atyanta-abhava (impossibility, absolute non-existence, contradiction), anyonya-abhava (mutual negation, reciprocal absence) and pragabhava (prior, antecedent non-existence).
Śabda
Śabda (शब्द) means relying on the word, the testimony of past or present reliable experts, specifically the shruti, the Vedas. Hiriyanna explains Sabda-pramana as a concept which means reliable expert testimony. The schools of Hinduism which consider it epistemically valid suggest that a human being needs to know numerous facts, and with the limited time and energy available, he can learn only a fraction of those facts and truths directly. He must rely on others, such as parents, family, friends, teachers, ancestors and kindred members of society, to rapidly acquire and share knowledge and thereby enrich each other's lives. This means of gaining proper knowledge is either spoken or written, but always through Sabda (words). The reliability of the source is important, and legitimate knowledge can only come from the Sabda of reliable sources. The disagreement between the schools of Hinduism has been over how to establish reliability. Some schools, such as Carvaka, state that this is never possible, and therefore Sabda is not a proper pramana. Other schools debate means to establish reliability.
Acceptance per school
Different schools of Hindu philosophy accept one or more of these pramanas as valid epistemology.
Carvaka school
The Carvaka school accepted only one valid source of knowledge—perception. It held all other methods to be either outright invalid, or prone to error and therefore invalid as well.
Vaisheshika school
Epistemologically, the Vaiśeṣika school considered the following as the only proper means of knowledge:
Perception (Pratyakṣa)
Inference (Anumāna)
Sankhya, Yoga, Vishishtadvaita Vedanta, and Dvaita Vedanta schools
According to the Sankhya, Yoga, and two sub-schools of Vedanta, the proper means of knowledge must rely on these three pramanas:
Pratyakṣa — perception
Anumāna — inference
Śabda — testimony/word of reliable experts
These are enumerated in sutra I.7 of the Yoga Sutras. In sutra I.6, Pramana itself is distinguished as one of five classes of vritti (mental modifications), the others being indiscrimination, verbal delusion, sleep, and memory.
Nyaya school
The Nyāya school accepts four means of obtaining knowledge (pramāṇa), viz., Perception, Inference, Comparison and Word.
Perception, called Pratyakṣa, occupies the foremost position in the Nyaya epistemology. Perception is defined by sense-object contact and is unerring. Perception can be of two types—ordinary or extraordinary. Ordinary (Laukika or Sādhārana) perception is of six types, viz., visual-by eyes, olfactory-by nose, auditory-by ears, tactile-by skin, gustatory-by tongue and mental-by mind. Extraordinary (Alaukika or Asādhārana) perception is of three types, viz., Sāmānyalakṣana (perceiving generality from a particular object), Jñānalakṣana (when one sense organ can also perceive qualities not attributable to it, as when seeing a chilli, one knows that it would be bitter or hot), and Yogaja (when certain human beings, from the power of Yoga, can perceive past, present and future and may have complete or partial supernatural abilities). Also, there are two modes or steps in perception, viz., Nirvikalpa, when one just perceives an object without being able to know its features, and Savikalpa, when one is able to clearly know an object. All laukika and alaukika pratyakshas are savikalpa. There is yet another stage called Pratyabhijñā, when one is able to re-recognise something on the basis of memory.
Inference, called Anumāna, is one of the most important contributions of Nyaya. It can be of two types – inference for oneself (Svārthānumāna, where one does not need any formal procedure, and at most the last three of its five steps), and inference for others (Parārthānumāna, which requires a systematic methodology of five steps). Inference can also be classified into three types: Pūrvavat (inferring an unperceived effect from a perceived cause), Śeṣavat (inferring an unperceived cause from a perceived effect) and Sāmānyatodṛṣṭa (when inference is not based on causation but on uniformity of co-existence). A detailed analysis of error is also given, explaining when anumāna could be false.
Comparison, called Upamāna, is produced by the knowledge of resemblance or similarity, given some prior description of the new object.
Word, or Śabda, is also accepted as a pramāṇa. It can be of two types: Vaidika (Vedic), which are the words of the four sacred Vedas, or, more broadly, knowledge from sources acknowledged as authoritative; and Laukika, the words and writings of trustworthy human beings.
Prabhakara Mimamsa school
The Mimamsa school of Hinduism linked to Prabhakara considered the following pramanas as proper:
Pratyakṣa (perception)
Anumāṇa (inference)
Śabda (word, testimony)
Upamāṇa (comparison, analogy)
Arthapatti (postulation, presumption)
Advaita Vedanta and Bhatta Mimamsa schools
In Advaita Vedānta, and Mimamsa school linked to Kumārila Bhaṭṭa, the following pramanas are accepted:
Śabda (word, testimony)
Pratyakṣa (perception)
Anumāṇa (inference)
Upamāṇa (comparison, analogy)
Arthāpatti (postulation, presumption)
Anupalabdhi, Abhava (non-perception, cognitive proof using non-existence)
Buddhism
Padmākara Translation Group (2005: p. 390) annotates that:
Strictly speaking, pramana (tshad ma) means "valid cognition." In (Buddhism) practice, it refers to the tradition, principally associated with Dignāga and Dharmakīrti, of logic (rtags rigs) and epistemology (blo rigs).
Buddhism accepts only two pramana (tshad ma) as valid means to knowledge: Pratyaksha (mngon sum tshad ma, perception) and Anumāṇa (rjes dpag tshad ma, inference). Rinbochay adds that Buddhism also considers scriptures as a third valid pramana, such as from the Buddha and other "valid minds" and "valid persons". This third source of valid knowledge is a form of perception and inference in Buddhist thought. Valid scriptures, valid minds and valid persons are considered in Buddhism as Avisamvadin (mi slu ba, incontrovertible, indisputable). Means of cognition and knowledge other than perception and inference are considered invalid in Buddhism.
In Buddhism, the two most important scholars of pramāṇa are Dignāga and Dharmakīrti.
Sautrantrika
Dignāga and Dharmakīrti are usually categorized as expounding the view of the Sautrāntika tenets, though one can make a distinction between the Sautrāntikas Following Scripture and the Sautrāntikas Following Reason, and both these masters are described as establishing the latter. Dignāga's main text on this topic is the Pramāṇa-samuccaya, which played a crucial role in shaping the discipline of epistemology (pramāṇaśāstra), blending it with logical discourse. Dharmakīrti, influenced by Dignāga, further developed these ideas in his Pramāṇavārttika.
These two rejected the complex Abhidharma-based descriptions by which the Vaibhāṣika school and the Sautrāntika Following Scripture approach connected an external world with mental objects. They posited instead that the mental domain never connects directly with the external world but only perceives an aspect based upon the sense organs and the sense consciousnesses. Further, the sense consciousnesses assume the form of the aspect (Sanskrit: Sākāravāda) of the external object, and what is perceived is actually the sense consciousness which has taken on the form of the external object. Starting with aspects made a logical argument about the external world, as discussed by the Hindu schools, possible; otherwise their views would have been too different for a debate to begin. A logical discussion could then follow.
This approach attempts to explain how the material world connects with the mental world, but does not explain it completely. When pushed on this point, Dharmakīrti drops the presupposition of the Sautrāntika position and shifts to a kind of Yogācāra position, holding that extramental objects never really occur but arise from the habitual tendencies of mind. Thus he begins a debate with the Hindu schools by positing external objects, only to migrate the discussion later to how that position is logically untenable.
There are two later, differing interpretations of Dharmakīrti's approach in Tibet, owing to differing translations and commentarial traditions. One, held by the Gelug school, leans toward a moderate realism with some accommodation of universals; the other, held by the remaining schools, maintains that Dharmakīrti was distinctly antirealist.
Apoha
A key feature of Dignāga's logic is in how he treats generalities versus specific objects of knowledge. The Nyāya Hindu school made assertions about the existence of general principles, and in refutation Dignāga asserted that generalities were mere mental features and not truly existent. To do this he introduced the idea of Apoha: the way the mind recognizes is by comparing and negating known objects from the perception. In that way, the general ideas or categories of objects have to do with differences from known objects, not with identification with universal truths. So one knows that a perceived chariot is a chariot not because it is in accord with a universal form of a chariot, but because it is perceived as different from things that are not chariots. This approach became an essential feature of Buddhist epistemology.
Madhyamaka
Bhāvaviveka, a contemporary of Dignāga who preceded Dharmakīrti, incorporated a logical approach into his commentary on Nāgārjuna. He too started with a Sautrāntika approach when discussing how appearances appear, in order to debate with realists, but then took a Middle Way view of the ultimate nature of phenomena. He used logical assertions and arguments about the nature of that ultimate.
His incorporation of logic into the Middle Way system was later critiqued by Candrakīrti, who felt that the ultimate way of abiding, being beyond thought and concept, was not the domain of logic. Candrakīrti used simple logical consequence arguments to refute the views of other tenet systems, but generally thought a more developed use of logic and epistemology in describing the Middle Way was problematic. Bhāvaviveka's use of autonomous logical arguments was later described as the Svātantrika approach.
In Tibet
Modern Buddhist schools employ the 'three spheres' (Sanskrit: trimaṇḍala; Tibetan: 'khor gsum):
subject
object
action
When Madhyamaka first migrated to Tibet, Śāntarakṣita established a view of Madhyamaka more consistent with Bhāvaviveka while further evolving logical assertions as a way of contemplating and developing one's viewpoint of the ultimate truth.
In the 14th century, Je Tsongkhapa presented a new commentary on and approach to Madhyamaka, which became the normative form in Tibet. In this variant, the Madhyamaka approach of Candrakīrti was elevated over Bhāvaviveka's; yet Tsongkhapa rejected Candrakīrti's disdain for logic and instead incorporated logic further.
The exact role of logic in Tibetan Buddhist practice and study may still be a topic of debate, but it is definitely established in the tradition. Ju Mipham addressed the question in his 19th-century commentary on Śāntarakṣita's Madhyamakālaṅkāra.
See also
Hindu philosophy
Śāstra pramāṇam in Hinduism
Nyaya
Buddhist logic
Epistemology
Metaphysics
Notes
References
Sources
Bibliography
Śāntarakṣita (author); Mipham (commentator); Padmākara Translation Group (translators) (2005). The Adornment of the Middle Way: Shantarakshita's Madhyamakalankara with commentary by Jamgön Mipham. Boston, Massachusetts, US: Shambhala Publications, Inc.
External links
Pramāṇamīmāṃsā: Devanagari, A SARIT Initiative, German Research Foundation
Pramāṇavārttika Pariśiṣṭa 1: Devanagari, A SARIT Initiative, German Research Foundation
Pramāṇavārttika: Devanagari, A SARIT Initiative, German Research Foundation
Pramāṇavārttikasvavṛttiṭīkā: Devanagari, A SARIT Initiative, German Research Foundation
Pramāṇavārttikālaṅkāra: Devanagari, A SARIT Initiative, German Research Foundation
Pramāṇāntarbhāva: Devanagari, A SARIT Initiative, German Research Foundation
Vidyabhusana, Satis Chandra (1907). History of the Mediaeval School of Indian Logic. Calcutta University.
Sources of knowledge
Concepts in epistemology
Hindu philosophical concepts
Buddhist logic
Epistemology literature
Reductionism
Reductionism is any of several related philosophical ideas regarding the associations between phenomena which can be described in terms of simpler or more fundamental phenomena. It is also described as an intellectual and philosophical position that interprets a complex system as the sum of its parts.
Definitions
The Oxford Companion to Philosophy suggests that reductionism is "one of the most used and abused terms in the philosophical lexicon" and suggests a three-part division:
Ontological reductionism: a belief that the whole of reality consists of a minimal number of parts.
Methodological reductionism: the scientific attempt to provide an explanation in terms of ever-smaller entities.
Theory reductionism: the suggestion that a newer theory does not replace or absorb an older one, but reduces it to more basic terms. Theory reduction itself is divisible into three parts: translation, derivation, and explanation.
Reductionism can be applied to any phenomenon, including objects, problems, explanations, theories, and meanings.
For the sciences, application of methodological reductionism attempts explanation of entire systems in terms of their individual, constituent parts and their interactions. For example, the temperature of a gas is reduced to nothing beyond the average kinetic energy of its molecules in motion. Thomas Nagel and others speak of 'psychophysical reductionism' (the attempted reduction of psychological phenomena to physics and chemistry), and 'physico-chemical reductionism' (the attempted reduction of biology to physics and chemistry). In a very simplified and sometimes contested form, reductionism is said to imply that a system is nothing but the sum of its parts.
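The gas example can be made explicit with the standard kinetic-theory identity for an ideal monatomic gas (a textbook relation, included here only to illustrate the reduction):

$$\left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2} k_{B} T,$$

where $m$ is the molecular mass, $v$ the molecular speed, $k_{B}$ the Boltzmann constant, and $T$ the absolute temperature. The macroscopic temperature on the right is fixed entirely by the average of the microscopic kinetic energies on the left; nothing over and above molecular motion is invoked.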
However, a more nuanced opinion is that a system is composed entirely of its parts, but the system will have features that none of the parts have (which, in essence is the basis of emergentism). "The point of mechanistic explanations is usually showing how the higher level features arise from the parts."
Other definitions are used by other authors. For example, what John Polkinghorne terms 'conceptual' or 'epistemological' reductionism is the definition provided by Simon Blackburn and by Jaegwon Kim: that form of reductionism which concerns a program of replacing the facts or entities involved in one type of discourse with other facts or entities from another type, thereby providing a relationship between them. Richard Jones distinguishes ontological and epistemological reductionism, arguing that many ontological and epistemological reductionists affirm the need for different concepts for different degrees of complexity while affirming a reduction of theories.
The idea of reductionism can be expressed by "levels" of explanation, with higher levels reducible if need be to lower levels. This use of levels of understanding in part expresses our human limitations in remembering detail. However, "most philosophers would insist that our role in conceptualizing reality [our need for a hierarchy of "levels" of understanding] does not change the fact that different levels of organization in reality do have different 'properties'."
Reductionism does not preclude the existence of what might be termed emergent phenomena, but it does imply the ability to understand those phenomena completely in terms of the processes from which they are composed. This reductionist understanding is very different from ontological or strong emergentism, which holds that what emerges in "emergence" is more than the sum of the processes from which it emerges, whether in the ontological sense or in the epistemological sense.
Ontological reductionism
Richard Jones divides ontological reductionism into two: the reductionism of substances (e.g., the reduction of mind to matter) and the reduction of the number of structures operating in nature (e.g., the reduction of one physical force to another). This permits scientists and philosophers to affirm the former while being anti-reductionists regarding the latter.
Nancey Murphy has claimed that there are two species of ontological reductionism: one that claims that wholes are nothing more than their parts; and atomist reductionism, claiming that wholes are not "really real". She admits that the phrase "really real" is apparently senseless but she has tried to explicate the supposed difference between the two.
Ontological reductionism denies the idea of ontological emergence, and claims that emergence is an epistemological phenomenon that only exists through analysis or description of a system, and does not exist fundamentally.
In some scientific disciplines, ontological reductionism takes two forms: token-identity theory and type-identity theory. In this case, "token" refers to a biological process.
Token ontological reductionism is the idea that every item that exists is a sum item. For perceivable items, it affirms that every perceivable item is a sum of items with a lesser degree of complexity. Token ontological reduction of biological things to chemical things is generally accepted.
Type ontological reductionism is the idea that every type of item is a sum type of item, and that every perceivable type of item is a sum of types of items with a lesser degree of complexity. Type ontological reduction of biological things to chemical things is often rejected.
Michael Ruse has criticized ontological reductionism as an improper argument against vitalism.
Methodological reductionism
In a biological context, methodological reductionism means attempting to explain all biological phenomena in terms of their underlying biochemical and molecular processes.
In religion
Anthropologists Edward Burnett Tylor and James George Frazer employed some religious reductionist arguments.
Theory reductionism
Theory reduction is the process by which a more general theory absorbs a special theory. It can be further divided into translation, derivation, and explanation. For example, both Kepler's laws of planetary motion and Galileo's theories of motion formulated for terrestrial objects are reducible to Newtonian mechanics because all the explanatory power of the former is contained within the latter. Furthermore, the reduction is considered beneficial because Newtonian mechanics is a more general theory—that is, it explains more events than Galileo's or Kepler's. Besides scientific theories, theory reduction more generally can be the process by which one explanation subsumes another.
In mathematics
In mathematics, reductionism can be interpreted as the philosophy that all mathematics can (or ought to) be based on a common foundation, which for modern mathematics is usually axiomatic set theory. Ernst Zermelo was one of the major advocates of such an opinion; he also developed much of axiomatic set theory. It has been argued that the generally accepted method of justifying mathematical axioms by their usefulness in common practice can potentially weaken Zermelo's reductionist claim.
Jouko Väänänen has argued for second-order logic as a foundation for mathematics instead of set theory, whereas others have argued for category theory as a foundation for certain aspects of mathematics.
The incompleteness theorems of Kurt Gödel, published in 1931, caused doubt about the attainability of an axiomatic foundation for all of mathematics. Any such foundation would have to include axioms powerful enough to describe the arithmetic of the natural numbers (a subset of all mathematics). Yet Gödel proved that, for any consistent recursively enumerable axiomatic system powerful enough to describe the arithmetic of the natural numbers, there are (model-theoretically) true propositions about the natural numbers that cannot be proved from the axioms. Such propositions are known as formally undecidable propositions. For example, the continuum hypothesis is undecidable in the Zermelo–Fraenkel set theory as shown by Cohen.
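Stated schematically (a standard textbook formulation, not tied to any source cited here): for any consistent, recursively enumerable theory $T$ strong enough to describe the arithmetic of the natural numbers, there is a Gödel sentence $G_T$ such that

$$\mathbb{N} \models G_T \qquad \text{yet} \qquad T \nvdash G_T,$$

that is, $G_T$ is true in the standard model of arithmetic but unprovable from the axioms of $T$.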
In science
Reductionist thinking and methods form the basis for many of the well-developed topics of modern science, including much of physics, chemistry and molecular biology. Classical mechanics in particular is seen as a reductionist framework. For instance, we understand the solar system in terms of its components (the sun and the planets) and their interactions. Statistical mechanics can be considered as a reconciliation of macroscopic thermodynamic laws with the reductionist method of explaining macroscopic properties in terms of microscopic components, although it has been argued that reduction in physics 'never goes all the way in practice'.
In computer science
The role of reduction in computer science can be thought of as a precise and unambiguous mathematical formalization of the philosophical idea of "theory reductionism". In a general sense, a problem (or set) is said to be reducible to another problem (or set) if there is a computable/feasible method to translate the questions of the former into the latter, so that, if one knows how to computably/feasibly solve the latter problem, then one can computably/feasibly solve the former. Thus the latter problem is at least as "hard" to solve as the former.
Reduction in theoretical computer science is pervasive in both: the mathematical abstract foundations of computation; and in real-world performance or capability analysis of algorithms. More specifically, reduction is a foundational and central concept, not only in the realm of mathematical logic and abstract computation in computability (or recursive) theory, where it assumes the form of e.g. Turing reduction, but also in the realm of real-world computation in time (or space) complexity analysis of algorithms, where it assumes the form of e.g. polynomial-time reduction.
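As a concrete illustration (a minimal sketch, not drawn from any cited source; the function names are hypothetical), the classic reduction from the independent-set decision problem to the clique decision problem simply complements the graph's edge set. Any solver for clique then answers independent-set questions, which is the precise sense in which clique is at least as "hard" as independent set:

```python
from itertools import combinations

def independent_set_to_clique(vertices, edges):
    """Many-one reduction: G has an independent set of size k
    iff the complement of G has a clique of size k. Runs in polynomial time."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    complement_edges = all_pairs - {frozenset(e) for e in edges}
    return vertices, complement_edges

def has_clique(vertices, edges, k):
    """Brute-force clique check (exponential; only for this tiny example)."""
    return any(
        all(frozenset(pair) in edges for pair in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

# The path graph a-b-c has the independent set {a, c} of size 2.
v, e = independent_set_to_clique(["a", "b", "c"], [("a", "b"), ("b", "c")])
print(has_clique(v, e, 2))  # True, so the original graph has an independent set of size 2
```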
Criticism
Free will
Philosophers of the Enlightenment worked to insulate human free will from reductionism. Descartes separated the material world of mechanical necessity from the world of mental free will. German philosophers introduced the concept of the "noumenal" realm that is not governed by the deterministic laws of "phenomenal" nature, where every event is completely determined by chains of causality. The most influential formulation was by Immanuel Kant, who distinguished between the causal deterministic framework the mind imposes on the world—the phenomenal realm—and the world as it exists for itself, the noumenal realm, which, as he believed, included free will. To insulate theology from reductionism, 19th century post-Enlightenment German theologians, especially Friedrich Schleiermacher and Albrecht Ritschl, used the Romantic method of basing religion on the human spirit, so that it is a person's feeling or sensibility about spiritual matters that comprises religion.
Causation
Most common philosophical understandings of causation involve reducing it to some collection of non-causal facts. Opponents of these reductionist views have given arguments that the non-causal facts in question are insufficient to determine the causal facts.
Alfred North Whitehead's metaphysics opposed reductionism. He referred to this as the "fallacy of misplaced concreteness". His scheme was to frame a rational, general understanding of phenomena, derived from our reality.
In science
An alternative term for ontological reductionism is fragmentalism, often used in a pejorative sense. In cognitive psychology, George Kelly developed "constructive alternativism" as a form of personal construct psychology and an alternative to what he considered "accumulative fragmentalism". For this theory, knowledge is seen as the construction of successful mental models of the exterior world, rather than the accumulation of independent "nuggets of truth". Others argue that inappropriate use of reductionism limits our understanding of complex systems. In particular, ecologist Robert Ulanowicz says that science must develop techniques to study ways in which larger scales of organization influence smaller ones, and also ways in which feedback loops create structure at a given level, independently of details at a lower level of organization. He advocates and uses information theory as a framework to study propensities in natural systems. The limits of the application of reductionism are claimed to be especially evident at levels of organization with greater complexity, including living cells, biological neural networks, ecosystems, society, and other systems formed from assemblies of large numbers of diverse components linked by multiple feedback loops.
See also
Antireductionism
Eliminative materialism
Emergentism
Further facts
Materialism
Multiple realizability
Physicalism
Technological determinism
References
Further reading
Churchland, Patricia (1986), Neurophilosophy: Toward a Unified Science of the Mind-Brain. MIT Press.
Dawkins, Richard (1976), The Selfish Gene. Oxford University Press; 2nd edition, December 1989.
Dennett, Daniel C. (1995) Darwin's Dangerous Idea. Simon & Schuster.
Descartes (1637), Discourses, Part V.
Dupre, John (1993), The Disorder of Things. Harvard University Press.
Galison, Peter and David J. Stump, eds. (1996), The Disunity of the Sciences: Boundaries, Contexts, and Power. Stanford University Press.
Jones, Richard H. (2013), Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books.
Laughlin, Robert (2005), A Different Universe: Reinventing Physics from the Bottom Down. Basic Books.
Nagel, Ernest (1961), The Structure of Science. New York.
Pinker, Steven (2002), The Blank Slate: The Modern Denial of Human Nature. Viking Penguin.
Ruse, Michael (1988), Philosophy of Biology. Albany, NY.
Rosenberg, Alexander (2006), Darwinian Reductionism or How to Stop Worrying and Love Molecular Biology. University of Chicago Press.
Scerri, Eric. The reduction of chemistry to physics has become a central aspect of the philosophy of chemistry; see several articles by this author.
Weinberg, Steven (1992), Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature, Pantheon Books.
Weinberg, Steven (2002) describes what he terms the culture war among physicists in his review of A New Kind of Science.
Capra, Fritjof (1982), The Turning Point.
Lopez, F., Il pensiero olistico di Ippocrate. Riduzionismo, antiriduzionismo, scienza della complessità nel trattato sull'Antica Medicina, vol. IIA, Ed. Pubblisfera, Cosenza Italy 2008.
Maureen L Pope, Personal construction of formal knowledge, Humanities Social Science and Law, 13.4, December, 1982, pp. 3–14
Tara W. Lumpkin, Perceptual Diversity: Is Polyphasic Consciousness Necessary for Global Survival? December 28, 2006, bioregionalanimism.com
Vandana Shiva, 1995, Monocultures, Monopolies and the Masculinisation of Knowledge. International Development Research Centre (IDRC) Reports: Gender Equity. 23: 15–17. Gender and Equity (v. 23, no. 2, July 1995)
Kukla, Andre and Joel Walmsley. The Anti-Realist Side of the Debate: A Theory's Predictive Success does not Warrant Belief in the Unobservable Entities it Postulates.
External links
Alyssa Ney, "Reductionism" in: Internet Encyclopedia of Philosophy.
Ingo Brigandt and Alan Love, "Reductionism in Biology" in: The Stanford Encyclopedia of Philosophy.
John Dupré: The Disunity of Science—an interview at the Galilean Library covering criticisms of reductionism.
Monica Anderson: Reductionism Considered Harmful
Reduction and Emergence in Chemistry, Internet Encyclopedia of Philosophy.
Metatheory of science
Metaphysical theories
Sociological theories
Analytic philosophy
Epistemology of science
Cognition
Epistemological theories
Emergence
Ethics of technology
The ethics of technology is a sub-field of ethics addressing ethical questions specific to the technology age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. Technology ethics is the application of ethical thinking to growing concerns as new technologies continue to rise in prominence.
The topic has evolved as technologies have developed. Technology poses ethical dilemmas for producers and consumers alike.
The subject of technoethics, or the ethical implications of technology, has been studied by different philosophers such as Hans Jonas and Mario Bunge.
Technoethics
Technoethics (TE) is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains (such as communications, social sciences, information studies, technology studies, applied ethics, and philosophy) to provide insights on ethical dimensions of technological systems and practices for advancing a technological society.
Technoethics views technology and ethics as socially embedded enterprises and focuses on discovering the ethical uses for technology, protecting against the misuse of technology, and devising common principles to guide new advances in technological development and application to benefit society. Typically, scholars in technoethics have a tendency to conceptualize technology and ethics as interconnected and embedded in life and society. Technoethics denotes a broad range of ethical issues revolving around technology – from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life.
Technoethical perspectives are constantly in transition as technology advances in areas unseen by its creators and as users change the intended uses of new technologies. Humans cannot be separated from these technologies because they are an inherent part of consciousness. The short-term and long-term ethical considerations for technologies engage the creator, producer, user, and governments.
With the increasing impact emerging technologies have on society, the importance of assessing their ethical and social issues constantly grows. While such technologies provide opportunities for novel applications and the potential to transform society on a global scale, their rise is accompanied by new ethical challenges and problems that must be considered. This becomes more difficult with the increasing pace at which technology progresses and the increasing impact it has on society as it seemingly outruns human control. The concept of technoethics focuses on expanding the knowledge of existing research in the areas of technology and ethics in order to provide a holistic construct for the different aspects and subdisciplines of ethics related to technology-related human activity like economics, politics, globalization, and scientific research. It is also concerned with the rights and responsibilities that designers and developers have regarding the outcomes of the respective technology. This is of particular importance with the emergence of algorithmic technology capable of making decisions autonomously and the related issues of developer or data bias influencing these decisions. To work against the manifestation of these biases, the balance between human and technological accountability for ethical failure has to be carefully evaluated; this has shifted the view of technology from a merely positive tool towards the perception of technology as inherently neutral. Technoethics thus has to focus on both sides of the human-technology equation when confronted with upcoming technology innovations and applications.
With technology continuing to advance over time, new technoethical issues come into play. For instance, discussions of genetically modified organisms (GMOs) have brought about major concerns regarding technology, ethics, and safety. There is also the question of whether artificial intelligence (AI) should be trusted and relied upon. These are just some examples of how advancements in technology will affect the ethical values of humans in the future.
Technoethics finds application in various areas of technology. The following key areas are mentioned in the literature:
Computer ethics: Focuses on the use of technology in areas including visual technology, artificial intelligence, and robotics.
Engineering ethics: Dealing with professional standards of engineers and their moral responsibilities to the public.
Internet ethics and cyberethics: Concerning guarding against unethical Internet activity.
Media and communication technoethics: Concerning ethical issues and responsibilities when using mass media and communication technology.
Professional technoethics: Concerning all ethical considerations that revolve around the role of technology within professional conduct like in engineering, journalism, or medicine.
Educational technoethics: Concerning the ethical issues and outcomes associated with using technology for educational aims.
Biotech ethics: Linked to advances in bioethics and medical ethics like considerations arising in cloning, human genetic engineering, and stem cell research.
Environmental technoethics: Concerning technological innovations that impact the environment and life.
Nanoethics: Concerning ethical and social issues associated with developments in the alteration of matter at the level of atoms and molecules in various disciplines including computer science, engineering, and biology.
Military technoethics: Concerning ethical issues associated with technology use in military action.
Definitions
Ethics address the issues of what is 'right', what is 'just', and what is 'fair'. Ethics describe moral principles influencing conduct; accordingly, the study of ethics focuses on the actions and values of people in society (what people do and how they believe they should act in the world).
Technology is the branch of knowledge that deals with the creation and use of technical means and their interrelation with life, society, and the environment; it may draw upon a variety of fields, including industrial arts, engineering, applied science, and pure science. Technology "is core to human development and a key focus for understanding human life, society and human consciousness."
Using theories and methods from multiple domains, technoethics provides insights on ethical aspects of technological systems and practices, examines technology-related social policies and interventions, and provides guidelines for how to ethically use new advancements in technology. Technoethics provides a systems theory and methodology to guide a variety of separate areas of inquiry into human-technological activity and ethics. Moreover, the field unites both technocentric and bio-centric philosophies, providing "conceptual grounding to clarify the role of technology to those affected by it and to help guide ethical problem solving and decision making in areas of activity that rely on technology." As a bio-techno-centric field, technoethics "has a relational orientation to both technology and human activity"; it provides "a system of ethical reference that justifies that profound dimension of technology as a central element in the attainment of a 'finalized' perfection of man."
Fundamental problems
On one view, technology is merely a tool, like a device or gadget. On this view it is not possible for technology itself to possess a moral or ethical quality; the toolmaker or end user is the one who decides the morality or ethicality behind its use. "Ethics of technology" refers to two basic subdivisions:
The ethics involved in the development of new technology—whether it is always, never, or contextually right or wrong to invent and implement a technological innovation.
The ethical questions that are exacerbated by the ways in which technology extends or curtails the power of individuals—how standard ethical questions are changed by the new powers.
In the former case, the ethics of such things as computer security and computer viruses asks whether the very act of innovation is an ethically right or wrong act. Similarly, does a scientist have an ethical obligation to produce or fail to produce a nuclear weapon? What are the ethical questions surrounding the production of technologies that waste or conserve energy and resources? What are the ethical questions surrounding the production of new manufacturing processes that might inhibit employment, or might inflict suffering in the third world?
In the latter case, the ethics of technology quickly break down into the ethics of various human endeavors as they are altered by new technologies. For example, bioethics is now largely consumed with questions that have been exacerbated by the new life-preserving technologies, new cloning technologies, and new technologies for implantation. In law, the right of privacy is being continually attenuated by the emergence of new forms of surveillance and anonymity. The old ethical questions of privacy and free speech are given new shape and urgency in an Internet age. Tracing devices such as RFID, biometric analysis and identification, and genetic screening all take old ethical questions and amplify their significance. The fundamental problem is that as society produces and advances the technology we use in all areas of life, from work and school to medicine and surveillance, we receive great benefits, but those benefits carry underlying costs. And as technology evolves further, some technological innovations can be seen as inhumane while the same innovations can be seen by others as creative, life-changing, and innovative.
History of technoethics
Though the ethical consequences of new technologies have existed since Socrates' attack on writing in Plato's dialogue Phaedrus, the formal field of technoethics has only existed for a few decades. The first traces of technoethics can be seen in Dewey and Peirce's pragmatism. With the advent of the Industrial Revolution, it was easy to see that technological advances were going to influence human activity; this is why these thinkers put emphasis on the responsible use of technology.
The term "technoethics" was coined in 1977 by the philosopher Mario Bunge to describe the responsibilities of technologists and scientists to develop ethics as a branch of technology. Bunge argued that the current state of technological progress was guided by ungrounded practices based on limited empirical evidence and trial-and-error learning. He recognized that "the technologist must be held not only technically but also morally responsible for whatever he designs or executes: not only should his artifacts be optimally efficient but, far from being harmful, they should be beneficial, and not only in the short run but also in the long term." He recognized a pressing need in society to create a new field called 'technoethics' to discover rationally grounded rules for guiding science and technological progress.
With the spurt in technological advances came technological inquiry. Societal views of technology were changing; people were becoming more critical of the developments that were occurring, and scholars were emphasizing the need to understand, to take a deeper look at, and to study the innovations. Associations were uniting scholars from different disciplines to study the various aspects of technology, the main disciplines being philosophy, the social sciences, and science and technology studies (STS). Though many fields of technology study were already focused on ethics, each discipline was separated from the others, despite the potential for the information to intertwine and reinforce itself. As technologies became increasingly developed in each discipline, their ethical implications paralleled their development and became increasingly complex. Each branch was eventually united under the term technoethics, so that all areas of technology could be studied and researched based on existing, real-world examples and a variety of knowledge, rather than just discipline-specific knowledge.
Technology and ethics
Ethics theories
Technoethics involves the ethical aspects of technology within a society that is shaped by technology. This brings up a series of social and ethical questions regarding new technological advancements and new boundary-crossing opportunities. Before moving forward and attempting to address any ethical questions and concerns, it is important to review the major ethical theories to develop a perspective foundation:
Utilitarianism (Bentham) is an ethical theory which attempts to maximize happiness and reduce suffering for the greatest number of people. Utilitarianism focuses on results and consequences rather than rules.
Duty ethics (Kant) notes the obligations that one has to society and follows society's universal rules. It focuses on the rightness of actions instead of the consequences, focusing on what an individual should do.
Virtue ethics is another main perspective in normative ethics. It highlights the role of the virtues contained in an individual's character in determining or evaluating ethical behaviour in society. By practicing and honing honest and generous behavior, Aristotle, the philosopher associated with this theory, believed people will make the right choice when faced with an ethical decision.
Relationship ethics states that care and consideration are both derived from human communication. Ethical communication is therefore at the core of maintaining healthy relationships.
Historical framing of technology – four main periods
Greek civilization defined technology as techné. Techné is "the set of principles, or rational method, involved in the production of an object or the accomplishment of an end; the knowledge of such principles or method; art." This conceptualization of technology was used during the early Greek and Roman period to denote the mechanical arts, construction, and other efforts to create, in Cicero's words, a "second nature" within the natural world.
The modern conceptualization of technology as invention materialized in the 17th century in Bacon's futuristic vision of a perfect society governed by engineers and scientists in Salomon's House, raising the importance of technology in society.
The German term "Technik" was used in the 19th and 20th centuries. Technik is the totality of processes, machines, tools and systems employed in the practical arts and engineering. Weber popularized the term as it came to be used in broader fields. Mumford held that technics underlay civilization, distinguishing three phases: the eotechnic (before 1750), the paleotechnic (1750-1890) and the neotechnic (after 1890). He placed technics at the center of social life, in close connection to social progress and societal change. Mumford also said that a machine cannot be divorced from its larger social pattern, for it is this pattern that gives it meaning and purpose.
Rapid advances in technology provoked a negative reaction from scholars who saw technology as a controlling force in society with the potential to destroy how people live (Technological Determinism). Heidegger warned people that technology was dangerous in that it exerted control over people through its mediating effects, thus limiting authenticity of experience in the world that defines life and gives life meaning. It is an intimate part of the human condition, deeply entrenched in all human history, society and mind.
Significant technoethical developments in society
Many advancements within the past decades have added to the field of technoethics. Multiple concrete examples have illustrated the need to consider ethical dilemmas in relation to technological innovations. Beginning in the 1940s, influenced by the British eugenics movement, the Nazis conducted "racial hygiene" experiments, causing widespread, global anti-eugenic sentiment. In the 1950s the first satellite, Sputnik 1, orbited the Earth; the Obninsk Nuclear Power Plant became the first nuclear power plant to open; and the American nuclear tests took place. The 1960s brought the first crewed Moon landing; ARPANET was created, which later led to the Internet; the first heart transplantation was completed; and the Telstar communications satellite was launched. The 1970s, 80s, 90s, 2000s and 2010s also brought multiple developments.
Technological consciousness
Technological consciousness is the relationship between humans and technology. Technology is seen as an integral component of human consciousness and development. Technology, consciousness and society are intertwined in a relational process of creation that is key to human evolution. Technology is rooted in the human mind, and is made manifest in the world in the form of new understandings and artifacts. The process of technological consciousness frames the inquiry into ethical responsibility concerning technology by grounding technology in human life.
The structure of technological consciousness is relational but also situational, organizational, aspectual and integrative. Technological consciousness situates new understandings by creating a context of time and space. As well, technological consciousness organizes disjointed sequences of experience under a sense of unity that allows for a continuity of experience. The aspectual component of technological consciousness recognizes that individuals can only be conscious of aspects of an experience, not the whole thing. For this reason, technology manifests itself in processes that can be shared with others. The integrative characteristics of technological consciousness are assimilation, substitution and conversation. Assimilation allows for unfamiliar experiences to be integrated with familiar ones. Substitution is a metaphorical process allowing for complex experiences to be codified and shared with others — for example, language. Conversation is the sense of an observer within an individual's consciousness, providing stability and a standpoint from which to interact with the process.
Misunderstandings of consciousness and technology
According to Rocci Luppicini, the common misunderstandings about consciousness and technology are as follows. The first misunderstanding is that consciousness is only in the head; according to Luppicini, consciousness is not only in the head, meaning that "[c]onsciousness is responsible for the creation of new conscious relations wherever imagined, be it in the head, on the street or in the past." The second misunderstanding is that technology is not a part of consciousness. Technology is a part of consciousness, as "the conceptualization of technology has gone through drastic changes." The third misunderstanding is that technology controls society and consciousness, by which Luppicini means "that technology is rooted in consciousness as an integral part of mental life for everyone. This understanding will most likely alter how both patients and psychologists deal with the trials and tribulations of living with technology." The last misunderstanding is that society controls technology and consciousness: "…(other) accounts fail to acknowledge the complex relational nature of technology as an operation within mind and society. This realization shifts the focus on technology to its origins within the human mind as explained through the theory of technological consciousness."
Consciousness (C) is only in the head: in fact, C is responsible for the creation of new conscious relations wherever imagined
Technology (T) is not part of C: in fact, humans cannot be separated from technology
T controls society and C: in fact, technology cannot control the mind
Society controls T and C: in fact, this view fails to take into account how consciousness shapes what technology gets developed
Types of technology ethics
Technology ethics are principles that can be used to govern technology, including factors like risk management and individual rights. They are used to understand and resolve moral issues that have to do with the development and application of technology of different types.
There are many types of technology ethics:
Access rights: access to empowering technology as a right
Accountability: decisions made for who is responsible when considering success or harm in technological advancements
Digital rights: protecting intellectual property rights and privacy rights
Environment: how to produce technology that could harm the environment
Existential risk: technologies that represent a threat to the global quality of life pertaining to extinction
Freedom: technology that is used to control a society raising questions related to freedom and independence
Health and safety: health and safety risks that are increased and imposed by technologies
Human Enhancement: human genetic engineering and human-machine integration
Human judgement: when can decisions be judged by automation, and when do they require a reasonable human?
Over-automation: when does automation decrease quality of life and start affecting society?
Precautionary principle: who decides that developing a new technology is safe for the world?
Privacy: protection of privacy rights
Security: Is due diligence required to ensure information security?
Self-replicating technology: should self-replication be the norm?
Technology transparency: clearly explaining how a technology works and what its intentions are
Terms of service: ethics related to legal agreements
Ethical challenges
Ethical challenges arise in many different situations:
Human knowledge processes
Workplace discrimination
Strained work-life balance in technologically enhanced work environments: Many people find that simply having the technology that allows one to work while at home increases stress levels. In one study, 70% of respondents said that because of technology, work has crept into their personal lives.
Digital divide: Inequalities in information access for parts of the population
Unequal opportunities for scientific and technological development
Norris says that access to information and knowledge resources within a knowledge society tends to favour the economically privileged, who have greater access to the technological tools needed to reach information and knowledge resources disseminated online, and points to the privatization of knowledge
Inequality in terms of how scientific and technological knowledge is developed around the globe. Developing countries do not have the same opportunities as developed countries to invest in costly large-scale research and expensive research facilities and instrumentation
Organizational responsibility and accountability issues
Intellectual property ownership issues
Information overload: Information processing theory asserts that working memory has a limited capacity, and that too much information can lead to cognitive overload, resulting in loss of information from short-term memory
The knowledge society is intertwined with changing technology requiring new skills of its workforce. Cutler says that there is a perception that older workers lack experience with new technology, and that retraining programs may be less effective and more expensive for older workers. Cascio says that there is a growth of virtual organizations. Saetre & Sornes say that the blurring of traditional time and space boundaries has also led in many cases to the blurring of work and personal life
The negative impacts that many scientific and technological innovations have on humans and the environment have led to some skepticism and resistance to increasing dependence on technology within the knowledge society. Doucet calls for city empowerment to have the courage and foresight to make decisions that are acceptable to inhabitants, rather than succumb to global consumer capitalism and the pressures of international corporations on national and local governments
Scientific and technological innovations that have transformed organizational life within a global economy have also supplanted human autonomy and control in work within a technologically oriented workplace
The persuasive potential of technology raises the question of "how sensitive ... designers and programmers [should] be to the ethics of the persuasive technology they design." Technoethics can be used to determine the level of ethical responsibility that should be associated with outcomes of the use of technology, whether intended or unintended
Rapidly changing organizational life and the history of unethical business practices have given rise to public debates concerning organizational responsibility and trust. The advent of virtual organizations and the increase in remote work have amplified ethical problems by providing more opportunities for fraud and the production of misinformation. Concerted efforts are required to uphold ethical values in advancing new knowledge and tools within societal relations which do not exclude people or limit the liberties of some at the expense of others
Artificial intelligence: Artificial intelligence seems to be one of the most discussed challenges when it comes to ethics. In order to avoid these ethical challenges, some solutions have been established. First and foremost, AI should be developed for the common good and benefit of humanity. Secondly, it should operate on principles of intelligibility and fairness. It should also not be used to diminish the data rights or privacy of individuals, families, or communities. It is also believed that all citizens should have the right to be educated about artificial intelligence in order to be able to understand it. Finally, the autonomous power to hurt, destroy, or deceive humans should never be vested in artificial intelligence.
Current issues
Copyrights
Digital copyrights are a complicated issue because there are multiple sides to the discussion: there are ethical considerations surrounding the artist, producer, and end user, as well as relationships with other countries and the impact on the use of content housed in those countries. In Canada, national laws such as the Copyright Act and the history behind Bill C-32 are just the beginning of the government's attempt to shape the "wild west" of Canadian Internet activities. The ethical considerations behind Internet activities such as peer-to-peer file sharing involve every layer of the discussion: the consumer, artist, producer, music/movie/software industry, national government, and international relations. Overall, technoethics forces a "big picture" approach to all discussions on technology in society. Although time-consuming, this "big picture" approach offers some level of reassurance when considering that any law put in place could drastically alter the way we interact with our technology and thus the direction of work and innovation in the country.
The use of copyrighted material to create new content is a hotly debated topic. The emergence of the musical "mashup" genre has compounded the issue of creative licensing. A moral conflict exists between those who believe that copyright protects against any unauthorized use of content, and those who maintain that sampling and mash-ups are acceptable musical styles and, though they use portions of copyrighted material, the result is a new creative piece which is the property of the creator, not of the original copyright holder. Whether the mashup genre should be allowed to use portions of copyrighted material to create new content remains under debate.
Cybercriminality
Cybercrime is an umbrella term covering many subcategories. Cyber theft, such as online fraud, identity theft, and digital piracy, can be classified as one sector. Another is cyber-violence: online behavior ranging from hate speech, harassment, and cyberstalking to behavior that leads to physical, psychological, or emotional assault on the well-being of an individual. Cyber obscenity is another category, involving child sexual exploitation material. Cyber trespass is unauthorized access to a computer system. Cybercrime encompasses many other areas where technology and computers are used to assist in committing various forms of crime.
For many years, new technologies have occupied an important place in social, cultural, political, and economic life. Thanks to the democratization of access to computing and the globalization of networks, the number of exchanges and transactions is constantly growing.
In the article, "The Dark Figure of Online Property Crime: Is Cyberspace Hiding a Crime Wave?", the authors analyze evidence that reveals cyber criminality rates are increasing as the typical street crimes gradually decrease. With the increase in cyber criminality, it is imperative to research more information on how to increase cyber security. The issue with increasing cyber security is that the more laws to protect people, the more citizens would feel threatened that their freedom is being compromised. One way to avoid making people feel threatened by all the security measures and protocols is by being as clear and straightforward as possible. Gregory Nojeim in his article "Cybersecurity and Freedom on the Internet" state, "Transparency in the cybersecurity program will build the confidence and trust that is essential to industry and public support for cybersecurity measures." It is important to create ethical laws that protect privacy, innovation, and consumers' freedom.
Many people are exploiting the facilities and anonymity that modern technologies offer in order to commit multiple criminal activities. Cybercrime is one of the fastest growing areas of crime. The problem is that some laws that profess to protect people from those who would do wrong things via digital means also threaten to take away people's freedom.
Privacy vs. security: Full-body airport scanners
Since the introduction of full body X-ray scanners to airports in 2007, many concerns over traveler privacy have arisen. Individuals are asked to step inside a rectangular machine that takes an alternate wavelength image of the person's naked body for the purpose of detecting metal and non-metal objects being carried under the clothes of the traveler. This screening technology comes in two forms, millimeter wave technology (MM-wave technology) or backscatter X-rays (similar to x-rays used by dentists). Full-body scanners were introduced into airports to increase security and improve the quality of screening for objects such as weapons or explosives due to an increase of terrorist attacks involving airplanes occurring in the early 2000s.
Ethical concerns of both travelers and academic groups include fear of humiliation due to the disclosure of anatomic or medical details, exposure to a low level of radiation (in the case of backscatter X-ray technology), violation of modesty and personal privacy, clarity of operating procedures, the use of this technology to discriminate against groups, and potential misuse of this technology for reasons other than detecting concealed objects. Also, people whose religious beliefs require them to remain physically covered (arms, legs, face, etc.) at all times may be morally opposed to stepping inside this virtually intrusive scanning technology. The Centre for Society, Science and Citizenship has discussed these and other ethical concerns and suggests recommendations for the use of this technology in its report titled "Whole Body Imaging at airport checkpoints: the ethical and policy context" (2010).
Privacy and GPS technologies
The discourse around GPS tracking devices and geolocation technologies, and their ethical ramifications for privacy, is growing as the technology becomes more prevalent in society. An editorial in the New York Times' Sunday Review on September 22, 2012, focused on the ethical ramifications of a case in which GPS technology in a drug offender's cellphone was used to locate and imprison him. Now that most people carry a cellphone on their person, the authorities have the ability to know the location of a large majority of citizens at any time. The ethical discussion can now be framed from a legal perspective. As raised in the editorial, these geolocation devices raise stark questions about citizens' Fourth Amendment protection against unreasonable searches. The reach of this issue is not limited to the United States; it affects other democratic states that uphold similar rights and freedoms against unreasonable searches.
These geolocation technologies affect not only how citizens interact with their state but also how employees interact with their workplaces. As discussed in the Canadian Broadcasting Corporation article "GPS and privacy", a growing number of employers are installing geolocation technologies in "company vehicles, equipment and cellphones" (Hein, 2007). Both academics and unions find these new powers of employers to be in direct contradiction with civil liberties. This changing relationship between employee and employer, driven by the integration of GPS technology into everyday life, is part of a larger ethical discussion about appropriate levels of privacy. This discussion will only become more prevalent as the technology becomes more popular.
Genetically modified organisms (GMOs)
Genetically modified foods have become quite common in developed countries around the world, boasting greater yields, higher nutritional value, and greater resistance to pests, but there are still many ethical concerns regarding their use. Even commonplace genetically modified crops like corn raise questions of the ecological consequences of unintended cross pollination, potential horizontal gene transfer, and other unforeseen health concerns for humans and animals.
Trademarked organisms like the "Glofish" are a relatively new occurrence. These zebrafish, genetically modified to appear in several fluorescent colours and sold as pets in the United States, could have unforeseen effects on freshwater environments were they ever to breed in the wild.
Provided it receives approval from the U.S. Food and Drug Administration (FDA), another new type of fish may arrive soon. The "AquAdvantage salmon", engineered to reach maturity within roughly 18 months (as opposed to three years in the wild), could help meet growing global demand. There are health and environmental concerns associated with the introduction of any new GMO, but more importantly this scenario highlights the potential economic impact a new product may have. The FDA performs an economic impact analysis to weigh, for example, the consequences these new genetically modified fish may have for the traditional salmon fishing industry against the long-term gain of a cheaper, more plentiful source of salmon. These technoethical assessments, which regulatory organizations like the FDA increasingly face worldwide, are vitally important in determining how GMOs, with all of their potential beneficial and harmful effects, will be handled moving forward.
Pregnancy screening technology
For over 40 years, newborn screening has been a triumph of the 20th-century public health system. Through this technology, millions of parents are given the opportunity to screen for and test a number of disorders, sparing their children death or complications such as intellectual disability. However, the technology is growing at a pace that outstrips researchers' and practitioners' ability to fully understand how to treat diseases and provide families in need with the resources to cope.
A newer screening procedure, tandem mass spectrometry, "measures levels and patterns of numerous metabolites in a single drop of blood, which are then used to identify potential diseases. Using this same drop of blood, tandem mass spectrometry enables the detection of at least four times the number of disorders than was possible with previous technologies." This allows for a cost-effective and fast method of newborn screening.
However, critics of tandem mass spectrometry and technologies like it are concerned about the adverse consequences of expanding newborn screening technology and the lack of the appropriate research and infrastructure needed to provide optimum medical services to patients. Further concerns include "diagnostic odysseys", in which a patient aimlessly continues to search for a diagnosis where none exists.
Among other consequences, this technology raises the issue of whether individuals other than newborns will benefit from newborn screening practices. A reconceptualization of the purpose of this screening will have far-reaching economic, health, and legal impacts. This discussion is only just beginning and requires an informed citizenry to reach legal, if not moral, consensus on how far we as a society are comfortable taking this technology.
Citizen journalism
Citizen journalism is a concept describing citizens who wish to act as professional journalists or media people by "collecting, reporting, analyzing, and disseminating news and information". According to Jay Rosen, citizen journalists are "the people formerly known as the audience," who "were on the receiving end of a media system that ran one way, in a broadcasting pattern, with high entry fees and a few firms competing to speak very loudly while the rest of the population listened in isolation from one another—and who today are not in a situation like that at all. ... The people formerly known as the audience are simply the public made realer, less fictional, more able, less predictable".
The internet has provided society with a modern and accessible public space. Due to the openness of the internet, there are discernible effects on the traditional profession of journalism. Although the concept of citizen journalism is a seasoned one, "the presence of online citizen journalism content in the marketplace may add to the diversity of information that citizens have access to when making decisions related to the betterment of their community or their life". The emergence of online citizen journalism is fueled by the growing use of social media websites to share information about current events and issues locally, nationally and internationally.
The open and instantaneous nature of the internet affects the criteria of information quality on the web. A journalistic code of ethics is not instilled for those who are practicing citizen journalism. Journalists, whether professional or citizen, have needed to adapt to new priorities of current audiences: accessibility, quantity of information, quick delivery and aesthetic appeal. Thus, technology has affected the ethical code of the profession of journalism with the popular free and instant sharing qualities of the internet. Professional journalists have had to adapt to these new practices to ensure that truthful and quality reporting is being distributed. The concept can be seen as a great advancement in how society communicates freely and openly or can be seen as contributing to the decay of traditional journalistic practices and codes of ethics.
Other issues to consider:
Privacy concerns: location services on mobile devices, which can reveal a person's whereabouts to others if the feature is turned on; social media; online banking; new capabilities of cellular devices; Wi-Fi; etc.
New music technology: Electronic music is more common today thanks to new technology for creating it, as well as more advanced recording technology.
Recent developments
Despite the amassing body of scholarly work related to technoethics beginning in the 1970s, only recently has it become institutionalized and recognized as an important interdisciplinary research area and field of study. In 1998, the Epson Foundation founded the Instituto de Tecnoética in Spain under the direction of Josep Esquirol. This institute has actively promoted technoethical scholarship through awards, conferences, and publications. This helped encourage scholarly work for a largely European audience. The major driver for the emergence of technoethics can be attributed to the publication of major reference works available in English and circulated globally. The "Encyclopedia of Science, Technology, and Ethics" included a section on technoethics which helped bring it into mainstream philosophy. This helped to raise further interest leading to the publication of the first reference volume in the English language dedicated to the emerging field of Technoethics. The two volume Handbook of Research on Technoethics explores the complex connections between ethics and the rise of new technologies (e.g., life-preserving technologies, stem cell research, cloning technologies, new forms of surveillance and anonymity, computer networks, Internet advancement, etc.) This recent major collection provides the first comprehensive examination of technoethics and its various branches from over 50 scholars around the globe. The emergence of technoethics can be juxtaposed with a number of other innovative interdisciplinary areas of scholarship which have surfaced in recent years such as technoscience and technocriticism.
Technology and ethics in the music industry
Developments in technology have brought both positive and negative changes to the music industry. A main concern is piracy and illegal downloading; with everything available through the internet, a great deal of music (as well as TV shows and movies) has become easily accessible to download and upload for free. This creates new challenges for artists, producers, and copyright law. A positive advance for the industry is a whole new genre of music: computers are being used to create electronic music, along with synthesizers (computerized/electronic keyboards). This type of music is rapidly becoming more common and more widely listened to. These advances have allowed the industry to try new things and make new explorations.
Because the internet is not controlled by a centralized power, users can maintain anonymity and find loopholes to avoid consequences for using peer-to-peer technology. The peer-to-peer network allows users to connect to a computer network and freely trade songs. Many companies, like Napster, have taken advantage of this because the protection of intellectual property is close to impossible on the internet. Digital and downloadable music has become a severe threat to major record companies. Associated digital music technologies have greatly changed the power dynamics among major record companies, music consumers, and artists. This change in power dynamics has not only provided more opportunities for independent music labels but also reduced costs for music.
The digital environment in the music industry is always evolving. "The industry is beginning to work at adapting to the digital environment and downturns in a business performance like by online distribution and sales; harnessing visibility events for sales momentum; new capabilities for artist management in the digital age and by leveraging online communities to influence product development, among others". These new capabilities and new developments need strong intellectual property regulations to protect artists.
Technology is a pillar in the music industry; therefore, it is imperative to have strong technology ethics. Copyright protections and legislation help artists trademark their music and protect their intellectual property. Protecting intellectual property in the music industry becomes tricky when music firms are in the process of incorporating new technologies and methodologies, which forces firms to be innovative and update the industry standards.
Technology and ethics during the coronavirus pandemic
As of April 20, 2020, there were over 43 contact tracing apps available globally. Countries are in the process of creating their own methods of digitally tracing coronavirus status (symptoms, confirmed infections, exposures). Apple and Google are working together on a shared solution that helps with contact tracing around the world. Since this is a global pandemic with no end in sight, the restriction of some fundamental rights and freedoms may be ethically justifiable; it may even be unethical not to use these tracing solutions to slow the spread. The European Convention on Human Rights, the United Nations International Covenant on Civil and Political Rights, and the United Nations Siracusa Principles all indicate when it is ethical to restrict the rights of the population to prevent the spread of infectious disease. All three documents state that the circumstances for restricting rights must be time-bound and meet standards of necessity, proportionality, and scientific validity. We must evaluate whether the gravity of the situation justifies the potential negative impact; whether the evidence shows that the technology will work, is timely, will be adopted by enough people, and yields accurate data and insights; and whether the technology will only be temporary. These documents also provide guidelines on how to ethically develop and design technologies, which matters both for effectiveness and for security.
The development of technology has enhanced the ability to obtain, track, and share data. Technology has been mobilized by governments around the world to combat COVID-19, which has brought attention to several ethical issues. Governments have implemented technologies such as smartphone metadata and Bluetooth applications to contact trace and notify the public of important information. There are implications for privacy, as technologies such as metadata have the capacity to track every movement of an individual. Due to the coronavirus pandemic, contact tracing and other tracking apps have been implemented globally in order to fight the pandemic. Countries across the globe have been developing various methods of digitally tracing the coronavirus, covering outbreak origin, symptoms, confirmed positives, and potential exposures, while trying to limit the impact on individual privacy. In 2020, the Australian government released an app that lets phones communicate through Bluetooth rather than metadata or GPS, which can have a bigger impact on individual privacy. The technology records individuals who have been in close proximity by connecting through their phones, retaining the data for a certain period before it deletes itself. The app does not track individuals' locations but can still determine whether they have had close contact with those who tested positive or were exposed.
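A minimal sketch in Python of how such a Bluetooth contact log with time-bounded retention might work; the token names, the 21-day retention window, and the data structures are illustrative assumptions, not the actual design of any national app or of the Apple and Google protocol:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import List, Set

    RETENTION_DAYS = 21  # assumed retention window; real apps vary by jurisdiction

    @dataclass
    class Encounter:
        peer_token: str    # rotating anonymous identifier broadcast by the other phone
        seen_at: datetime  # when the two phones were in Bluetooth range

    @dataclass
    class ContactLog:
        encounters: List[Encounter] = field(default_factory=list)

        def record(self, peer_token: str, seen_at: datetime) -> None:
            # Store a proximity event; note that no GPS location is kept.
            self.encounters.append(Encounter(peer_token, seen_at))

        def purge_expired(self, now: datetime) -> None:
            # Delete records older than the retention window, as such apps promise.
            cutoff = now - timedelta(days=RETENTION_DAYS)
            self.encounters = [e for e in self.encounters if e.seen_at >= cutoff]

        def matches(self, infected_tokens: Set[str]) -> List[Encounter]:
            # Check the local log against tokens published for confirmed cases.
            return [e for e in self.encounters if e.peer_token in infected_tokens]

    # Example: record two encounters, purge old ones, check for exposure.
    log = ContactLog()
    log.record("token-a1", datetime(2020, 4, 1))
    log.record("token-b2", datetime(2020, 4, 20))
    log.purge_expired(now=datetime(2020, 4, 25))  # the April 1 record is dropped
    print([e.peer_token for e in log.matches({"token-b2"})])  # prints ['token-b2']

Keeping only rotating tokens and purging them on a schedule is what lets such an app claim exposure detection without location tracking.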
On the other hand, some countries, such as South Korea, utilized metadata technology to closely surveil their citizens. Metadata can provide a detailed description of an individual's movements, since a phone stays regularly in contact with local cellular towers to maintain reception. In South Korea, the government used individuals' metadata to convey public health messages, releasing anonymized information about the locations of individuals who tested positive for COVID-19. Similarly, in Israel, the government approved emergency regulations that allowed authorities to use a database tracking the movements of individuals who tested positive for COVID-19.
The rise of government surveillance technologies to track individuals raises many ethical concerns. As lockdowns and COVID protocols continue, the focus on protecting public health can conflict severely with individual autonomy, even where implementing certain technologies and protocols may be necessary.
Even where these three frameworks deem contact tracing ethical, contact tracing apps come at a price: they collect sensitive personal data, including health data. This poses a threat of violating HIPAA and exposing personally identifiable information (PII) if the data is not handled and processed correctly. Even if these apps are only used temporarily, they store permanent records of health, movements, and social interactions. Besides the ethical implications of personal information being stored, we must also consider the accessibility and digital literacy of users. Not everyone has access to a smartphone or a cell phone; smartphone-only applications will miss a significant portion of coronavirus data.
While it may be necessary to utilize technology to slow the spread of the coronavirus, governments need to design and deploy the technology in ways that do not breach public trust. There is a fine line between saving lives and harming the fundamental rights and freedoms of individuals.
Future developments
The future of technoethics is a promising, yet evolving field. The studies of e-technology in workplace environments are an evolving trend in technoethics. With the constant evolution of technology, and innovations coming out daily, technoethics is looking to be a rather promising guiding framework for the ethical assessments of new technologies. Some of the questions regarding technoethics and the workplace environment that have yet to be examined and treated are listed below:
Are organizational countermeasures unnecessary because they invade employee privacy?
Are surveillance cameras and computer monitoring devices invasive methods that can have ethical repercussions?
Should organizations have the right and power to impose consequences?
Artificial intelligence
Artificial intelligence covers a broad range of technologies for building smart machines and processing data so that machines can perform tasks normally completed by humans. AI may prove beneficial to human life, but it can also quickly become pervasive and dangerous. Changes in AI are difficult to anticipate and understand, as in cases of employers spying on workers, facial recognition, deepfakes, and so on. The algorithms used to implement AI may also prove biased, with detrimental effects on individuals: facial recognition technology, for example, may misidentify members of some ethnic and racial groups more often than others. These challenges have social, racial, ethical, and economic implications.
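As an illustration of how such bias can be measured, the following Python sketch compares false positive rates of a hypothetical face-matching system across demographic groups; the group labels and audit records are invented for the example:

    from collections import defaultdict

    def false_positive_rate_by_group(records):
        # records: (group, system_said_match, truly_a_match) tuples from an audit.
        false_pos = defaultdict(int)   # wrongful "match" results per group
        negatives = defaultdict(int)   # non-matching cases seen per group
        for group, predicted, actual in records:
            if not actual:
                negatives[group] += 1
                if predicted:
                    false_pos[group] += 1
        return {g: false_pos[g] / negatives[g] for g in negatives if negatives[g]}

    # Hypothetical audit data: the system misidentifies group_b twice as often.
    audit = [
        ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]
    print(false_positive_rate_by_group(audit))
    # approx. {'group_a': 0.333, 'group_b': 0.667}

A large gap between groups in a metric like this is one concrete signal of the kind of bias described above.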
Deepfakes
A deepfake is a form of media in which a person in an existing image or video is replaced with, or altered by, someone else's likeness. Alterations may include acting out fake content, false advertisement, hoaxes, and financial fraud. Deepfake technology may use machine learning or artificial intelligence. Deepfakes pose an ethical dilemma because of how accessible they are and the implications they may have for viewers' trust; they challenge the trustworthiness of the visual experience and can create negative consequences. Deepfakes contribute to the problem of "fake news" by enabling the widespread fabrication or manipulation of media that may be deliberately used for disinformation. There are four categories of deepfakes: deepfake pornography, deepfake political campaigns, deepfakes for commercial use, and creative deepfakes. Deepfakes have many harmful effects, such as deception, intimidation, and reputational harm. Deception causes viewers to accept as real footage a fabricated reality; depending on the content, the results can be detrimental, including fraudulent voter information, false candidate information, and financial fraud. Intimidation may occur by targeting a certain audience with harmful threats to generate fear; an example is deepfake revenge pornography, which also ties into reputational harm.
The accessibility of deepfakes also raises ethical dilemmas, as they can be created through apps like FakeApp, Zao, and Impressions. This accessibility may lead to legal action. In 2018 the Malicious Deep Fake Prohibition Act was introduced to protect those who may be harmed by deepfakes; such crimes can result in prosecution for harassment or sentences of imprisonment. Legal action is difficult, however, because many parties are involved in a deepfake's development: the software developer, the application used for amplification, the user of the software, and so on. Due to these many different components, it may be difficult to prosecute individuals for deepfakes.
United Nations Educational, Scientific and Cultural Organization (UNESCO)
UNESCO – a specialized intergovernmental agency of the United Nations, focusing on the promotion of education, culture, the social and natural sciences, and communication and information.
In the future, the use of principles as expressed in the UNESCO Universal Declaration on Bioethics and Human Rights (2005) will also be analyzed to broaden the description of bioethical reasoning (Adell & Luppicini, 2009).
User data
In a digital world, much of users' personal lives are stored on devices such as computers and smartphones, and we trust the companies we store our lives with to take care of our data. A topic of discussion regarding the ethics of technology is just how much data these companies really need and what they are doing with it. Another major cause for concern is the security of our personal data and privacy, whether it is leaked intentionally or not.
User data has been one of the main ethical topics as companies and government entities gain access to billions of users' information. Why do companies need so much data about their users, and are users aware that their data is being tracked? These questions have arisen over the years amid concerns about how much companies actually know. Some websites and apps now ask users for permission to track their activity across different apps, with the option to decline; previously, most companies did not ask or notify users that their app activity would be tracked. Over the years companies have faced an increasing number of data hacks in which users' data, such as credit cards, social security numbers, phone numbers, and addresses, has been leaked. Users of social networks such as Snapchat and Facebook have received calls from scammers after recent data hacks released users' phone numbers. The most recent breach to affect Facebook leaked the data of over 533 million users from 106 countries, including 32 million in the U.S. alone. The leaked information included phone numbers, Facebook IDs, full names, locations, birthdates, bios, and email addresses. Hackers and web scrapers have been selling Facebook user data on hacker forums; information for 1 million users can go for $5,000 on these forums.
Large companies share their users' data constantly. In 2018, the U.S. government cracked down on Facebook for selling user data to other companies after Facebook had declared the data in question inaccessible. One such case was the Cambridge Analytica scandal, in which Facebook sold user data to the company without the consent of the users whose data was being accessed. The data was then used for several political agendas, such as the Brexit vote and the 2016 U.S. presidential election. In an interview with CBS' 60 Minutes, Trump campaign manager Brad Parscale described in detail how he used data taken from different social media websites to create ads that were both visually appealing to potential voters and targeted at the issues they felt strongest about.
Besides swinging political races, the theft of people's data can result in serious consequences on an individual level. In some cases, hackers can breach websites or businesses that have identifying information about a person, such as their credit card number, cell phone number, and address, and upload it to the dark web for sale, if they decide not to use it for their own deviant purposes.
Drones
In the book Society and Technological Change, 8th Edition, by Rudi Volti, the author comments on unmanned aerial vehicles, also known as UAVs or drones. Once used primarily as military technology, these are becoming increasingly accessible tools to the common person for hobbies like photography. In the author's belief, this can also cause concern for security and privacy, as these tools allow people with malicious intents easier access to spying.
Outside of public areas, drones are also able to be used for spying on people in private settings, even in their own homes. In an article by today.com, the author writes about people using drones and taking videos and photographs of people in their most private moments, even in the privacy of their own home.
From an ethical perspective, drones present a multitude of issues, many of which are shaping current legal policy. These include the ethical military usage of drones, private non-military use by hobbyists for photography or potential spying, drone usage in political campaigns as a way to spread campaign messages, drone usage in the private business sector as a means of delivery, and the ethical usage of public and private airspace.
Pet cloning
As of 2020, pet cloning has become of interest to those who can afford it. For $25,000 to $50,000, anyone can clone a house pet, though there is no guarantee of getting exactly the same pet one once had. This may seem very appealing to certain animal lovers, but it raises the question of all the animals that already have no home.
There are a few different ethical questions here. The first is how this is fair to the animals suffering in the wild with no home. The second is that cloning is not only for pets but for animals in general; some people are concerned that animals will be cloned for food purposes. Another question is whether cloning is good for the welfare of the animal, or whether the radiation and other procedural aspects will cause the animal's life to end earlier. These are just some of the many concerns people have with animal cloning.
Animal cloning
The ethics of animal cloning is heavily debated across a range of fields. Concerns include the health and well-being of the animals, long-term side effects, obstetrical complications that occur during cloning, environmental impacts, the use of clones in farming and in repopulating endangered species, and the use of clones for other research, specifically in the medical and pharmaceutical fields. Many of these concerns have only recently been discussed, owing to the advancement of cloning technology in the past decade; the first mammal cloned from an adult cell, a sheep known as Dolly, was born only twenty-five years ago, in 1996.
Facebook and Meta's ethical concerns
Facebook, whose parent company is Meta Platforms, has been one of the top social networking sites from the early 2010s into the present (2022). It has faced a variety of issues, ranging from privacy concerns and the question of who bears responsibility for unhealthy social interactions and other unhealthy behaviors, to the deliberate enabling of misinformation on the platform. Recent issues such as Facebook data leaks and the circulation of fake news highlight the downside of social media in the wrong hands. The following are a few examples of ethical concerns raised over the years in relation to Facebook.
Federal Trade Commission v. Facebook
In a Forbes interview conducted on October 22, 2021, by Curt Steinhorst, a contributor for Forbes, Michael Thate, an ethics teacher at Princeton University, asserts that in addition to the Federal Trade Commission v. Facebook ruling determining that Facebook had engaged in unethical antitrust behavior in acquiring its competing social media platforms Instagram and WhatsApp, "Facebook developed an algorithm to capture user attention and information into a platform that they knew promoted unhealthy behaviors." First, the unethical acquisition of smaller competing social media platforms restricts free-market practices and limits users' choices of, at least in this case, which social media sites to access. In addition to the antitrust issue, Thate considers the promotion of unhealthy behaviors and lifestyles to increase user engagement an ethical concern, since users are given a choice between maintaining a healthy lifestyle and engaging with a platform designed to keep them on the site and actively engaged regardless of the impact on their well-being.
Facebook's algorithm
On October 4, 2021, CBS News interviewed Frances Haugen, a whistleblower and former employee of Facebook, who revealed that Facebook was aware of various concerning ethical practices. "The complaints say Facebook's own research shows that it amplifies hate, misinformation, and political unrest—but the company hides what it knows. One complaint alleges that Facebook's Instagram harms teenage girls." These unethical practices were all employed to, yet again, promote increased user engagement with the platform. Haugen stated in the interview: "The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests, like making more money." An article written on February 10, 2021, by Paige Cooper outlines how Facebook's algorithm has changed over the years, highlighting the changes Facebook made to prioritize more emotional interactions on the site.
Facebook–Cambridge Analytica
Through the 2010s, the British political consulting firm Cambridge Analytica, working with Facebook, gathered information and personal data on upwards of 87 million nonconsenting users, as stated in a New York Times article titled "Cambridge Analytica and Facebook: The Scandal and the Fallout So Far". The illegally obtained data was then utilized in Donald Trump's 2016 presidential campaign to help develop personalized ads and campaign messages based on the data provided by Cambridge Analytica. In an interview published by The Guardian on March 18, 2021, Cambridge Analytica whistleblower Christopher Wylie asserted that the data given to him had been legally obtained and that he and various other academic analysts were unaware of how the data used in the psychological profiles had been obtained.
Areas of technoethical inquiry
Biotech ethics
Biotech ethics is concerned with ethical dilemmas surrounding the use of biotechnologies in fields including medical research, health care, and industrial applications. Topics such as cloning ethics, e-health ethics, telemedicine ethics, genetics ethics, neuroethics, and sport and nutrition ethics fall into this category; examples of specific issues include the debates surrounding euthanasia and reproductive rights.
Telemedicine is a medical technology used to advance clinical care through video conferencing, text messaging, and applications. Along with its advantages, there are concerns about pitfalls such as threats to patient privacy and compliance with HIPAA regulations. Cyberattacks in healthcare are a significant concern when implementing technology, because measures need to be in place to keep patient privacy secure. One type of cyberattack is a medical device hijack, also known as a medjack, in which hackers can alter the functionality of implants and expose patients' medical histories. When implementing technology, it is important to check for weaknesses that create vulnerability to hacking.
Ethics also becomes a key factor when considering the use of artificial intelligence in health care. AI is not seen as a neutral tool, and policies have been set in place to ensure it is not misused under human bias. Although AI is a valuable tool in medicine, current ethical policies are not yet adequate to accommodate AI's multi-disciplinary nature. AI in healthcare is not used to make clinical decisions; however, it can provide assistance in surgeries, imaging, and other areas.
Technoethics and cognition
This area of technoethical inquiry is concerned with technology's relation to the human mind, artificial agents, and society. Topics of study that would fit into this category would be artificial morality and moral agents, moral outsourcing, technoethical systems and techno-addiction.
An artificial agent describes any type of technology that is created to act as an agent, either of its own power or on behalf of another agent. An artificial agent may try to advance its own goals or those of another agent.
Mass surveillance
The ethics of mass surveillance has become a widely discussed topic in the twenty-first century, especially in the United States after the tragedy of 9/11. Areas of ethical concern include privacy, discrimination, trust in government, infringement of government-granted rights and basic human rights, conflicts of interest, stigmatization, and obtrusiveness. Between 2001 and 2021, many of these topics became the focus of new laws throughout the world. Shortly after 9/11, as the United States began to fear that more terrorist attacks could occur on American soil, the Patriot Act, passed on October 26, 2001, became one of the first major mass surveillance laws in the United States. Years later, Europe followed suit with its own mass surveillance laws after a string of terrorist attacks. After the 2015 terrorist attacks in France, the French government passed the International Electronic Communications Law, which recognized the power of the French Directorate-General for External Security to collect, monitor, and intercept all communications sent or received on French territory. In 2016, the United Kingdom passed the Investigatory Powers Act, allowing the GCHQ to engage in acquisition, interception, and equipment interference of communications and systems sent by anyone on British territory. Also in 2016, Germany passed a similar law, the Communications Intelligence Gathering Act, which allowed the German intelligence community to gather foreign nationals' communications while they were in German territory. In 2021, Australia passed the Surveillance Legislation Amendment, which granted the Australian Federal Police and the Australian Criminal Intelligence Commission the right to modify or delete the data of suspected offenders, collect intelligence on criminal networks, and forcefully break into a suspected offender's online account. After these laws were passed across Europe, and later in Australia, protests arose as citizens of each country felt the laws infringed on their privacy rights.
Two years after the Investigatory Powers Act of 2016 was passed in the United Kingdom, the English High Court ruled that the act would have to be rewritten. The High Court found the law incompatible with EU law since the law "authorizes the UK government to issue retention notices with no prior independent checks, such as review by a court or other body, and for the purpose of investigating crime that is not "serious crime"; and (2) subsequent access to any retained data was similarly not subject to any independent authorization and not limited to the purpose of combating "serious crime". The challenge originated with the human rights group Liberty, which began fighting the act shortly after it was enacted, arguing that it violated UK citizens' right to privacy. In 2020, four years after Germany enacted the Communications Intelligence Gathering Act, that law also made its way to court for review, having received heavy backlash from members of the German public and from non-German citizens, with many complaints dwelling on the same issue of the privacy of both German and non-German citizens. After a two-day trial, Germany's high court ruled that the law was unconstitutional and gave the German parliament until 2021 to amend it.
In 2020, during the height of the COVID-19 pandemic, the ethics of public health surveillance took center stage. The purpose of this mass surveillance was to collect data on the transmission of COVID-19. However, many individuals around the world felt this form of surveillance infringed on their privacy and basic human rights. Another concern was the lack of government or institutional policy documents addressing the ethical challenges of using mass surveillance to track a pandemic's transmission rate. This surveillance was used on a far larger scale than some of the other acts passed in recent years, with a more global focus driven by the desire to halt the transmission of COVID-19. For example, on March 16, 2020, the Israeli government approved emergency regulations allowing mass location tracking of citizens to slow the spread of the disease. Singapore and Taiwan did something similar, using mass surveillance to allow their law-enforcement agencies to monitor quarantine orders.
Technoethics and society
This field is concerned with the uses of technology to ethically regulate aspects of a society. For example: digital property ethics, social theory, law, science, organizational ethics and global ethics.
Digital property rights (DPR) refer to individuals' rights over information available online, such as email accounts, online website accounts, posts, blogs, pictures, and other digital media. Digital property rights can be regulated and protected by making the digital property tamper-proof, by adding legal clauses to the digital properties, and by limiting the sharing of software code.
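A minimal sketch, using Python's standard hashlib library, of one way tamper-proofing can be approached: publishing a cryptographic digest of a digital property so that any later alteration is detectable. Real rights-management systems are far more involved; the file contents here are placeholders:

    import hashlib

    def fingerprint(data: bytes) -> str:
        # SHA-256 digest: changes completely if the content changes at all.
        return hashlib.sha256(data).hexdigest()

    # The owner records the digest when publishing the digital property.
    original = b"My blog post, my photo bytes, my licensed media file ..."
    published_digest = fingerprint(original)

    # Later, anyone can check whether a circulating copy has been altered.
    intact_copy = b"My blog post, my photo bytes, my licensed media file ..."
    tampered_copy = b"My blog post, someone else's edits ..."

    print(fingerprint(intact_copy) == published_digest)    # True: unaltered
    print(fingerprint(tampered_copy) == published_digest)  # False: tampering detected

A published digest does not prevent copying, but it gives owners and courts a verifiable way to show that a work has or has not been altered.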
Social theory refers to how societies change and develop over time in terms of behavior and explanation of behaviors. Technology has a great impact on social change. As technology evolves and upgrades, human interaction goes along with the changes. "Technological theory suggests that technology is an important factor for social change, and it would initiate changes in the arrangement of social relationships".
Organizational ethics refers to the code of conduct and the way an organization responds to stimuli. Technoethics plays a role in organizational ethics because technology can be embedded and incorporated in many different aspects of ethical values.
Technofeminism
Technoethics has traditionally concerned itself with society as a general group, making no distinctions between the genders, but it also considers technological effects and influences on each gender individually. This is an important consideration, as some technologies are created for use by a specific gender, including birth control, abortion, fertility treatments, and Viagra. Feminists have had a significant influence on the prominence and development of reproductive technologies. Technoethical inquiry must examine these technologies' effects on the intended gender while also considering their influence on the other gender. Another dimension of technofeminism concerns female involvement in technological development: women's participation in the field of technology has broadened society's understanding of how technology affects the female experience in society.
Information and communication technoethics
Information and communication technoethics is "concerned with ethical issues and responsibilities arising when dealing with information and communication technology in the realm of communication." This field is related to internet ethics, rational and ethical decision making models, and information ethics. A major area of interest is the convergence of technologies: as technologies become more interdependent and provide people with multiple ways of accessing the same information, they transform society and create new ethical dilemmas. This is particularly evident in the realms of the internet. In recent years, users have had an unprecedented position of power in creating and disseminating news and other information globally via social networking; the concept of "citizen journalism" primarily relates to this. As Ward writes, developments in the media have led to open media ethics, contributing to the rise of citizen journalism.
In cases such as the 2004 Indian Ocean tsunami or the 2011 Arab Spring movements, citizen journalists were seen as significant sources of facts and information about the events. These were re-broadcast by news outlets and, more importantly, re-circulated by and to other internet users. As Jay David Bolter and Richard Grusin state in their book Remediation: Understanding New Media (1999): "The liveness of the Web is a refashioned version of the liveness of broadcast television". However, it is commonly political events (such as the 'Occupy' movements or the Iranian elections of 2009) that tend to raise ethical questions and concerns. In the latter example, the Iranian government made efforts to censor and prohibit citizen journalists from spreading news of internal events to the outside world. This occurrence raised questions about the importance of the spread of crucial information regarding the issue, and about the source from which it came (citizen journalists, government authorities, etc.). This demonstrates how the internet "enables new forms of human action and expression [but] at the same time it disables [it]". Information and communication technoethics also identifies ways to develop ethical frameworks of research structures in order to capture the essence of new technologies.
Educational and professional technoethics
Technoethical inquiry in the field of education examines how technology impacts the roles and values of education in society. This field considers changes in student values and behavior related to technology, including access to inappropriate material in schools, online plagiarism using material copied directly from the internet, or purchasing papers from online resources and passing them off as the student's own work. Educational technoethics also examines the digital divide that exists between educational institutions in developed and developing countries or between unequally-funded institutions within the same country: for instance, some schools offer students access to online material, while others do not. Professional technoethics focuses on the issue of ethical responsibility for those who work with technology within a professional setting, including engineers, medical professionals, and so on. Efforts have been made to delineate ethical principles in professions such as computer programming (see programming ethics).
Environmental and engineering technoethics
Environmental technoethics originates from the 1960s and 1970s' interest in environment and nature. The field focuses on the human use of technologies that may impact the environment; areas of concern include transport, mining, and sanitation. Engineering technoethics emerged in the late 19th century. As the Industrial Revolution triggered a demand for expertise in engineering and a need to improve engineering standards, societies began to develop codes of professional ethics and associations to enforce these codes. Ethical inquiry into engineering examines the "responsibilities of engineers combining insights from both philosophy and the social sciences."
Technoethical assessment and design
A technoethical assessment (TEA) is an interdisciplinary, systems-based approach to assessing ethical dilemmas related to technology. TEAs aim to guide actions related to technology in an ethical direction by advancing knowledge of technologies and their effects; successful TEAs thus produce a shared understanding of knowledge, values, priorities, and other ethical aspects associated with technology. TEAs involve five key steps:
Evaluate the intended ends and possible side effects of the technology in order to discern its overall value (interest).
Compare the means and intended ends in terms of technical and non-technical (moral and social) aspects.
Reject those actions where the output (overall value) does not balance the input in terms of efficiency and fairness.
Consider perspectives from all stakeholder groups.
Examine technological relations at a variety of levels (e.g. biological, physical, psychological, social, and environmental).
Technoethical design (TED) refers to the process of designing technologies in an ethical manner, involving stakeholders in participatory design efforts, revealing hidden or tacit technological relations, and investigating what technologies make possible and how people will use them. TED involves the following four steps:
Ensure that the components and relations within the technological system are explicitly understood by those in the design context.
Perform a TEA to identify relevant technical knowledge.
Optimize the technological system in order to meet stakeholders' and affected individuals' needs and interests.
Consult with representatives of stakeholder and affected groups in order to establish consensus on key design issues.
Both TEA and TED rely on systems theory, a perspective that conceptualizes society in terms of events and occurrences resulting from investigating system operations. Systems theory assumes that complex ideas can be studied as systems with common designs and properties which can be further explained using systems methodology. The field of technoethics regards technologies as self-producing systems that draw upon external resources and maintain themselves through knowledge creation; these systems, of which humans are a part, are constantly in flux as relations between technology, nature, and society change. TEA attempts to elicit the knowledge, goals, inputs, and outputs that comprise technological systems. Similarly, TED enables designers to recognize technology's complexity and power, to include facts and values in their designs, and to contextualize technology in terms of what it makes possible and what makes it possible.
Organizational technoethics
Recent advances in technology and their ability to transmit vast amounts of information in a short amount of time have changed the way information is shared among co-workers and managers throughout organizations across the globe. Starting in the 1980s with information and communications technologies (ICTs), organizations have seen an increase in the amount of technology that they rely on to communicate within and outside of the workplace. However, these implementations of technology in the workplace create various ethical concerns and in turn a need for further analysis of technology in organizations. As a result of this growing trend, a subsection of technoethics known as organizational technoethics has emerged to address these issues.
Key scholarly contributions
Key scholarly contributions linking ethics, technology, and society can be found in a number of seminal works:
The Imperative of Responsibility: In Search of Ethics for the Technological Age (Hans Jonas, 1979).
On Technology, Medicine and Ethics (Hans Jonas, 1985).
The Real World of Technology (Franklin, 1990).
Thinking Ethics in Technology: Hennebach Lectures and Papers, 1995-1996 (Mitcham, 1997).
Technology and the Good Life (Higgs, Light & Strong, 2000).
Readings in the Philosophy of Technology (Kaplan, 2004).
Ethics and technology: Ethical issues in an age of information and communication technology (Tavani, 2004).
This scholarly attention to ethical issues arising from technological transformations of work and life has helped give rise to a number of key areas (or branches) of technoethical inquiry under various research programs (i.e., computer ethics, engineering ethics, environmental technoethics, biotech ethics, nanoethics, educational technoethics, information and communication ethics, media ethics, and Internet ethics).
See also
Algorithmic bias
Democratic transhumanism
Engineering ethics
Ethics of artificial intelligence
Information ethics
Information privacy
Organizational technoethics
Philosophy of technology
Robotic governance
Techno-progressivism
Technocriticism
References
Hans Jonas, The Imperative of Responsibility: In Search of Ethics for the Technological Age (1979).
Hans Jonas, On Technology, Medicine and Ethics (1985).
Melanie G. Snyders, CyberEthics and Internet Downloads: An Age by Age Guide to Teaching Children what they need to know (2005).
Further reading
General
Kristin Shrader-Frechette. (2003). "Technology and Ethics," in Philosophy of Technology: The Technological Condition, Oxford: Blackwell Publishing.
Eugene Mirman. (2009) "The Will To Whatevs: A Guide to Modern Life." Harper Perennial.
Daniel A. Vallero. (2007) "Biomedical Ethics for Engineers: Ethics and Decision Making in Biomedical and Biosystem Engineering." Amsterdam: Academic Press.
Ethics, technology and engineering
Fleddermann, C.B. (2011). Engineering Ethics. Prentice Hall. 4th edition.
Harris, C.E., M.S. Pritchard, and M.J. Rabins (2008). Engineering Ethics: Concepts and Cases. Wadsworth Publishing, 4th edition.
Hauser-Katenberg, G., W.E. Katenberg, and D. Norris (2003). "Towards Emergent Ethical Action and the Culture of Engineering," Science and Engineering Ethics, 9, 377–387.
Huesemann M.H., and J.A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 14, "Critical Science and Social Responsibility", New Society Publishers.
Layton, E. (1986). The Revolt of the Engineers: Social Responsibility and the American Engineering Profession. The Johns Hopkins University Press.
Martin, M.W., and R. Schinzinger (2004). Ethics in Engineering. McGraw-Hill. 4th edition.
Peterson, M. (2017). The Ethics of Technology: A Geometric Analysis of Five Moral Principles. Oxford University Press.
Mitcham, C. (1984). Thinking through technology, the path between engineering and philosophy. Chicago: The University of Chicago Press.
Van de Poel, I., and L. Royakkers (2011). Ethics, Technology, and Engineering: An Introduction. Wiley-Blackwell.
Education and technology
Marga, A. (2004). "University Reforms in Europe: Some Ethical Considerations," Higher Education in Europe, Vol. 79, No. 3, pp. 432–820.
External links
National Academies of Engineering's Center for Engineering, Ethics, and Society
Stanford Law School's Center for Internet and Society
California Polytechnic State University's Ethics + Emerging Sciences Group
University of Notre Dame's Reilly Center for Science, Technology, and Values
Arizona State University's Lincoln Center for Applied Ethics
Santa Clara University's Markkula Center for Applied Ethics
Centre for Applied Philosophy and Public Ethics, Australia
Yale University's Interdisciplinary Center for Bioethics
Case Western Reserve University's Inamori Center for Ethics and Excellence
University of Delaware's Center for Science, Ethics, and Public Policy
University of Oxford's Future of Humanity Institute
UNESCO - Ethics of Science and Technology
4TU.Centre for Ethics and Technology
Cyber Crime
Journals
Stanford Encyclopedia of Philosophy
Journal of Ethics and Social Philosophy
Philosophy and Technology
Ethics and Information Technology
Journal of Responsible Innovation
Technology in Society
Minds and Machines
Journal of Information, Communication and Ethics in Society
Organizations
Ethics and Emerging Sciences Group
W. Maurice Centre for Applied Ethics
United Nations Educational, Scientific and Cultural Organization (UNESCO)
Institute for Ethics in Artificial Intelligence
Institute for Ethics and Emerging Technologies
Institute for Ethics in AI
Technoethics
Ahmad Al Khabaz vs. Dawson College
Aaron Swartz case
Bagheri, A. (2011). The Impact of the UNESCO Declaration in Asian and Global Bioethics. Asian Bioethics Review, Vol. 3(2), 52–64.
Bolter, J. D., Grusin, R., & Grusin, R. A. (2000). Remediation: Understanding new media. MIT Press.
Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. Chicago: University of Chicago Press.
Coyne, R., 1995, Designing information technology in the postmodern age: From method to metaphor. Cambridge MA: MIT Press.
Castells, M. (2000). The rise of the network society. The information age: economy, society and culture (Vol. 1). Malden, MA: Blackwell.
Canada Foundation for Innovation: www.innovation.ca
Puig de la Bellacasa, M. (2017). Matters of care: speculative ethics in more than human worlds. Minneapolis: University of Minnesota Press.
Dreyfus, H.L., 1999, "Anonymity versus commitment: The dangers of education on the internet," Ethics and Information Technology, 1/1, pp. 15–20.
Gert, Bernard. 1999, "Common Morality and Computing," Ethics and Information Technology, 1/1, 57–64.
Fleddermann, C.B. (2011). Engineering Ethics. Prentice Hall. 4th edition.
Harris, C.E., M.S. Pritchard, and M.J. Rabins (2008). Engineering Ethics: Concepts and Cases. Wadsworth Publishing, 4th edition.
Heidegger, M., 1977, The Question Concerning Technology and Other Essays, New York: Harper Torchbooks.
Huesemann M.H., and J.A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 14, "Critical Science and Social Responsibility", New Society Publishers, , 464 pp.
Ihde, D. 1990, Technology and the Lifeworld: From garden to earth. Bloomington and Indianapolis: Indiana University Press.
Jonas, H. (1979). The Imperative of Responsibility: In Search of Ethics for the Technological Age, Chicago: Chicago University Press.
Jonas, H. (1985). On technology, medicine and ethics. Chicago: Chicago University Press.
Levinas, E., 1991, Otherwise than Being or Beyond Essence, Dordrecht: Kluwer Academic Publishers.
Luppicini, R., (2008). The emerging field of Technoethics. In R. Luppicini and R. Adell (eds.). Handbook of Research on Technoethics (pp. 49–51). Hershey: Idea Group Publishing.
Luppicini, R., (2010). Technoethics and the Evolving Knowledge Society: Ethical Issues in Technological Design, Research, Development and Innovation. Hershey, PA: IGI Global.
Martin, M.W., and R. Schinzinger (2004). Ethics in Engineering. McGraw-Hill. 4th edition.
Mitcham, C. (1994). Thinking through technology. University of Chicago Press.
Mitcham, C. (1997). Thinking ethics in technology: Hennebach lectures and papers, 1995–1996. Golden, CO: Colorado School of Mines Press.
Mitcham, C. (2005). Encyclopedia of science, technology, and ethics. Detroit: Macmillan Reference.
Sullins, J. (2010). RoboWarfare: can robots be more ethical than humans on the battlefield. Journal of Ethics and Information Technology, Vol. 12(3), 263–275.
Tavani, H. T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: John Wiley & Sons.
Turkle, S. 1996, "Parallel lives: Working on identity in virtual space." in D. Grodin & T. R. Lindlof, (eds.), Constructing the self in a mediated world, London: Sage, 156–175.
Van de Poel, I., and L. Royakkers (2011). Ethics, Technology, and Engineering: An Introduction. Wiley-Blackwell.
Management cybernetics
Informal fallacy
Informal fallacies are a type of incorrect argument in natural language. The source of the error lies not only in the form of the argument, as is the case for formal fallacies, but can also lie in its content and context. Fallacies, despite being incorrect, usually appear to be correct and can thereby seduce people into accepting and using them. These misleading appearances are often connected to various aspects of natural language, such as ambiguous or vague expressions, or the assumption of implicit premises instead of making them explicit.
Traditionally, a great number of informal fallacies have been identified, including the fallacy of equivocation, the fallacy of amphiboly, the fallacies of composition and division, the false dilemma, the fallacy of begging the question, the ad hominem fallacy and the appeal to ignorance. There is no general agreement as to how the various fallacies are to be grouped into categories. One approach sometimes found in the literature is to distinguish between fallacies of ambiguity, which have their root in ambiguous or vague language, fallacies of presumption, which involve false or unjustified premises, and fallacies of relevance, in which the premises are not relevant to the conclusion despite appearances otherwise.
Some approaches in contemporary philosophy consider additional factors besides content and context. As a result, some arguments traditionally viewed as informal fallacies are not considered fallacious from their perspective, or at least not in all cases. One such framework proposed is the dialogical approach, which conceives arguments as moves in a dialogue-game aimed at rationally persuading the other person. This game is governed by various rules. Fallacies are defined as violations of the dialogue rules impeding the progress of the dialogue. The epistemic approach constitutes another framework. Its core idea is that arguments play an epistemic role: they aim to expand our knowledge by providing a bridge from already justified beliefs to not yet justified beliefs. Fallacies are arguments that fall short of this goal by breaking a rule of epistemic justification. A particular form of the epistemic framework is the Bayesian approach, where the epistemic norms are given by the laws of probability, which our degrees of belief should track.
The study of fallacies aims at providing an account for evaluating and criticizing arguments. This involves both a descriptive account of what constitutes an argument and a normative account of which arguments are good or bad. In philosophy, fallacies are usually seen as a form of bad argument and are discussed as such in this article. Another conception, more common in non-scholarly discourse, sees fallacies not as arguments but rather as false yet popular beliefs.
Traditional account
Informal fallacies are a form of incorrect argument in natural language. An argument is a series of propositions, called the premises, together with one more proposition, called the conclusion. The premises in correct arguments offer either deductive or defeasible support for the conclusion. The source of the error in incorrect arguments can be in the argument's form, content or context. If the error is only due to the form, it is considered a formal fallacy. Informal fallacies may also include formal errors but they primarily involve errors on the level of content and context. Informal fallacies are expressed in natural language. This brings with it various difficulties not faced when studying formal fallacies, like ambiguous terms, vague expressions or the premises being assumed implicitly rather than stated explicitly. Traditionally, a great number of informal fallacies have been listed, including the fallacy of equivocation, the fallacy of amphiboly, the fallacies of composition and division, the false dilemma, the fallacy of begging the question, the ad hominem fallacy or the appeal to ignorance. The traditional approach tries to account for these fallacies using the concepts and theses discussed in this section.
Arguments and fallacies
Only arguments can constitute a fallacy. Various erroneous expressions do not count as fallacies because no argument is made, e.g. because no reasons are cited or no assertion is made. The core idea of arguments is that the premises support the conclusion or that the conclusion follows from the premises. Deductively valid arguments offer the strongest form of support: for them, it is impossible for the conclusion to be false if all the premises are true. The premises in non-deductive arguments offer a certain degree of support for their conclusion but they are defeasible: it is possible for all the premises to be true and the conclusion to be false. Defeasible arguments may still be rationally compelling despite being fallible, so they do not automatically constitute fallacies. The premises of an argument may be seen as the foundation on which the conclusion is built. According to this analogy, two things can go wrong and turn an argument into a fallacy. It could be that the foundation is shaky. But even a solid foundation is not helpful if it does not provide support for the conclusion in question.
Traditionally, fallacies have been defined by three necessary conditions: "a fallacy (i) is an argument, (ii) that is invalid, and (iii) appears to be valid." This definition covers only formal fallacies, since it has deductive invalidity as a necessary condition. But it can easily be modified to include informal fallacies by replacing this condition with a more general term, like logical weakness or incorrect reasoning. The last clause includes a psychological element in referring to how the argument appears to the arguer. This clause is used to distinguish genuine fallacies from mere mistakes in reasoning, for example, due to carelessness. The idea is that fallacies have an alluring element that goes beyond mere carelessness by seducing us into committing the mistake, thereby explaining why they are committed in the first place. Some philosophers reject this appeal to appearances because the reference to psychology would complicate the investigation in various ways. One issue is that appearances differ from person to person, which would draw the social sciences into the investigation in order to determine which reference group of people to consult when defining fallacies. It has been suggested that, at its core, the study of fallacies is about normative aspects of arguments and not about their persuasive force, which is studied by empirical psychology instead.
Form, content, and context
The source of the error in incorrect arguments can lie in the argument's form, content, or context. The form or structure of an argument is also called its "rule of inference". The most well-known rule of inference is modus ponens, which states that given a premise of the form "If p then q" and another of the form "p", the conclusion "q" follows. Rules of inference are formal in that validity depends only on the structure or syntax of the premises and not on their content. So an argument based on modus ponens is valid no matter what propositional contents are used for "p" and "q".
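This formal character can be illustrated computationally. The following Python sketch (an editorial illustration, not part of the traditional account) checks the validity of modus ponens by brute force: over every assignment of truth values, the conclusion "q" comes out true whenever both premises are true, no matter what "p" and "q" stand for.

```python
# Minimal validity check for modus ponens over all truth assignments.
# A form is valid if the conclusion is true in every row of the truth
# table in which all premises are true.
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: "if a then b" is false only when a holds and b fails."""
    return (not a) or b

valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p  # keep only rows where both premises hold
)
print(valid)  # True: validity depends on form alone, not content
```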
The content of an argument is found on the level of its propositions: it is what is expressed in them. The source of many informal fallacies is found in a false premise. For example, a false dilemma is a fallacy based on a false disjunctive claim that oversimplifies reality by excluding viable alternatives.
The context of an argument refers to the situation in which it is used. Based on its context it may be intended to play different roles. One way for an argument to be fallacious is if it fails to perform the role it was supposed to play. The strawman fallacy, for example, involves inaccurately attributing a weak position to one's opponent and then refuting this position. The argument itself may be valid in that the refutation of the opposed position really is successful. The error is found on the level of the context since the opponent does not hold this position. This dependence on a context means that the same argument may be successful in another context: against an opponent who actually holds the strawman position.
Natural language and contrast to formal fallacies
Formal fallacies are deductively invalid arguments. They are of special interest to the field of formal logic but they can only account for a small number of the known fallacies, for example, for affirming the consequent or denying the antecedent. Many other fallacies used in natural language, e.g. in advertising or in politics, involve informal fallacies. For example, false dilemmas or begging the question are fallacies despite being deductively valid. They are studied by informal logic. Part of the difficulty in analyzing informal fallacies is due to the fact that their structure is not always clearly expressed in natural language. Sometimes certain keywords like "because", "therefore", "since" or "consequently" indicate which parts of the expression constitute the premises and which part the conclusion. But other times this distinction remains implicit and it is not always obvious which parts should be identified as the premises and the conclusions. Many informal arguments include enthymematic premises: premises that are not explicitly stated but tacitly presumed. In some domestic quarrels and political debates, it is not clear from the outset what the two parties are arguing about and which theses they intend to defend. Sometimes the function of the debate is more to clarify these preliminary points than to advance actual arguments.
The distinction between formal and informal fallacies is opposed by deductivists, who hold that deductive invalidity is the reason for all fallacies. One way to explain that some fallacies do not seem to be deductively invalid is to hold that they contain various hidden assumptions, as is common for natural language arguments. The idea is that apparent informal fallacies can be turned into formal fallacies by making all these assumptions explicit and thereby revealing the deductive invalidity. The claim that this is possible for all fallacies is not generally accepted. One requirement for a formal treatment is translating the arguments in question into the language of formal logic, a process known as "formalization". Often many of the subtleties of natural language have to be ignored in this process. Some bodies of knowledge can be formalized without much residue but others resist formalization. This is also true for many informal fallacies.
Contemporary approaches
The traditional approach to fallacies has received a lot of criticism in contemporary philosophy. This criticism is often based on the argument that some of the alleged fallacies are not fallacious at all, or at least not in all cases. It is argued that the traditional approach does not fully consider the aim of an argument in its particular context, and that a framework is required to evaluate whether an alleged fallacy is actually fallacious in a given case. It has been suggested that there may not be one single framework for evaluating all fallacies but only a manifold of ideals according to which a given argument may be good or bad.
Two prominent frameworks which have been proposed are the dialogical and epistemic approaches. The dialogical approach uses a game-theoretic framework to define arguments and sees fallacies as violations of the rules of the game. According to the epistemic approach, it is the goal of arguments to expand our knowledge by providing a bridge from already justified beliefs to not yet justified beliefs. Fallacies are arguments that fall short of this goal by breaking a rule of epistemic justification.
Dialogical
The dialogical approach sees arguments not simply as a series of premises together with a conclusion but as a speech act within a dialogue that aims to rationally persuade the other person of one's own position. A prominent version of this approach is defended by Douglas N. Walton. On his game-theoretic conception, a dialogue is a game between two players. At the outset, each player is committed to a set of propositions and has a conclusion they intend to prove. A player has won if they are able to persuade the opponent of their own conclusion. In this sense, dialogues can be characterized as "games of persuasion". The players can perform various moves that affect what they are committed to. In this framework, arguments are moves that take the opponent's commitments as premises and lead to the conclusion one is trying to prove. Since this is often not possible directly, various intermediary steps are taken, in which each argument takes a few steps towards one's intended conclusion by proposing an intermediary conclusion for the opponent to accept. This game is governed by various rules determining, among other things, which moves are allowed and when. The dialogical approach makes it possible to distinguish between positive arguments, which support one's own conclusion, and negative arguments, which deny the opponent's conclusion.
From this perspective, fallacies are defined as violations of the dialogue rules. They are "deceptively bad argument[s] that impede the progress of the dialogue". The strawman fallacy, for example, involves inaccurately attributing a weak position to one's opponent and then proving this position to lead to one's own conclusion. This mistake is not logical in the strict sense but dialogical: the conclusion may as well follow from these premises but the opponent does not hold these commitments. In some cases, it varies from game to game whether a certain move counts as a fallacy or not. For example, there are cases where the tu quoque "fallacy" is no fallacy at all. This argument, also known as appeal to hypocrisy, tries to discredit the opponent's argument by claiming that the opponent's behavior is inconsistent with the argument's conclusion. This move does not necessarily break the rules of the dialogue. Instead, it can reveal a weakness in the opponent's position by reflecting their criticism back onto them. This move shifts the burden of proof back to the opponent, thereby strengthening one's own position. But it still constitutes a fallacy if it is only used to evade an argument.
Epistemic
The core idea behind the epistemic approach is that arguments play an epistemic role: they aim to expand our knowledge by providing a bridge from already justified beliefs to not yet justified beliefs. Fallacies are arguments that fall short of this goal by breaking a rule of epistemic justification. This explains, for example, why arguments that are accidentally valid are still somehow flawed: because the arguer himself lacks a good reason to believe the conclusion.
The fallacy of begging the question, on this perspective, is a fallacy because it fails to expand our knowledge by providing independent justification for its conclusion. Instead, the conclusion is already assumed in one of its premises. A purely logical approach, on the other hand, fails to explain the fallacious nature of begging the question since the argument is deductively valid.
The Bayesian approach constitutes a special form of the epistemic approach. Bayesianism interprets degrees of belief as subjective probabilities, i.e. as the degree of certainty of the believer that the believed proposition is true. On this view, reasoning based on an argument can be interpreted as a process of changing one's degrees of belief, usually in response to new incoming information. Fallacies are probabilistically weak arguments, i.e. they have a low probability on the Bayesian model. Whether an argument constitutes a fallacy or not depends on the credences of the person evaluating the argument. This means that what constitutes a fallacy for one arguer may be a sound argument for another. This explains why, when trying to persuade someone, one should take the audience's beliefs into account. But it can also make sense of arguments independent of an audience, unlike the dialogical approach.
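This audience-relativity can be made concrete with Bayesian conditionalization, the standard probabilistic update rule. The sketch below is a minimal illustration with made-up numbers (the priors and likelihoods are assumptions, not values from the literature): the same evidence leaves two arguers with different priors at very different degrees of belief.

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where P(E) is expanded
# by the law of total probability over H and its negation.

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Identical evidence (likelihood 0.8 if H is true, 0.2 if false) moves a
# neutral arguer and a skeptical arguer to different posteriors:
for prior in (0.5, 0.05):
    print(prior, "->", round(posterior(prior, 0.8, 0.2), 3))
# 0.5 -> 0.8
# 0.05 -> 0.174
```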
This perspective is well suited for explaining why some slippery slope arguments constitute fallacies but others not. Slippery slope arguments argue against a certain proposal based on the fact that this proposal would bring with it a causal chain of events eventually leading to a bad outcome. But even if every step in this chain is relatively probable, probabilistic calculus may still reveal that the likelihood of all steps occurring together is quite small. In this case, the argument would constitute a fallacy. But slippery slope arguments are rationally justified if the associated probabilities are sufficiently high.
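The arithmetic behind this point is easy to make explicit. The following sketch uses illustrative step probabilities and assumes the steps are independent (an assumption real slippery slope arguments need not satisfy); it shows how a chain of individually likely steps can be jointly unlikely.

```python
# Joint probability that every step in a causal chain occurs,
# assuming the steps are probabilistically independent.

def chain_probability(step_probs):
    result = 1.0
    for p in step_probs:
        result *= p
    return result

# Ten steps, each 90% likely, make the full slide only ~35% likely:
print(round(chain_probability([0.9] * 10), 3))  # 0.349
```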
Types
A great variety of informal fallacies have been discussed in academic literature. There is controversy both concerning whether a given argument really constitutes a fallacy in all of its instances and concerning how the different fallacies should be grouped together into categories. The categorization here follows proposals commonly found in the academic literature in these or similar terms. It distinguishes between fallacies of ambiguity, which have their root in ambiguous or vague language, fallacies of presumption, which involve false or unjustified premises, and fallacies of relevance, in which the premises are not relevant to the conclusion despite appearances otherwise. Other categorizations have been proposed and some fallacies within this categorization could also be grouped in another category.
Fallacies of ambiguity
The source of the error for fallacies of ambiguity lies in the usage of language. This is due to the fact that many terms in natural language have ambiguous or vague meanings. Ambiguous terms have several meanings while vague terms have an unclear meaning. Fallacies of ambiguity often result in merely verbal disputes: the arguing parties have different topics in mind and thereby talk past each other without being aware of this. One way to avoid or solve these fallacies is to clarify language, e.g. by committing to definitions and by introducing new distinctions. Such reformulations may include a condensation of the original argument in order to make it easier to spot the erroneous step.
Fallacies of ambiguity are perhaps best exemplified by the fallacy of equivocation, in which the same term appears with two different meanings in the premises, for example:
Feathers are light. ("light" as "not heavy")
What is light cannot be dark. ("light" as "pale in color")
Therefore, feathers cannot be dark.
Equivocations are especially difficult to detect in cases where the two meanings are very closely related to each other.
The fallacy of amphiboly also involves ambiguity in meaning, but this ambiguity arises not on the level of individual terms but on the level of the sentence as a whole due to syntactic ambiguity, for example:
"The police were told to stop drinking on campus after midnight.
So, now they are able to respond to emergencies much better than before"
On one interpretation, the police are not allowed to drink alcohol. On another, it is now the job of the police to stop other people from drinking. The argument seems plausible on the former reading but fallacious on the latter reading.
The fallacies of division and composition are due to ambiguity of the term "all" and similar expressions. This term has both a collective and a distributive meaning. For example, the sentence "all the citizens are strong enough to resist a tyrant" may mean either that all together are strong enough (collective) or that each one individually is strong enough (distributive). The fallacy of division is committed if one infers from the sentence in the collective sense that one specific individual is strong enough. The fallacy of composition is committed if one infers from the fact that each member of a group has a property that the group as a whole has this property. For example, "[e]very member of the investigative team was an excellent researcher", therefore "[i]t was an excellent investigative team". Any form of fallaciously transferring a property from the whole to its parts or the other way round belongs to the category of fallacies of division and composition, even when linguistic ambiguity is not the cause.
Fallacies of presumption
Fallacies of presumption involve a false or unjustified premise but are often valid otherwise. This problematic premise can take different forms and the belief in it can be caused in different ways, corresponding to the various sub-categories in this field. These fallacies include the naturalistic fallacy, the moralistic fallacy and the intentional fallacy.
A false dilemma is a fallacy of presumption based on a false disjunctive claim that oversimplifies reality by excluding viable alternatives. For example, a false dilemma is committed when it is claimed that "Stacey spoke out against capitalism, therefore she must be a communist". One of the options excluded is that Stacey may be neither communist nor capitalist. Our liability to commit false dilemmas may be due to the tendency to simplify reality by ordering it through either-or statements.
For fallacies of generalization, the false premise is due to an erroneous generalization. In the case of the fallacy of sweeping generalization, a general rule is applied incorrectly to an exceptional case. For example, "[e]veryone has a right to his or her property. Therefore, even though Jones had been declared insane, you had no right to take his weapon away." The generalization, in this case, ignores that insanity is an exceptional case to which the general rights of property do not unrestrictedly apply. Hasty generalization, on the other hand, involves the converse mistake of drawing a universal conclusion based on a small number of instances. For example, "I've met two people in Nicaragua so far, and they were both nice to me. So, all people I will meet in Nicaragua will be nice to me".
Begging the question is a form of circular reasoning in which the conclusion is already assumed in the premises. Because of this, the premises are unable to provide independent support for the conclusion. For example, the statement "Green is the best color because it is the greenest of all colors", offers no independent reason besides the initial assumption for its conclusion. Detecting this fallacy can be difficult when a complex argument with many sub-arguments is involved, resulting in a large circle.
Fallacies of relevance
Fallacies of relevance involve premises that are not relevant to the conclusion despite appearances otherwise. They may succeed in persuading the audience nonetheless due to being emotionally loaded (for example: by playing on prejudice, pity or fear).
Ad hominem arguments constitute an important class among the fallacies of relevance. In them, the arguer tries to attack a thesis by attacking the person pronouncing this thesis instead of attacking the thesis itself. Rejecting a theory in physics because its author is Jewish, which was common in the German physics community in the early 1930s, is an example of the ad hominem fallacy. But not all ad hominem arguments constitute fallacies. It is a common and reasonable practice in court, for example, to defend oneself against an accusation by casting doubt on the reliability of the witnesses. The difference between fallacious and justified ad hominem arguments depends on the relevancy of the character of the attacked person to the thesis in question. The author's cultural heritage seems to have very little relevance in most cases for theories in physics, but the reliability of a witness in court is highly relevant for whether one is justified in believing their testimony. Whataboutism is a special form of the ad hominem fallacy that attempts to discredit an opponent's position by charging them with hypocrisy without directly refuting or disproving their argument. It is particularly associated with contemporary Russian propaganda.
Appeal to ignorance is another fallacy due to irrelevance. It is based on the premise that there is no proof for a certain claim. From this premise, the conclusion is drawn that this claim must therefore be false. For example, "Nobody has ever proved to me there's a God, so I know there is no God". Another version of the appeal to ignorance concludes from the absence of proof against a claim that this claim must be true.
Arguments from analogy are also susceptible to fallacies of relevance. An analogy is a comparison between two objects based on similarity. Arguments from analogy involve inferences from information about a known object (the source) to the features of an unknown object (the target) based on the similarity between the two objects. Arguments from analogy have the following form: a is similar to b and a has feature F, therefore b probably also has feature F. The soundness of such arguments depends on the relevance of this similarity to the inferred feature. Without this relevance, the argument constitutes a faulty or false analogy, for example: "If a child gets a new toy he or she will want to play with it; So, if a nation gets new weapons, it will want to use them".
Etymological fallacies may confuse older or "original" meanings of words with current semantic usage.
See also
List of fallacies
Fallacy
Formal fallacy
References
Arguments
Barriers to critical thinking
Fallacies
Informal arguments
Information
Philosophical logic
Rhetoric
Biocentrism (ethics)
Biocentrism (from Greek βίος bios, "life" and κέντρον kentron, "center"), in a political and ecological sense, as well as literally, is an ethical point of view that extends inherent value to all living things. It is an understanding of how the earth works, particularly as it relates to its biosphere or biodiversity. It stands in contrast to anthropocentrism, which centers on the value of humans. The related ecocentrism extends inherent value to the whole of nature.
Advocates of biocentrism often promote the preservation of biodiversity, animal rights, and environmental protection. The term has also been employed by advocates of "left biocentrism", which combines deep ecology with an "anti-industrial and anti-capitalist" position (according to David Orton et al.).
Definition
In its simplest terms, biocentrism is the belief that all living organisms, regardless of species, complexity, or traits, individually possess equal value and an equal right to live.
Usually, the term biocentrism encompasses all environmental ethics that "extend the status of moral object from human beings to all living things in nature". Biocentric ethics calls for a rethinking of the relationship between humans and nature. It states that nature does not exist simply to be used or consumed by humans, but that humans are simply one species amongst many, and that because we are part of an ecosystem, any actions which negatively affect the living systems of which we are a part adversely affect us as well, whether or not we maintain a biocentric worldview. Biocentrists observe that all species have inherent value, and that humans are not "superior" to other species in a moral or ethical sense.
The four main pillars of a biocentric outlook are:
Humans and all other species are members of Earth's community.
All species are part of a system of interdependence.
All living organisms pursue their own "good" in their own ways.
Human beings are not inherently superior to other living things.
The most important of these four pillars is likely the fourth: that human beings are not inherently superior to other living things. Biocentrists hold divergent views on many specific points; not all even subscribe to the abstract concept of value, which is why heavy emphasis is placed on this fourth pillar.
Relationship with animals and environment
Biocentrism views individual species as parts of the living biosphere. It observes the consequences of reducing biodiversity on both small and large scales and points to the inherent value all species have to the environment.
The environment is seen for what it is: the biosphere within which we live, and on the maintenance of whose diversity our health depends. From these observations the ethical points are raised.
History and development
Biocentric ethics differs from classical and traditional ethical thinking. Rather than focusing on strict moral rules, as in Classical ethics, it focuses on attitudes and character. In contrast with traditional ethics, it is nonhierarchical and gives priority to the natural world rather than to humankind exclusively.
Biocentric ethics includes Albert Schweitzer's ethics of "Reverence for Life", Peter Singer's ethics of Animal Liberation and Paul W. Taylor's ethics of biocentric egalitarianism.
Albert Schweitzer's "reverence for life" principle was a precursor of modern biocentric ethics. In contrast with traditional ethics, the ethics of "reverence for life" denies any distinction between "high and low" or "valuable and less valuable" life forms, dismissing such categorization as arbitrary and subjective. Conventional ethics concerned itself exclusively with human beings—that is to say, morality applied only to interpersonal relationships—whereas Schweitzer's ethical philosophy introduced a "depth, energy, and function that differ[s] from the ethics that merely involved humans". "Reverence for life" was a "new ethics, because it is not only an extension of ethics, but also a transformation of the nature of ethics".
Similarly, Peter Singer argues that non-human animals deserve the same equality of consideration that we extend to human beings. His argument is roughly as follows:
Membership in the species Homo sapiens is the only criterion of moral importance that includes all humans and excludes all non-humans.
Using membership in the species Homo sapiens as a criterion of moral importance is completely arbitrary.
Of the remaining criteria we might consider, only sentience is a plausible criterion of moral importance.
Using sentience as a criterion of moral importance entails that we extend the same basic moral consideration (i.e. "basic principle of equality") to other sentient creatures that we do to human beings.
Therefore, we ought to extend to animals the same equality of consideration that we extend to human beings.
Singer's work, while notable in the canon of environmental ethics, should not be considered as fully biocentric. Singer's ethics is extended from humans to nonhuman animals because the criterion for moral inclusion (sentience) is found in both humans and nonhuman animals, thus it would be arbitrary to deny it to nonhuman animals simply because they were not human. However, not all biological entities are sentient, consider: algae, plants and trees, fungi, lichens, mollusks, protozoa, for example. For an ethical theory to be biocentric, it must have a reason for extending ethical inclusion to the entire biosphere (as in Taylor and Schweitzer). The requirement for environmental ethics to move beyond sentience as criteria for inclusion in the moral realm is discussed in Tom Regan's 1981 paper "The Nature and Possibility of an Environmental Ethic".
Biocentrism is most commonly associated with the work of Paul W. Taylor, especially his book Respect for Nature: A Theory of Environmental Ethics (1986). Taylor maintains that biocentrism is an "attitude of respect for nature", whereby one attempts to make an effort to live one's life in a way that respects the welfare and inherent worth of all living creatures. Taylor states that:
Humans are members of a community of life along with all other species, and on equal terms.
This community consists of a system of interdependence between all members, both physically, and in terms of relationships with other species.
Every organism is a "teleological centre of life", that is, each organism has a purpose and a reason for being, which is inherently "good" or "valuable".
Humans are not inherently superior to other species.
Historian Donald Worster traces today's biocentric philosophies, which he sees as part of a recovery of a sense of kinship between man and nature, to the reaction by the British intelligentsia of the Victorian era against the Christian ethic of dominion over nature. He has pointed to Charles Darwin as an important spokesman for the biocentric view in ecological thought and quotes from Darwin's Notebook on Transmutation of Species (1837): "If we choose to let conjecture run wild, then animals, our fellow brethren in pain, diseases, death, suffering and famine—our slaves in the most laborious works, our companions in our amusement—they may partake of our origin in one common ancestor—we may be all netted together."
In 1859, Charles Darwin published his book On the Origin of Species. This publication sparked the beginning of biocentrist views by introducing evolution and "its removal of humans from their supernatural origins and placement into the framework of natural laws".
The work of Aldo Leopold has also been associated with biocentrism. The essay "The Land Ethic" in Leopold's book A Sand County Almanac (1949) points out that although throughout history women and slaves have been considered property, all people have now been granted rights and freedoms. Leopold notes that today land is still considered property as people once were. He asserts that ethics should be extended to the land as "an evolutionary possibility and an ecological necessity". He argues that while people's instincts encourage them to compete with others, their ethics encourage them to co-operate with others. He suggests that "the land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land". In a sense this attitude would encourage humans to co-operate with the land rather than compete with it.
Outside of formal philosophical works, biocentric thought is common among pre-colonial tribal peoples who knew no world other than the natural world.
In law
The paradigm of biocentrism and the values that it promotes are beginning to be used in law.
In recent years (as of 2011), cities in Maine, Pennsylvania, New Hampshire and Virginia have adopted laws that protect the rights of nature. The purpose of these laws is to prevent the degradation of nature, especially by corporations who may want to exploit natural resources and land space, and to also use the environment as a dumping ground for toxic waste.
The first country to include rights of nature in its constitution is Ecuador (see 2008 Constitution of Ecuador). Article 71 states that nature "has the right to integral respect for its existence and for the maintenance and regeneration of its life cycles, structure, functions and evolutionary processes".
In religion
Islam
In Islam, biocentric ethics stem from the belief that all of creation belongs to Allah (God), not humans, and to assume that non-human animals and plants exist merely to benefit humankind leads to environmental destruction and misuse. As all living organisms exist to praise God, human destruction of other living things prevents the earth's natural and subtle means of praising God. The Qur'an acknowledges that humans are not the only all-important creatures and emphasizes a respect for nature. Muhammad was once asked whether there would be a reward for those who show charity to nature and animals, to which he replied, "for charity shown to each creature with a wet heart [i.e. that is alive], there is a reward."
Hinduism
Hinduism contains many elements of biocentrism. In Hinduism, humans have no special authority over other creatures, and all living things have souls ('atman'). Brahman (God) is the "efficient cause" and Prakrti (nature), is the "material cause" of the universe. However, Brahman and Prakrti are not considered truly divided: "They are one in [sic] the same, or perhaps better stated, they are the one in the many and the many in the one."
However, while Hinduism does not give humans the same direct authority over nature that the Judeo-Christian-Islamic god grants, humans are subject to a "higher and more authoritative responsibility for creation". The most important aspect of this is the doctrine of Ahimsa (non-violence). The Yājñavalkya Smṛti warns, "the wicked person who kills animals which are protected has to live in hell fire for the days equal to the number of hairs on the body of that animal". The essential aspect of this doctrine is the belief that the Supreme Being incarnates into the forms of various species. The Hindu belief in Saṃsāra (the cycle of life, death and rebirth) encompasses reincarnation into non-human forms. It is believed that one lives 8,400,000 lifetimes before one becomes a human. Each species is in this process of samsara until one attains moksha (liberation).
Another doctrinal source for the equal treatment of all life is found in the Rigveda. The Rigveda states that trees and plants possess divine healing properties. It is still popularly believed that every tree has a Vriksa-devata (a tree deity). Trees are ritually worshiped through prayer, offerings, and the sacred thread ceremony. Vriksa-devatas are worshiped as manifestations of the Divine. Tree planting is considered a religious duty.
Jainism
The Jaina tradition exists in tandem with Hinduism and shares many of its biocentric elements.
Ahimsa (non-violence), the central teaching of Jainism, means more than not hurting other humans. It means intending not to cause physical, mental or spiritual harm to any part of nature. In the words of Mahavira: 'You are that which you wish to harm.' Compassion is a pillar of non-violence. Jainism encourages people to practice an attitude of compassion towards all life.
The principle of interdependence is also very important in Jainism. This states that all of nature is bound together, and that "if one does not care for nature one does not care for oneself".
Another essential Jain teaching is self-restraint. Jainism discourages wasting the gifts of nature, and encourages its practitioners to reduce their needs as far as possible. Gandhi, a great proponent of Jainism, once stated "There is enough in this world for human needs, but not for human wants."
Buddhism
The Buddha's teachings encourage people "to live simply, to cherish tranquility, to appreciate the natural cycle of life". Buddhism emphasizes that everything in the universe affects everything else. "Nature is an ecosystem in which trees affect climate, the soil, and the animals, just as the climate affects the trees, the soil, the animals and so on. The ocean, the sky, the air are all interrelated, and interdependent—water is life and air is life."
Although this holistic approach is more ecocentric than biocentric, it is also biocentric, as it maintains that all living things are important and that humans are not above other creatures or nature. Buddhism teaches that "once we treat nature as our friend, to cherish it, then we can see the need to change from the attitude of dominating nature to an attitude of working with nature—we are an intrinsic part of all existence rather than seeing ourselves as in control of it."
Christianity
Within the Catholic tradition of Christian thought, Pope Benedict XVI noted that "the Church’s magisterium expresses grave misgivings about notions of the environment inspired by ecocentrism and biocentrism". This, he stated, was because "such notions eliminate the difference of identity and worth between the human person and other living things. In the name of a supposedly egalitarian vision of the "dignity" of all living creatures, such notions end up abolishing the distinctiveness and superior role of human beings."
Criticism
Biocentrism has faced criticism for a number of reasons. Some of this criticism grows out of the concern that biocentrism is an anti-human paradigm and that it will not hesitate to sacrifice human well-being for the greater good. Biocentrism has also been criticized for its individualism: it places too much emphasis on the importance of individual lives and neglects the importance of collective groups, such as ecosystems.
A more complex form of criticism focuses on the contradictions of biocentrism. Opposed to anthropocentrism, which sees humans as having a higher status than other species, biocentrism puts humans on a par with the rest of nature, and not above it. In his essay A Critique of Anti-Anthropocentric Biocentrism, Richard Watson suggests that if this is the case, then "Human ways—human culture—and human actions are as natural as the ways in which any other species of animals behaves". He goes on to suggest that if humans must change their behavior to refrain from disturbing and damaging the natural environment, then that results in setting humans apart from other species and assigning more power to them. This then takes us back to the basic beliefs of anthropocentrism. Watson also claims that the extinction of species is "Nature's way" and that if humans were to instigate their own self-destruction by exploiting the rest of nature, then so be it. Therefore, he suggests that the real reason humans should reduce their destructive behavior in relation to other species is not because we are equals but because the destruction of other species will also result in our own destruction. This view also brings us back to an anthropocentric perspective.
See also
Anarcho-primitivism
Animal cognition
Biodiversity
Biophilia hypothesis
Biotic ethics
Deep ecology
Earth jurisprudence
Ecoauthoritarianism
Ecocentrism
Eco-nationalism
Environmental philosophy
Gaia hypothesis
Gaia philosophy
Green anarchism
Green conservatism
Green libertarianism
Intrinsic value (animal ethics)
Neo-luddite
Painism
Primitivism
Religion and environmentalism
Sentiocentrism
Speciesism
Stewardship (theology)
References
Further reading
Coghlan et al. (2021). A bolder One Health: expanding the moral circle to optimize health for all. One Health Outlook.
Deep ecology
Environmental ethics
Pragmatic ethics
Pragmatic ethics is a theory of normative philosophical ethics and meta-ethics. Ethical pragmatists such as John Dewey believe that some societies have progressed morally in much the way they have attained progress in science. Scientists can pursue inquiry into the truth of a hypothesis and accept the hypothesis, in the sense that they act as though the hypothesis were true; nonetheless, they think that future generations can advance science, and thus future generations can refine or replace (at least some of) their accepted hypotheses. Similarly, ethical pragmatists think that norms, principles, and moral criteria are likely to be improved as a result of inquiry.
Martin Benjamin used Neurath's boat as an analogy for pragmatic ethics, likening the gradual change of ethical norms to the reconstruction of a ship at sea by its sailors.
Contrast with other normative theories
Much as it is appropriate for scientists to act as though a hypothesis were true despite expecting future inquiry to supplant it, ethical pragmatists acknowledge that it can be appropriate to practice a variety of other normative approaches (e.g. consequentialism, deontological ethics, and virtue ethics), yet acknowledge the need for mechanisms that allow people to advance beyond such approaches: a freedom of discourse that does not take any such theory as given. Thus, aimed at social innovation, the practice of pragmatic ethics supplements the practice of other normative approaches with what John Stuart Mill called "experiments in living".
Pragmatic ethics also differs from other normative approaches theoretically, according to Hugh LaFollette:
It focuses on society, rather than on lone individuals, as the entity that achieves morality. In Dewey's words, "all conduct is ... social".
It does not hold any known moral criteria as beyond potential for revision. Pragmatic ethics may be misunderstood as relativist, as failing to be objective, but pragmatists object to this critique on grounds that the same could be said of science, yet inductive and hypothetico-deductive science is our epistemological standard. Ethical pragmatists can maintain that their endeavor, like inquiry in science, is objective on the grounds that it converges towards something objective (a thesis called Peircean realism named after C. S. Peirce).
It allows that a moral judgment may be accepted in one age of a given society, even though it will cease to be accepted after that society morally progresses (or may already be rejected in another society). The change in moral judgments about slavery that led to the abolition of slavery is an example of the improvement of moral judgments through moral inquiry and advocacy.
LaFollette based his account of pragmatic ethics in the writings of John Dewey, but he also found aspects of pragmatic ethics in the texts of Aristotle, John Stuart Mill, and Martha Nussbaum.
Barry Kroll, commenting on the pragmatic ethics of Anthony Weston, noted that pragmatic ethics emphasizes the complexity of problems and the many different values that may be involved in an ethical issue or situation, without suppressing the conflicts between such values.
Criticisms
Pragmatic ethics has been criticized for conflating descriptive ethics with normative ethics, as describing the way people do make moral judgments rather than the way they should make them, or in other words for lacking normative standards. While some ethical pragmatists may have avoided the distinction between normative and descriptive truth, the theory of pragmatic ethics itself does not conflate them any more than science conflates truth about its subject matter with current opinion about it; in pragmatic ethics as in science, "truth emerges from the self-correction of error through a sufficiently long process of inquiry". A normative criterion that many pragmatists emphasize is the degree to which the process of social learning is deliberatively democratic: "while deontologists focus on moral duties and obligations and utilitarians on the greatest happiness of the greatest number, pragmatists concentrate on coexistence and cooperation".
Moral ecology
In Tim Dean's account, moral ecology is a variation of pragmatic ethics that additionally supposes that morality evolves like an ecosystem, and ethical practice should therefore include strategies analogous to those of ecosystem management, such as protecting a degree of moral diversity. The term "moral ecology" has been used since at least 1985 to imply a symbiosis whereby the viability of any existing moral approach would be diminished by the destruction of all alternative approaches. Dean theorized that humans take diverse approaches to morality, and such polymorphism gives humanity resilience against a wider range of situations and environments, which makes moral diversity a natural consequence of frequency-dependent selection.
See also
Applied ethics
Good reasons approach
Moral constructivism
Notes
References
Further reading
Metaethics
Normative ethics
Pragmatism
Ethical theories
Conventionalism
Conventionalism is the philosophical attitude that fundamental principles of a certain kind are grounded on (explicit or implicit) agreements in society, rather than on external reality. Unspoken rules play a key role in this view. Although this attitude is commonly held with respect to the rules of grammar, its application to the propositions of ethics, law, science, biology, mathematics, and logic is more controversial.
Linguistics
The debate on linguistic conventionalism goes back to Plato's Cratylus and the philosophy of Kumārila Bhaṭṭa.
It has been the standard position of modern linguistics since Ferdinand de Saussure's l'arbitraire du signe, but there have always been dissenting positions of phonosemantics, recently defended by Margaret Magnus and Vilayanur S. Ramachandran.
Philosophy of mathematics
The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as an a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with – possibly flawed – human intuitions about the physical world.
Epistemology
Conventionalism was adopted by logical positivists, chiefly A. J. Ayer and Carl Hempel, and extended to both mathematics and logic. To deny rationalism, Ayer sees two options for empiricism regarding the necessity of the truth of formal logic (and mathematics): 1) deny that they actually are necessary, and then account for why they only appear so, or 2) claim that the truths of logic and mathematics lack factual content – they are not "truths about the world" – and then explain how they are nevertheless true and informative. John Stuart Mill adopted the former, which Ayer criticized, opting himself for the latter. Ayer's argument relies primarily on the analytic/synthetic distinction.
The French philosopher Pierre Duhem espoused a broader conventionalist view encompassing all of science. Duhem was skeptical that human perceptions are sufficient to understand the "true," metaphysical nature of reality and argued that scientific laws should be valued mainly for their predictive power and correspondence with observations.
Karl Popper broadened the meaning of conventionalism still more. In The Logic of Scientific Discovery, he defined a "conventionalist stratagem" as any technique that is used by a theorist to evade the consequences of a falsifying observation or experiment. Popper identified four such stratagems:
introducing an ad hoc hypothesis that makes the refuting evidence seem irrelevant;
modifying the ostensive definitions so as to alter the content of a theory;
doubting the reliability of the experimenter; declaring that the observations that threaten the tested theory are irrelevant;
casting doubt on the acumen of the theorist when he does not produce ideas that can save the theory.
Popper argued that it was crucial to avoid conventionalist stratagems if falsifiability of a theory was to be preserved. It has been argued that the standard model of cosmology is built upon a set of conventionalist stratagems.
In the 1930s, the Polish philosopher Kazimierz Ajdukiewicz proposed a view that he called radical conventionalism – as opposed to the moderate conventionalism developed by Henri Poincaré and Pierre Duhem. Radical conventionalism was originally outlined in The World-Picture and the Conceptual Apparatus, an article published in "Erkenntnis" in 1934. The theory can be characterized by the following theses: (1) there are languages or – as Ajdukiewicz used to say – conceptual apparatuses (schemes) which are not intertranslatable, (2) any knowledge must be articulated in one of those languages, (3) the choice of a language is arbitrary, and it is possible to change from one language to another. Therefore, there is a conventional or decisional element in all knowledge (including perceptual knowledge). In his later writings – under the influence of Alfred Tarski – Ajdukiewicz rejected radical conventionalism in favour of a semantic epistemology.
Legal philosophy
Conventionalism, as applied to legal philosophy is one of the three rival conceptions of law constructed by American legal philosopher Ronald Dworkin in his work Law's Empire. The other two conceptions of law are legal pragmatism and law as integrity.
According to conventionalism as defined by Dworkin, a community's legal institutions should contain clear social conventions upon which rules are promulgated. Such rules serve as the sole source of information for all community members because they clearly demarcate all the circumstances in which state coercion will and will not be exercised.
Dworkin nonetheless argued that this justification fails to fit the facts, as there are many occasions on which clear applicable legal rules are absent. It follows, he maintained, that conventionalism can provide no valid ground for state coercion. Dworkin himself favored law as integrity as the best justification of state coercion.
One famous criticism of Dworkin's idea comes from Stanley Fish, who argues that Dworkin, like the Critical Legal Studies movement, Marxists and adherents of feminist jurisprudence, was guilty of a false 'Theory Hope'. Fish claims that this mistake stems from the mistaken belief that there exists a general or higher 'theory' that explains or constrains all fields of activity, such as state coercion.
Another criticism is based on Dworkin's assertion that positivists' claims amount to conventionalism. H. L. A. Hart, as a soft positivist, denies this claim, having pointed out that citizens cannot always discover the law as a plain matter of fact. It is, however, unclear whether Joseph Raz, an avowed hard positivist, can be classified as a conventionalist, as Raz has claimed that law is composed "exclusively" of social facts, which could be complex and thus difficult to discover.
In particular, Dworkin has characterized law as having the main function of restraining state coercion. Nigel Simmonds has rejected Dworkin's disapproval of conventionalism, claiming that his characterization of law is too narrow.
See also
French historical epistemology
Émile Boutroux
Consensus theory of truth
References
Sources
The Internet Encyclopedia of Philosophy entry on Henri Poincaré
"Pierre Duhem". Notes by David Huron
Mary Jo Nye, "The Boutroux Circle and Poincaré's Conventionalism", Journal of the History of Ideas, Vol. 40, No. 1 (Jan.–Mar. 1979), pp. 107–120.
Materialism
Materialism is a form of philosophical monism which holds that matter is the fundamental substance in nature, and that all things, including mental states and consciousness, are results of material interactions of material things. According to philosophical materialism, mind and consciousness are caused by physical processes, such as the neurochemistry of the human brain and nervous system, without which they cannot exist. Materialism directly contrasts with monistic idealism, according to which consciousness is the fundamental substance of nature.
Materialism is closely related to physicalism—the view that all that exists is ultimately physical. Philosophical physicalism has evolved from materialism with the theories of the physical sciences to incorporate forms of physicality in addition to ordinary matter (e.g. spacetime, physical energies and forces, and exotic matter). Thus, some prefer the term physicalism to materialism, while others use the terms as if they were synonymous.
Discoveries of neural correlates between consciousness and the brain are taken as empirical support for materialism, but some philosophers of mind find that association fallacious or consider it compatible with non-materialist ideas. Philosophies opposed or offering alternatives to materialism or physicalism include idealism, pluralism, dualism, panpsychism, and other forms of monism. Epicureanism is a philosophy of materialism from classical antiquity that was a major forerunner of modern science. Though ostensibly a deist, Epicurus affirmed the literal existence of the Greek gods, locating them in some kind of celestial abode (if not on a literal Mount Olympus) from which they ruled the universe, and his philosophy promulgated atomism, while Platonism taught roughly the opposite, despite Plato's teaching of Zeus as God.
Overview
Materialism belongs to the class of monist ontology, and is thus different from ontological theories based on dualism or pluralism. For singular explanations of the phenomenal reality, materialism is in contrast to idealism, neutral monism, and spiritualism. It can also contrast with phenomenalism, vitalism, and dual-aspect monism. Its materiality can, in some ways, be linked to the concept of determinism, as espoused by Enlightenment thinkers.
Despite the large number of philosophical schools and their nuances, all philosophies are said to fall into one of two primary categories, defined in contrast to each other: idealism and materialism. The basic proposition of these two categories pertains to the nature of reality: the primary difference between them is how they answer two fundamental questions—what reality consists of, and how it originated. To idealists, spirit or mind or the objects of mind (ideas) are primary, and matter secondary. To materialists, matter is primary, and mind or spirit or ideas are secondary—the product of matter acting upon matter.
The materialist view is perhaps best understood in its opposition to the doctrines of immaterial substance applied to the mind historically by René Descartes; by itself, materialism says nothing about how material substance should be characterized. In practice, it is frequently assimilated to one variety of physicalism or another.
Modern philosophical materialists extend the definition of matter to include other scientifically observable entities such as energy, forces, and the spacetime continuum; some philosophers, such as Mary Midgley, suggest that the concept of "matter" is elusive and poorly defined.
During the 19th century, Karl Marx and Friedrich Engels extended the concept of materialism to elaborate a materialist conception of history centered on the roughly empirical world of human activity (practice, including labor) and the institutions created, reproduced or destroyed by that activity. They also developed dialectical materialism, by taking Hegelian dialectics, stripping them of their idealist aspects, and fusing them with materialism (see Modern philosophy).
Non-reductive materialism
Materialism is often associated with reductionism, according to which the objects or phenomena individuated at one level of description, if they are genuine, must be explicable in terms of the objects or phenomena at some other level of description—typically, at a more reduced level.
Non-reductive materialism explicitly rejects this notion, taking the material constitution of all particulars to be consistent with the existence of real objects, properties or phenomena not explicable in the terms canonically used for the basic material constituents. Jerry Fodor held this view, according to which empirical laws and explanations in "special sciences" like psychology or geology are invisible from the perspective of basic physics.
History
Early history
Before Common Era
Materialism developed, possibly independently, in several geographically separated regions of Eurasia during what Karl Jaspers termed the Axial Age (c. 800–200 BC).
In ancient Indian philosophy, materialism developed around 600 BC with the works of Ajita Kesakambali, Payasi, Kanada and the proponents of the Cārvāka school of philosophy. Kanada became one of the early proponents of atomism. The Nyaya–Vaisesika school (c. 600–100 BC) developed one of the earliest forms of atomism (although their proofs of God and their positing that consciousness was not material preclude labelling them as materialists). Buddhist atomism and the Jaina school continued the atomic tradition.
Ancient Greek atomists like Leucippus, Democritus and Epicurus prefigure later materialists. The Latin poem De Rerum Natura by Lucretius (99 – c. 55 BC) reflects the mechanistic philosophy of Democritus and Epicurus. According to this view, all that exists is matter and void, and all phenomena result from different motions and conglomerations of base material particles called atoms (literally "indivisibles"). De Rerum Natura provides mechanistic explanations for phenomena such as erosion, evaporation, wind, and sound. Famous principles like "nothing can touch body but body" first appeared in Lucretius's work. Democritus and Epicurus, however, did not hold a monist ontology, instead espousing the ontological separation of matter and space (i.e. that space is "another kind" of being).
Early Common Era
Wang Chong (27 – c. 100 AD) was a Chinese thinker of the early Common Era said to be a materialist. Later Indian materialist Jayaraashi Bhatta (6th century) in his work Tattvopaplavasimha (The Upsetting of All Principles) refuted the Nyāya Sūtra epistemology. The materialistic Cārvāka philosophy appears to have died out some time after 1400; when Madhavacharya compiled Sarva-darśana-samgraha (A Digest of All Philosophies) in the 14th century, he had no Cārvāka (or Lokāyata) text to quote from or refer to.
In early 12th-century al-Andalus, the Arabian philosopher Ibn Tufail (Abubacer) discussed materialism in his philosophical novel, Hayy ibn Yaqdhan (Philosophus Autodidactus), while vaguely foreshadowing historical materialism.
Modern philosophy
In France, Pierre Gassendi (1592–1665) represented the materialist tradition in opposition to the attempts of René Descartes (1596–1650) to provide the natural sciences with dualist foundations. There followed the materialist and atheist abbé Jean Meslier (1664–1729), along with the French materialists: Julien Offray de La Mettrie (1709–1751), Denis Diderot (1713–1784), Étienne Bonnot de Condillac (1714–1780), Claude Adrien Helvétius (1715–1771), German-French Baron d'Holbach (1723–1789), and other French Enlightenment thinkers.
In England, materialism was developed in the philosophies of Francis Bacon (1561–1626), Thomas Hobbes (1588–1679), and John Locke (1632–1704). Scottish Enlightenment philosopher David Hume (1711–1776) became one of the most important materialist philosophers in the 18th century. John "Walking" Stewart (1747–1822) believed matter has a moral dimension, which had a major impact on the philosophical poetry of William Wordsworth (1770–1850).
In late modern philosophy, German atheist anthropologist Ludwig Feuerbach signaled a new turn in materialism in his 1841 book The Essence of Christianity, which presented a humanist account of religion as the outward projection of man's inward nature. Feuerbach introduced anthropological materialism, a version of materialism that views materialist anthropology as the universal science.
Feuerbach's variety of materialism heavily influenced Karl Marx, who in the late 19th century elaborated the concept of historical materialism—the basis for what Marx and Friedrich Engels outlined as scientific socialism.
Through his Dialectics of Nature (1883), Engels later developed a "materialist dialectic" philosophy of nature, a worldview that Georgi Plekhanov, the father of Russian Marxism, called dialectical materialism. In early 20th-century Russian philosophy, Vladimir Lenin further developed dialectical materialism in his 1909 book Materialism and Empirio-criticism, which connects his opponents' political conceptions to their anti-materialist philosophies.
A more naturalist-oriented materialist school of thought that developed in the mid-19th century was German materialism, which included Ludwig Büchner (1824–1899), the Dutch-born Jacob Moleschott (1822–1893), and Carl Vogt (1817–1895), even though they held different views on core issues such as evolution and the origins of life.
Contemporary history
Analytic philosophy
Contemporary analytic philosophers (e.g. Daniel Dennett, Willard Van Orman Quine, Donald Davidson, and Jerry Fodor) operate within a broadly physicalist or scientific materialist framework, producing rival accounts of how best to accommodate the mind, including functionalism, anomalous monism, and identity theory.
Scientific materialism is often synonymous with, and has typically been described as, a reductive materialism. In the early 21st century, Paul and Patricia Churchland advocated a radically contrasting position (at least in regard to certain hypotheses): eliminative materialism. Eliminative materialism holds that some mental phenomena simply do not exist at all, and that talk of such phenomena reflects a spurious "folk psychology" and introspection illusion. A materialist of this variety might believe that a concept like "belief" has no basis in fact (e.g. the way folk science speaks of demon-caused illnesses).
With reductive materialism at one end of a continuum (our theories will reduce to facts) and eliminative materialism at the other (certain theories will need to be eliminated in light of new facts), revisionary materialism is somewhere in the middle.
Continental philosophy
Contemporary continental philosopher Gilles Deleuze has attempted to rework and strengthen classical materialist ideas. Contemporary theorists such as Manuel DeLanda, working with this reinvigorated materialism, have come to be classified as new materialists. New materialism has become its own subfield, with courses on it at major universities, as well as numerous conferences, edited collections and monographs devoted to it.
Jane Bennett's 2010 book Vibrant Matter has been particularly instrumental in bringing theories of monist ontology and vitalism back into a critical theoretical fold dominated by poststructuralist theories of language and discourse. Scholars such as Mel Y. Chen and Zakiyyah Iman Jackson have critiqued this body of new materialist literature for neglecting to consider the materiality of race and gender in particular.
Métis scholar Zoe Todd, as well as Mohawk (Bear Clan, Six Nations) and Anishinaabe scholar Vanessa Watts, query the colonial orientation of the race for a "new" materialism. Watts in particular describes the tendency to regard matter as a subject of feminist or philosophical care as a tendency too invested in the reanimation of a Eurocentric tradition of inquiry at the expense of an Indigenous ethic of responsibility. Other scholars, such as Helene Vosters, echo their concerns and have questioned whether there is anything particularly "new" about "new materialism", as Indigenous and other animist ontologies have attested to what might be called the "vibrancy of matter" for centuries. Others, such as Thomas Nail, have critiqued "vitalist" versions of new materialism for depoliticizing "flat ontology" and being ahistorical.
Quentin Meillassoux proposed speculative materialism, a post-Kantian return to David Hume also based on materialist ideas.
Defining "matter"
The nature and definition of matter—like other key concepts in science and philosophy—have occasioned much debate:
Is there a single kind of matter (hyle) that everything is made of, or are there multiple kinds?
Is matter a continuous substance capable of expressing multiple forms (hylomorphism) or a number of discrete, unchanging constituents (atomism)?
Does matter have intrinsic properties (substance theory) or lack them (prima materia)?
One challenge to the conventional concept of matter as tangible "stuff" came with the rise of field physics in the 19th century. Relativity shows that matter and energy (including the spatially distributed energy of fields) are interchangeable. This enables the ontological view that energy is prima materia and matter is one of its forms. In contrast, the Standard Model of particle physics uses quantum field theory to describe all interactions. On this view it could be said that fields are prima materia and the energy is a property of the field.
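The mass–energy relation behind this interchangeability claim is the standard one from special relativity (stated here as a familiar illustration, not as anything specific to the authors discussed):

\[
E = mc^{2}
\]

where E is the energy of a body, m its rest mass, and c the speed of light in vacuum. Both readings above, energy as prima materia or fields as prima materia, must accommodate this equivalence.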
According to the dominant cosmological model, the Lambda-CDM model, less than 5% of the universe's energy density is made up of the "matter" the Standard Model describes, and most of the universe is composed of dark matter and dark energy, with little agreement among scientists about what these are made of.
With the advent of quantum physics, some scientists believed the concept of matter had merely changed, while others believed the conventional position could no longer be maintained. Werner Heisenberg said: "The ontology of materialism rested upon the illusion that the kind of existence, the direct 'actuality' of the world around us, can be extrapolated into the atomic range. This extrapolation, however, is impossible...atoms are not things."
The concept of matter has changed in response to new scientific discoveries. Thus materialism has no definite content independent of the particular theory of matter on which it is based. According to Noam Chomsky, any property can be considered material, if one defines matter such that it has that property.
The philosophical materialist Gustavo Bueno uses a more precise term than matter, the stroma.
In Materialism and Empirio-Criticism, Lenin argues that the truth of dialectical materialism is unrelated to any particular understanding of matter. To him, such changes actually confirm the dialectical form of materialism.
Physicalism
George Stack distinguishes between materialism and physicalism. Not all conceptions of physicalism, however, are tied to verificationist theories of meaning or direct realist accounts of perception. Rather, physicalists believe that no "element of reality" is missing from the mathematical formalism of our best description of the world. "Materialist" physicalists also believe that the formalism describes fields of insentience. In other words, the intrinsic nature of the physical is non-experiential.
Religious and spiritual views
Hinduism and transcendentalism
Most Hindus and transcendentalists regard all matter as an illusion, or maya, blinding humans from the truth. Transcendental experiences like the perception of Brahman are considered to destroy the illusion.
Criticism and alternatives
From contemporary physicists
Rudolf Peierls, a physicist who played a major role in the Manhattan Project, rejected materialism: "The premise that you can describe in terms of physics the whole function of a human being... including knowledge and consciousness, is untenable. There is still something missing."
Erwin Schrödinger said, "Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else."
Werner Heisenberg, in the passage quoted above, made the same point: the direct "actuality" of the everyday world cannot be extrapolated into the atomic range, and "atoms are not things".
Quantum mechanics
Some 20th-century physicists (e.g., Eugene Wigner and Henry Stapp), and some modern physicists and science writers (e.g., Stephen Barr, Paul Davies, and John Gribbin) have argued that materialism is flawed due to certain recent findings in physics, such as quantum mechanics and chaos theory. Davies and Gribbin pressed this case in their 1991 book The Matter Myth.
Digital physics
The objections of Davies and Gribbin are shared by proponents of digital physics, who view information rather than matter as fundamental. The physicist and proponent of digital physics John Archibald Wheeler wrote, "all matter and all things physical are information-theoretic in origin and this is a participatory universe." Some founders of quantum theory, such as Max Planck, shared these objections; Planck regarded consciousness as fundamental and matter as derivative from consciousness.
James Jeans concurred with Planck, saying, "The Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter."
Philosophical objections
In the Critique of Pure Reason, Immanuel Kant argued against materialism in defending his transcendental idealism (as well as offering arguments against subjective idealism and mind–body dualism). With his refutation of idealism, however, Kant also argued that change and time require an enduring substrate.
Postmodern/poststructuralist thinkers also express skepticism about any all-encompassing metaphysical scheme. Philosopher Mary Midgley argues that materialism is a self-refuting idea, at least in its eliminative materialist form.
During the 20th century, several other philosophers also offered specific criticisms of the fundamental concepts underlying scientific materialism. Among them was the Australian scholar Colin Murray Turbayne, who in The Myth of Metaphor analyzes the limitations of several metaphors routinely treated as literal constructs in the "mechanistic" explanations of the universe first outlined by Isaac Newton and in Descartes's mind–body dualism, such as "substance" and "substratum", which according to Turbayne have little if any meaning. He further argues that such physicalist theories of the universe generally rely upon mechanistic metaphors drawn through the use of deductive logic in the synthesis of their respective hypotheses. Turbayne observes that modern man has become victimized by the metaphors underlying these hypotheses, which have been unintentionally interpreted as examples of literal truth despite their limitations.
Varieties of idealism
Arguments for idealism, such as those of Hegel and Berkeley, often take the form of an argument against materialism; indeed, Berkeley's idealism was called immaterialism. Now, matter can be argued to be redundant, as in bundle theory, and mind-independent properties can, in turn, be reduced to subjective percepts. Berkeley gives an example of the latter by pointing out that it is impossible to gather direct evidence of matter, as there is no direct experience of matter; all that is experienced is perception, whether internal or external. As such, matter's existence can only be inferred from the apparent (perceived) stability of perceptions; it finds absolutely no evidence in direct experience.
If matter and energy are seen as necessary to explain the physical world, but incapable of explaining mind, dualism results. Emergence, holism and process philosophy seek to ameliorate the perceived shortcomings of traditional (especially mechanistic) materialism without abandoning materialism entirely.
Materialism as methodology
Some critics object to materialism as part of an overly skeptical, narrow or reductivist approach to theorizing, rather than to the ontological claim that matter is the only substance. Particle physicist and Anglican theologian John Polkinghorne objects to what he calls promissory materialism—claims that materialistic science will eventually succeed in explaining phenomena it has not so far been able to explain. Polkinghorne prefers "dual-aspect monism" to materialism.
Some scientific materialists have been criticized for failing to provide clear definitions of matter, leaving the term materialism without any definite meaning. Noam Chomsky states that since the concept of matter may be affected by new scientific discoveries, as has happened in the past, scientific materialists are being dogmatic in assuming the opposite.
See also
Aleatory materialism
Anti-materialist beliefs:
Gnosticism
Idealism
Immaterialism
Maya (religion)
Mind–body dualism
Platonic realism
Supernaturalism
Transcendentalism
Cārvāka
Christian materialism
Critical realism
Cultural materialism
Dialectical materialism
Economic materialism
Existence
French materialism
Grotesque body
Historical materialism
Hyle
Incorporeality
Madhyamaka, a philosophy of Middle Way
Marxist philosophy of nature
Materialist feminism
Metaphysical naturalism
Model-dependent realism
Naturalism (philosophy)
Philosophical materialism
Philosophy of mind
Physicalism
Postmaterialism
Quantum energy
Rational egoism
Reality in Buddhism
Scientistic materialism
Substance theory
Transcendence (religion)
Notes
a. Indeed, it has been noted it is difficult if not impossible to define one category without contrasting it with the other.
References
Further reading
Büchner, L. (1920). Force and Matter. New York, Peter Eckler Publishing Co.
Churchland, Paul (1981). Eliminative Materialism and the Propositional Attitudes. The Philosophy of Science. Boyd, Richard; P. Gasper; J. D. Trout. Cambridge, Massachusetts, MIT Press.
Fodor, J.A. (1974). "Special Sciences", Synthese, Vol. 28.
Gunasekara, Victor A. (2001). "Buddhism and the Modern World". Basic Buddhism: A Modern Introduction to the Buddha's Teaching. Retrieved 18 January 2008.
Kim, J. (1994) Multiple Realization and the Metaphysics of Reduction, Philosophy and Phenomenological Research, Vol. 52.
La Mettrie, Julien Offray de (1748). L'Homme Machine (Man a Machine).
Lange, Friedrich A. (1925) The History of Materialism. New York, Harcourt, Brace, & Co.
Alternative
Schopenhauer, Arthur (1969). The World as Will and Representation. New York, Dover Publications, Inc.
Seidner, Stanley S. (10 June 2009). "A Trojan Horse: Logotherapeutic Transcendence and its Secular Implications for Theology". Mater Dei Institute
Vitzthum, Richard C. (1995) Materialism: An Affirmative History and Definition. Amherst, New York, Prometheus Books.
External links
Stanford Encyclopedia of Philosophy:
Physicalism
Eliminative Materialism
Philosophical Materialism (by Richard C. Vitzthum) from infidels.org
Dictionary of the Philosophy of Mind on Materialism from the University of Waterloo
A new theory of ideomaterialism being a synthesis of idealism and materialism
Solipsism
Solipsism is the philosophical idea that only one's mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.
Varieties
There are varying degrees of solipsism that parallel the varying degrees of skepticism:
Metaphysical
Metaphysical solipsism is a variety of solipsism based on a philosophy of subjective idealism. Metaphysical solipsists maintain that the self is the only existing reality and that all other realities, including the external world and other persons, are representations of that self, having no independent existence. There are several versions of metaphysical solipsism, such as Caspar Hare's egocentric presentism (or perspectival realism), in which other people are conscious, but their experiences are simply not present.
Epistemological
Epistemological solipsism is the variety of idealism according to which only the directly accessible mental contents of the solipsistic philosopher can be known. The existence of an external world is regarded as an unresolvable question rather than actually false. Further, one also cannot be certain to what extent the external world exists independently of one's mind. For instance, it may be that a God-like being controls the sensations received by the mind, making it appear as if there is an external world when most of it (excluding the God-like being and oneself) is false. However, the point remains that epistemological solipsists consider this an "unresolvable" question.
Methodological
Methodological solipsism is an agnostic variant of solipsism. It exists in opposition to the strict epistemological requirements for "knowledge" (e.g. the requirement that knowledge must be certain). It still entertains the point that any induction is fallible. Methodological solipsism sometimes goes even further, holding that even what we perceive as the brain is actually part of the external world, for it is only through our senses that we can see or feel the mind. Only the existence of thoughts is known for certain.
Methodological solipsists do not intend to conclude that the stronger forms of solipsism are actually true. They simply emphasize that justifications of an external world must be founded on indisputable facts about their own consciousness. The methodological solipsist believes that subjective impressions (empiricism) or innate knowledge (rationalism) are the sole possible or proper starting point for philosophical construction. Often methodological solipsism is not held as a belief system, but rather used as a thought experiment to assist skepticism (e.g. René Descartes' Cartesian skepticism).
Main points
Mere denial of material existence, in itself, does not necessarily constitute solipsism.
Philosophers generally try to build knowledge on more than an inference or analogy. Well-known frameworks such as Descartes' epistemological enterprise popularized the idea that all certain knowledge may go no further than "I think; therefore I exist." However, Descartes' view does not provide any details about the nature of the "I" that has been proven to exist.
The theory of solipsism also merits close examination because it relates to three widely held philosophical presuppositions, each itself fundamental and wide-ranging in importance:
One's most certain knowledge is the content of one's own mind—my thoughts, experiences, affects, etc.
There is no conceptual or logically necessary link between mental and physical—between, for example, the occurrence of certain conscious experience or mental states and the "possession" and behavioral dispositions of a "body" of a particular kind.
The experience of a given person is necessarily private to that person.
To expand on the second point, the conceptual problem is that the previous point assumes mind or consciousness (which are attributes) can exist independent of some entity having this attribute (a capability in this case), i.e., that an attribute of an existent can exist apart from the existent itself. If one admits to the existence of an independent entity (e.g., the brain) having that attribute, the door is open to an independent reality. (See Brain in a vat)
Some philosophers hold that, while it cannot be proven that anything independent of one's mind exists, the point that solipsism makes is irrelevant. This is because, whether the world as we perceive it exists independently or not, we cannot escape this perception, hence it is best to act assuming that the world is independent of our minds. (See Falsifiability and testability below)
History
Origins of solipsist thought are found in ancient Greece and in later early modern thinkers such as Thomas Hobbes and Descartes.
Gorgias
Solipsism was first recorded by the Greek presocratic sophist Gorgias (c. 483–375 BC), who is quoted by the Roman sceptic Sextus Empiricus as having stated:
Nothing exists.
Even if something exists, nothing can be known about it.
Even if something could be known about it, knowledge about it cannot be communicated to others.
Much of the point of the sophists was to show that objective knowledge was a literal impossibility.
René Descartes
The foundations of solipsism are in turn the foundations of the view that the individual's understanding of any and all psychological concepts (thinking, willing, perceiving, etc.) is accomplished by making an analogy with their own mental states; i.e., by abstraction from inner experience. And this view, or some variant of it, has been influential in philosophy since René Descartes elevated the search for incontrovertible certainty to the status of the primary goal of epistemology, whilst also elevating epistemology to "first philosophy".
Berkeley
George Berkeley's arguments against materialism in favour of idealism provide the solipsist with a number of arguments not found in Descartes. While Descartes defends ontological dualism, thus accepting the existence of a material world (res extensa) as well as immaterial minds (res cogitans) and God, Berkeley denies the existence of matter but not minds, of which God is one.
Relation to other ideas
Idealism and materialism
One of the most fundamental debates in philosophy concerns the "true" nature of the world—whether it is some ethereal plane of ideas or a reality of atomic particles and energy. Materialism posits a real "world out there", as well as in and through us, that can be sensed—seen, heard, tasted, touched and felt, sometimes with prosthetic technologies corresponding to human sensing organs. (Materialists do not claim that human senses or even their prosthetics can, even when collected, sense the totality of the universe; simply that they collectively cannot sense what cannot in any way be known to us.) Materialists do not find this a useful way of thinking about the ontology and ontogeny of ideas, but we might say that from a materialist perspective pushed to a logical extreme communicable to an idealist, ideas are ultimately reducible to a physically communicated, organically, socially and environmentally embedded 'brain state'. While reflexive existence is not considered by materialists to be experienced on the atomic level, the individual's physical and mental experiences are ultimately reducible to the unique tripartite combination of environmentally determined, genetically determined, and randomly determined interactions of firing neurons and atomic collisions.
For materialists, ideas have no primary reality as essences separate from our physical existence. From a materialist perspective, ideas are social (rather than purely biological), and formed and transmitted and modified through the interactions between social organisms and their social and physical environments. This materialist perspective informs scientific methodology, insofar as that methodology assumes that humans have no access to omniscience and that therefore human knowledge is an ongoing, collective enterprise that is best produced via scientific and logical conventions adjusted specifically for material human capacities and limitations.
Modern idealists believe that the mind and its thoughts are the only true things that exist. This is the reverse of what is sometimes called "classical idealism" or, somewhat confusingly, "Platonic idealism" due to the influence of Plato's theory of forms (εἶδος eidos or ἰδέα idea) which were not products of our thinking. The material world is ephemeral, but a perfect triangle or "beauty" is eternal. Religious thinking tends to be some form of idealism, as God usually becomes the highest ideal (such as neoplatonism). On this scale, solipsism can be classed as idealism. Thoughts and concepts are all that exist, and furthermore, only the solipsist's own thoughts and consciousness exist. The so-called "reality" is nothing more than an idea that the solipsist has (perhaps unconsciously) created.
Cartesian dualism
There is another option: the belief that both ideals and "reality" exist. Dualists commonly argue that the distinction between the mind (or 'ideas') and matter can be proven by employing Leibniz's principle of the identity of indiscernibles, which states that if two things share exactly the same qualities, then they must be identical, as in indistinguishable from each other and therefore one and the same thing. Dualists then attempt to identify attributes of mind that are lacked by matter (such as privacy or intentionality) or vice versa (such as having a certain temperature or electrical charge). One notable application of the identity of indiscernibles was by René Descartes in his Meditations on First Philosophy. Descartes concluded that he could not doubt the existence of himself (the famous cogito ergo sum argument), but that he could doubt the (separate) existence of his body. From this, he inferred that the person Descartes must not be identical to the Descartes body since one possessed a characteristic that the other did not: namely, it could be known to exist. Solipsism agrees with Descartes in this aspect, and goes further: only things that can be known to exist for sure should be considered to exist. The Descartes body could only exist as an idea in the mind of the person Descartes. Descartes and dualism aim to prove the actual existence of reality as opposed to a phantom existence (as well as the existence of God in Descartes' case), using the realm of ideas merely as a starting point, but solipsism usually finds those further arguments unconvincing. The solipsist instead proposes that their own unconscious is the author of all seemingly "external" events from "reality".
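In formal terms – a minimal sketch offered as an editorial illustration, not as Descartes' or Leibniz's own notation – the principle, and the contrapositive direction the dualist argument actually uses, can be written in second-order logic as:

\[
\forall x\,\forall y\,\bigl[\forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y\bigr]
\]
\[
\exists F\,(Fx \wedge \neg Fy) \rightarrow x \neq y
\]

Reading F as "can be known with certainty to exist", the mind satisfies F while, under Cartesian doubt, the body does not; the second formula then yields the non-identity of mind and body, which is exactly the inference described above.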
Philosophy of Schopenhauer
The World as Will and Representation is the central work of Arthur Schopenhauer. Schopenhauer saw the human will as our one window to the world behind the representation, the Kantian thing-in-itself. He believed, therefore, that we could gain knowledge about the thing-in-itself, something Kant said was impossible, since the rest of the relationship between representation and thing-in-itself could be understood by analogy as the relationship between human will and human body.
Idealism
The idealist philosopher George Berkeley argued that physical objects do not exist independently of the mind that perceives them. An item truly exists only as long as it is observed; otherwise, it is not only meaningless but simply nonexistent. Berkeley does attempt to show things can and do exist apart from the human mind and our perception, but only because there is an all-encompassing Mind in which all "ideas" are perceived – in other words, God, who observes all. Solipsism agrees that nothing exists outside of perception, but would argue that Berkeley falls prey to the egocentric predicament – he can only make his own observations, and thus cannot be truly sure that this God or other people exist to observe "reality". The solipsist would say it is better to disregard the unreliable observations of alleged other people and rely upon the immediate certainty of one's own perceptions.
Rationalism
Rationalism is the philosophical position that truth is best discovered by the use of reasoning and logic rather than by the use of the senses (see Plato's theory of forms). Solipsism is also skeptical of sense-data.
Philosophical zombie
The theory of solipsism crosses over with the theory of the philosophical zombie in that other seemingly conscious beings may actually lack true consciousness; instead, they merely display traits of consciousness to the observer, who may be the only conscious being there is.
Falsifiability and testability
Solipsism is not a falsifiable hypothesis as described by Karl Popper: there does not seem to be an imaginable disproof, and according to Popper a hypothesis that cannot be falsified is not scientific. A solipsist can, nevertheless, observe "the success of the sciences" (see also no miracles argument). One critical test is to consider the induction from experience that the externally observable world does not seem, at first approach, to be directly manipulable purely by mental energies alone. One can indirectly manipulate the world through the medium of the physical body, but it seems impossible to do so through pure thought (psychokinesis). It might be argued that if the external world were merely a construct of a single consciousness, i.e. the self, it could then follow that the external world should be somehow directly manipulable by that consciousness, and if it is not, then solipsism is false. One objection is that this argument is circular and incoherent: it assumes at the outset that the "construct of a single consciousness" means something false, and then tries to manipulate the external world it has just assumed to be false. This is an impossible task, but failing at it does not disprove solipsism; it is simply poor reasoning when considering pure idealized logic. This is why David Deutsch states that when other scientific methods are used alongside logic, solipsism is "indefensible", even when using the simplest explanations:
"If, according to the simplest explanation, an entity is complex and autonomous, then that entity is real."
The method of the typical scientist is naturalist: they first assume that the external world exists and can be known. But the scientific method, in the sense of a predict-observe-modify loop, does not require the assumption of an external world. A solipsist may perform a psychological test on themselves to discern the nature of the reality in their mind – however, Deutsch uses this fact to counter-argue: the "outer parts" of the solipsist behave independently, and so are independent of the "narrowly" defined (conscious) self. A solipsist's investigations may not be proper science, however, since they would not include the co-operative and communitarian aspects of scientific inquiry that normally serve to diminish bias.
Minimalism
Solipsism is a form of logical minimalism. Many people are intuitively unconvinced of the nonexistence of the external world from the basic arguments of solipsism, but a solid proof of its existence is not available at present. The central assertion of solipsism rests on the nonexistence of such a proof, and strong solipsism (as opposed to weak solipsism) asserts that no such proof can be made. In this sense, solipsism is logically related to agnosticism in religion: the distinction between believing you do not know, and believing you could not have known.
However, minimality (or parsimony) is not the only logical virtue. A common misapprehension of Occam's razor has it that the simpler theory is always the best. In fact, the principle is that the simpler of two theories of equal explanatory power is to be preferred. In other words: additional "entities" can pay their way with enhanced explanatory power. So the naturalist can claim that, while their world view is more complex, it is more satisfying as an explanation.
In infants
Some developmental psychologists believe that infants are solipsistic, and that eventually children infer that others have experiences much like theirs and reject solipsism.
Hinduism
The earliest reference to solipsism is found in the ideas of Hindu philosophy in the Brihadaranyaka Upanishad, dated to the early 1st millennium BC. The Upanishad holds the mind to be the only god, and all actions in the universe are thought to be a result of the mind assuming infinite forms. After the development of distinct schools of Indian philosophy, the Advaita Vedanta and Samkhya schools are thought to have originated concepts similar to solipsism.
Advaita Vedanta
Advaita is one of the six best-known Hindu philosophical systems and literally means "non-duality". Its first great consolidator was Adi Shankaracharya, who continued the work of some of the Upanishadic teachers and that of his teacher's teacher Gaudapada. Using various arguments, such as the analysis of the three states of experience (wakefulness, dream, and deep sleep), he established the singular reality of Brahman, in which Brahman, the universe and the Atman or the Self were one and the same.
The concept of the Self in the philosophy of Advaita could be interpreted as solipsism. However, the theological definition of the Self in Advaita protects it from true solipsism as found in the West. Similarly, the Vedantic text Yogavasistha escapes the charge of solipsism because the real "I" is thought to be nothing but the absolute whole looked at through a particular unique point of interest.
The Yoga Vasistha addresses this view directly: "…according to them [present-day solipsists] this world is mental in nature. There is no reality other than the ideas of one's own mind. This view is incorrect, because the world cannot be the content of an individual's mind. If it were so, an individual would have created and destroyed the world according to his whims. This theory is called atma khyati – the pervasion of the little self (intellect)" (Yoga Vasistha, Nirvana Prakarana, Uttarardha, Volume 6, p. 107, trans. Swami Jyotirmayananda).
Samkhya and Yoga
Samkhya philosophy, which is sometimes seen as the basis of Yogic thought, adopts a view that matter exists independently of individual minds. Representation of an object in an individual mind is held to be a mental approximation of the object in the external world. Therefore, Samkhya chooses representational realism over epistemological solipsism. Having established this distinction between the external world and the mind, Samkhya posits the existence of two metaphysical realities Prakriti (matter) and Purusha (consciousness).
Buddhism
Some interpretations of Buddhism assert that external reality is an illusion, and sometimes this position is [mis]understood as metaphysical solipsism. Buddhist philosophy, though, generally holds that the mind and external phenomena are both equally transient, and that they arise from each other. The mind cannot exist without external phenomena, nor can external phenomena exist without the mind. This relation is known as "dependent arising" (pratityasamutpada).
The Buddha stated, "Within this fathom long body is the world, the origin of the world, the cessation of the world and the path leading to the cessation of the world". Whilst not rejecting the occurrence of external phenomena, the Buddha focused on the illusion created within the mind of the perceiver by the process of ascribing permanence to impermanent phenomena, satisfaction to unsatisfying experiences, and a sense of reality to things that were effectively insubstantial.
Mahayana Buddhism also challenges the illusion of the idea that one can experience an 'objective' reality independent of individual perceiving minds.
From the standpoint of Prasangika (a branch of Madhyamaka thought), external objects do exist, but are devoid of any type of inherent identity: "Just as objects of mind do not exist [inherently], mind also does not exist [inherently]". In other words, even though a chair may physically exist, individuals can only experience it through the medium of their own mind, each with their own literal point of view. Therefore, an independent, purely 'objective' reality could never be experienced.
The Yogacara (sometimes translated as "Mind only") school of Buddhist philosophy contends that all human experience is constructed by mind. Some later representatives of one Yogacara subschool (Prajñakaragupta, Ratnakīrti) propounded a form of idealism that has been interpreted as solipsism. A view of this sort is contained in the 11th-century treatise of Ratnakirti, "Refutation of the existence of other minds" (Santanantara dusana), which provides a philosophical refutation of external mind-streams from the Buddhist standpoint of ultimate truth (as distinct from the perspective of everyday reality).
In addition to this, the Bardo Thodol, Tibet's famous book of the dead, repeatedly states that all of reality is a figment of one's perception, although this occurs within the "Bardo" realm (post-mortem). For instance, within the sixth part of the section titled "The Root Verses of the Six Bardos", there appears the following line: "May I recognize whatever appeareth as being mine own thought-forms"; there are many lines in a similar vein.
Criticism
Solipsism as radical subjective idealism has often been criticized by well-known philosophers ("solipsism can only succeed in a madhouse" – Arthur Schopenhauer; "solipsism is madness" – Martin Gardner).
Bertrand Russell wrote that it was "psychologically impossible" to believe, "I once received a letter from an eminent logician, Mrs. Christine Ladd-Franklin, saying that she was a solipsist, and was surprised that there were no others. Coming from a logician and a solipsist, her surprise surprised me". He also argues that the logic of solipsism compels you to believe in 'solipsism of the moment' where only the presently existing moment can be said to exist.
John Stuart Mill wrote that one can know of others' minds because "First, they have bodies like me, which I know in my own case, to be the antecedent condition of feelings; and because, secondly, they exhibit the acts, and outward signs, which in my own case I know by experience to be caused by feelings".
See also
Heinlein's World as Myth
Alfred Binet – The mind and the brain
Anathema
Antiscience
Aseity
Boltzmann brain
Cartesian doubt
Centered world
Cognitive closure (philosophy)
Consensus reality
Dream argument
Ethical solipsism
Existential nihilism
Externism
Immaterialism
LaVeyan Satanism
Metaphysical nihilism
Mind over matter
Model-dependent realism
Object permanence
Objective idealism
Open individualism
Panpsychism
Personal horizon
Phaneron
Phenomenalism
Philosophical realism
Primary/secondary quality distinction – John Locke's response to solipsism
Problem of other minds
Protagoras of Abdera
Qualia
Solipsism syndrome
Standpoint theory
Stream of consciousness
Subjectivity
The Egg
The Truman Show delusion
Vertiginous question
References
Further reading
This book presents an intriguing and scientifically based updating of solipsism involving the latest findings in quantum physics, neurology and consciousness studies.
Moral relativism
Moral relativism or ethical relativism (often reformulated as relativist ethics or relativist morality) is used to describe several philosophical positions concerned with the differences in moral judgments across different peoples and cultures. An advocate of such ideas is often referred to as a relativist.
Descriptive moral relativism holds that people do, in fact, disagree fundamentally about what is moral, without passing any evaluative or normative judgments about this disagreement. Meta-ethical moral relativism holds that moral claims contain an implicit indexical, such that some aspect of the claim—say, its truth-value—varies with individuals or groups. Normative moral relativism holds that everyone ought to tolerate the behavior of others even when large disagreements about morality exist. Though often intertwined, these are distinct positions. Each can be held independently of the others.
American philosopher Richard Rorty in particular has argued that the label of being a "relativist" has become warped and turned into a sort of pejorative. He has written specifically that thinkers labeled as such usually simply believe "that the grounds for choosing between such [philosophical] opinions is less algorithmic than had been thought", not that every single conceptual idea is as valid as any other. In this spirit, Rorty has lamented that "philosophers have... become increasingly isolated from the rest of culture."
Moral relativism has been debated for thousands of years across a variety of contexts during the history of civilization. Arguments of particular notability have been made in areas such as ancient Greece and historical India while discussions have continued to the present day. Besides the material created by philosophers, the concept has additionally attracted attention in diverse fields including art, religion, and science.
Variations
Descriptive
Descriptive moral relativism is merely the positive or descriptive position that there exist, in fact, fundamental disagreements about the right course of action even when the same facts hold true and the same consequences seem likely to arise. It is the observation that different cultures have different moral standards.
Descriptive relativists do not necessarily advocate the tolerance of all behavior in light of such disagreement; that is to say, they are not necessarily normative relativists. Likewise, they do not necessarily make any commitments to the semantics, ontology, or epistemology of moral judgement; that is, not all descriptive relativists are meta-ethical relativists.
Descriptive relativism is a widespread position in academic fields such as anthropology and sociology, which simply admit that it is incorrect to assume that the same moral or ethical frameworks are always in play in all historical and cultural circumstances.
Meta-ethical
Meta-ethical moral relativists believe not only that people disagree about moral issues, but that terms such as "good", "bad", "right" and "wrong" do not stand subject to universal truth conditions at all; rather, they are relative to the traditions, convictions, or practices of an individual or a group of people. The American anthropologist William Graham Sumner was an influential advocate of this view. He argues in his 1906 work Folkways that what people consider right and wrong is shaped entirely—not primarily—by the traditions, customs, and practices of their culture. Moreover, since in his analysis of human understanding there cannot be any higher moral standard than that provided by the local morals of a culture, no trans-cultural judgement about the rightness or wrongness of a culture's morals could possibly be justified.
Meta-ethical relativists are, first, descriptive relativists: they believe that, given the same set of facts, some societies or individuals will have a fundamental disagreement about what a person ought to do or prefer (based on societal or individual norms). What's more, they argue that one cannot adjudicate these disagreements using any available independent standard of evaluation—any appeal to a relevant standard would always be merely personal or at best societal.
This view contrasts with moral universalism, which argues that, even though well-intentioned persons disagree, and some may even remain unpersuadable (e.g. someone who is closed-minded), there is still a meaningful sense in which an action could be more "moral" (morally preferable) than another; that is, they believe there are objective standards of evaluation that seem worth calling "moral facts"—regardless of whether they are universally accepted.
Normative
Normative moral relativists believe not only the meta-ethical thesis, but that it has normative implications on what we ought to do. Normative moral relativists argue that meta-ethical relativism implies that we ought to tolerate the behavior of others even when it runs counter to our personal or cultural moral standards. Most philosophers do not agree, partially because of the challenges of arriving at an "ought" from relativistic premises. Meta-ethical relativism seems to eliminate the normative relativist's ability to make prescriptive claims. In other words, normative relativism may find it difficult to make a statement like "we think it is moral to tolerate behaviour" without always adding "other people think intolerance of certain behaviours is moral". Philosophers like Russell Blackford even argue that intolerance is, to some degree, important. As he puts it, "we need not adopt a quietism about moral traditions that cause hardship and suffering. Nor need we passively accept the moral norms of our own respective societies, to the extent that they are ineffective or counterproductive or simply unnecessary". That is, it is perfectly reasonable (and practical) for a person or group to defend their subjective values against others, even if there is no universal prescription or morality. We can also criticize other cultures for failing to pursue even their own goals effectively.
Moral relativists may also still try to make sense of non-universal statements like "in this country, it is wrong to do X" or even "to me, it is right to do Y".
Moral universalists argue further that their system often does justify tolerance, and that disagreement with moral systems does not always demand interference, and certainly not aggressive interference. For example, the utilitarian might call another society's practice 'ignorant' or 'less moral', but there would still be much debate about courses of action (e.g. whether to focus on providing better education, or technology, etc.).
History
Moral relativism encompasses views and arguments that people in various cultures have held over several thousand years. For example, the ancient Jaina Anekantavada principle of Mahavira (c. 599–527 BC) states that truth and reality are perceived differently from diverse points of view, and that no single point of view is the complete truth; and the Greek philosopher Protagoras (c. 481–420 BC) famously asserted that "man is the measure of all things". The Greek historian Herodotus (c. 484–420 BC) observed that each society regards its own belief system and way of doing things as better than all others. Sextus Empiricus and other ancient Pyrrhonist philosophers denied the existence of objective morality.
In the early modern era Baruch Spinoza (1632–1677) notably held that nothing is inherently good or evil. The 18th-century Enlightenment philosopher David Hume (1711–1776) serves in several important respects as the father both of modern emotivism and of moral relativism, though Hume himself did not espouse relativism. He distinguished between matters of fact and matters of value, and suggested that moral judgments consist of the latter, for they do not deal with verifiable facts obtained in the world, but only with our sentiments and passions. But Hume regarded some of our sentiments as universal. He famously denied that morality has any objective standard, and suggested that the universe remains indifferent to our preferences and our troubles.
Friedrich Nietzsche (1844–1900) believed that we have to assess the value of our values, since values are relative to one's goals and one's self. He emphasized the need to analyze our moral values and how much impact they may have on us. The problem with morality, according to Nietzsche, is that those who were considered "good" were the powerful nobles who had more education and considered themselves better than anyone below their rank. Thus, what is considered good is relative. A "good man" is not questioned on whether or not something "bad", such as temptation, lingers inside him, and he is considered more important than a man deemed "bad", who is considered useless for making the human race better because of the morals we have subjected ourselves to. But since what is considered good and bad is relative, the importance and value we place on them should also be relative. He proposed that morality itself could be a danger. Nietzsche believed that morals should be constructed actively, making them relative to who we are and what we, as individuals, consider to be true, equal, good and bad, etc., instead of being a reaction to moral laws made by a certain group of individuals in power.
One scholar, supporting an anti-realist interpretation, concludes that "Nietzsche's central argument for anti-realism about value is explanatory: moral facts don't figure in the 'best explanation' of experience, and so are not real constituents of the objective world. Moral values, in short, can be 'explained away'."
Nietzsche certainly criticizes Plato's prioritization of transcendence in the form of the Forms. The Platonist view holds that what is 'true', or most real, is something other-worldly, while the (real) world of experience is like a mere 'shadow' of the Forms, most famously expressed in Plato's allegory of the cave. Nietzsche believes that this transcendence also had a parallel growth in Christianity, which prioritized life-denying moral qualities such as humility and obedience through the church. (See Beyond Good and Evil, On the Genealogy of Morals, The Twilight of the Idols, The Antichrist, etc.)
Anthropologists such as Ruth Benedict (1887–1948) have cautioned observers against ethnocentrism—using the standards of their own culture to evaluate their subjects of study. Benedict said that transcendent morals do not exist—only socially constructed customs do (see cultural relativism); and that in comparing customs, the anthropologist "insofar as he remains an anthropologist ... is bound to avoid any weighting of one in favor of the other". To some extent, the increasing body of knowledge of great differences in belief among societies caused both social scientists and philosophers to question whether any objective, absolute standards pertaining to values could exist. This led some to posit that differing systems have equal validity, with no standard for adjudicating among conflicting beliefs. The Finnish philosopher-anthropologist Edward Westermarck (1862–1939) ranks as one of the first to formulate a detailed theory of moral relativism. He portrayed all moral ideas as subjective judgments that reflect one's upbringing. He rejected G.E. Moore's (1873–1958) ethical intuitionism—in vogue during the early part of the 20th century, and which identified moral propositions as true or false, and known to us through a special faculty of intuition—because of the obvious differences in beliefs among societies, which he said provided evidence of the lack of any innate, intuitive power.
Arguments for meta-ethical relativism
Scientific
Morality and evolution
Research within evolutionary biology, cognitive psychology, ethology, and evolutionary anthropology has claimed that morality is a natural phenomenon that was shaped by evolutionary mechanisms. In this case, morality is defined as the set of relative social practices that promote the survival and successful reproduction of the species, or even multiple cooperating species.
Literary
Literary perspectivism begins with the different versions of the Greek myths. Symbolism created multiple suggested meanings for a single verse, and structuralism teaches us the polysemy of poems.
Examples of relativistic literary works: Gogol's Dead Souls; The Alexandria Quartet by Lawrence Durrell; Raymond Queneau's Zazie dans le métro.
Criticisms of meta-ethical relativism
Philosophical
R. M. Hare
Some philosophers, for example R. M. Hare (1919–2002), argue that moral propositions remain subject to human logical rules, notwithstanding the absence of any factual content, including those subject to cultural or religious standards or norms. Thus, for example, they contend that one cannot hold contradictory ethical judgments. This allows for moral discourse with shared standards, notwithstanding the descriptive properties or truth conditions of moral terms. They do not affirm or deny that moral facts exist, only that human logic applies to our moral assertions; consequently, they postulate an objective and preferred standard of moral justification, albeit in a very limited sense. Nevertheless, according to Hare, human logic shows the error of relativism in one very important sense (see Hare's Sorting out Ethics). Hare and other philosophers also point out that, aside from logical constraints, all systems treat certain moral terms alike in an evaluative sense. This parallels our treatment of other terms such as less or more, which meet with universal understanding and do not depend upon independent standards (for example, one can convert measurements). It applies to good and bad when used in their non-moral sense, too; for example, when we say, "this is a good wrench" or "this is a bad wheel". This evaluative property of certain terms also allows people of different beliefs to have meaningful discussions on moral questions, even though they may disagree about certain "facts".
Walter Terence Stace
"Ethical Relativity" is the topic of the first two chapters in The Concept of Morals, in which Walter Terence Stace argues against moral absolutism, but for moral universalism.
Philosophical poverty
Critics propose that moral relativism fails because it rejects basic premises of discussions on morality, or because it cannot arbitrate disagreement. Many critics, including Ibn Warraq and Eddie Tabash, have suggested that meta-ethical relativists essentially take themselves out of any discussion of normative morality, since they seem to be rejecting an assumption of such discussions: the premise that there are right and wrong answers that can be discovered through reason. Practically speaking, such critics will argue that meta-ethical relativism may amount to moral nihilism, or else incoherence.
These critics argue specifically that the moral relativists reduce the extent of their input in normative moral discussions to either rejecting the very having of the discussion, or else deeming both disagreeing parties to be correct. For instance, the moral relativist can only appeal to preference to object to the practice of murder or torture by individuals for hedonistic pleasure. This accusation that relativists reject widely held terms of discourse is similar to arguments used against other "discussion-stoppers" like some forms of solipsism or the rejection of induction.
Philosopher Simon Blackburn made a similar criticism, and explains that moral relativism fails as a moral system simply because it cannot arbitrate disagreements.
Other criticism
Some criticisms arise when people question which moral justifications or truths are said to be relative. Because people belong to many groups based on culture, race, religion, and so on, it is difficult to claim that the values of any one group have authority over its members. Part of meta-ethical relativism is identifying which group of people those truths are relative to. A further complication is that many people belong to more than one group. The beliefs of the groups that a person belongs to may be fundamentally different, so it is hard to decide which are relative and which win out. A person practicing meta-ethical relativism would not necessarily object to either view, but would instead develop an opinion and argument of their own.
Religious
Roman Catholicism
Catholic and some secular intellectuals attribute the perceived post-war decadence of Europe to the displacement of absolute values by moral relativism. Pope Benedict XVI, Marcello Pera and others have argued that after about 1960, Europeans massively abandoned many traditional norms rooted in Christianity and replaced them with continuously evolving relative moral rules. In this view, sexual activity has become separated from procreation, which led to a decline in the importance of families and to depopulation. The most authoritative response to moral relativism from the Catholic perspective can be found in Veritatis Splendor, an encyclical by Pope John Paul II. Many of the main criticisms of moral relativism by the Catholic Church relate largely to modern controversies, such as elective abortion.
Buddhism
Bhikkhu Bodhi, an American Buddhist monk, has written:
By assigning value and spiritual ideals to private subjectivity, the materialistic world view ... threatens to undermine any secure objective foundation for morality. The result is the widespread moral degeneration that we witness today. To counter this tendency, mere moral exhortation is insufficient. If morality is to function as an efficient guide to conduct, it cannot be propounded as a self-justifying scheme but must be embedded in a more comprehensive spiritual system which grounds morality in a transpersonal order. Religion must affirm, in the clearest terms, that morality and ethical values are not mere decorative frills of personal opinion, not subjective superstructure, but intrinsic laws of the cosmos built into the heart of reality.
Views commonly confused with moral relativism
Moral relativism vs ethical subjectivism
Moral relativism is a distinct position from ethical subjectivism (the view that the truth of ethical claims is not mind-independent). While these views are often held together, they do not entail each other. For example, someone who claims "something is morally right for me to do because the people in my culture think it is right" is both a moral relativist (because what is right and wrong depends on who is doing it) and an ethical subjectivist (because what is right and wrong is determined by mental states, i.e. what people think is right and wrong).
However, someone who thinks that what is right and wrong is whatever a deity thinks is right or wrong would be a subjectivist (morality is based on mental states), but not a relativist (morality is the same for everyone). In contrast, someone who claims that to act ethically you must follow the laws of your country would be a relativist (morality is dependent on who you are), but not a subjectivist (morality is based on facts about the world, not mental states).
Moral relativism vs moral anti-realism
Depending on how a moral relativist position is constructed, it may or may not be independent of moral realism. Moral realists are committed to some version of the following three claims:
Semantic thesis: Moral statements have meaning, they express propositions, or are the kind of things that can be true or false.
Alethic thesis: Some moral propositions are true.
Metaphysical thesis: The metaphysical status of moral facts is robust and ordinary, not importantly different from other facts about the world.
While many moral relativists deny one or more of these claims, and therefore could be moral anti-realists, a denial is not required. A moral relativist who claims that you should act according to the laws in whatever country you are a citizen of, accepts all three claims: moral facts express propositions that can be true or false (you can see if a given action is against the law or not), some moral propositions are true (some actions abide by the laws in someone's country), and moral facts are ordinary (laws are not mental states, they are physical objects in the world). However, this view is a relativist one as it is dependent on the country you are a citizen of.
References
Bibliography
Guy Ankerl, Global Communication without Universal Civilization. vol I: Coexisting Contemporary Civilizations: Arabo-Muslim, Bharati, Chinese, and Western. (Geneva, INUPRESS, 2000. )
Joxe Azurmendi 1998: "The violence and the search for new values" in Euskal Herria krisian, (Elkar, 1999), pp. 11–116.
Kurt Baier, "Difficulties in the Emotive-Imperative Theory" in Paul W Taylor (editor): The Moral Judgement: Readings in Contemporary Meta-Ethics Englewood Cliffs, N.J.: Prentice-Hall, 1963
Ruth Benedict, Patterns of Culture (Mentor)
Panayot Butchvarov, "Skepticism in Ethics" (Bloomington and Indianapolis, Indiana University Press, 1989).
Ronald F. Duska, "What's the Point of a Business Ethics Course?", 1 Business Ethics Quarterly 335–352(1991), reprinted in Sterling Harwood, ed., Business as Ethical and Business as Usual (Belmont, CA: Wadsworth Publishing Co., 1996), pp. 11–21.
R.M. Hare, Sorting out Ethics (Oxford University Press)
Gilbert Harman & Judith Jarvis Thomson, Moral Relativism and Moral Objectivity (Blackwell Publishing), 1996.
Sterling Harwood, "Taking Ethics Seriously -- Moral Relativism versus Moral Realism" in Sterling Harwood, ed., Business as Ethical and Business as Usual (Belmont, CA: Wadsworth Publishing Co., 1996), pp. 2–4.
Sterling Harwood, "Against MacIntyre's Relativistic Communitarianism" in Sterling Harwood, ed., Business as Ethical and Business as Usual (Belmont, CA: Wadsworth Publishing Co., 1996), pp. 5–10.
David Hume, An Enquiry Concerning the Principles of Morals, ed. Tom L. Beauchamp (Oxford University Press)
Steven Lukes, Moral Relativism, Picador, 2008.
G.E. Moore, Principia Ethica (Cambridge University Press)
Jean-Paul Sartre, "Existentialism is a Humanism" in Existentialism From Dostoevsky to Sartre, ed. by Walter Kaufmann (World Publishing Company)
Walter Terence Stace, The Concept of Morals (The Macmillan Company, 1937; reprinted 1975 by permission of Macmillan Publishing Co., Inc. (Macmillan Publishers)). See Chapters 1 and 2, entitled "Ethical Relativity", pp. 1–68.
Leo Strauss, The Rebirth of Classical Political Rationalism, ed. Thomas L. Pangle (University of Chicago Press)
Edward Westermarck, The Origin and Development of the Moral Ideas Macmillan, 1906.
Bernard Williams, Ethics and the Limits of Philosophy (Harvard University Press)
David B. Wong, Moral Relativity (Berkeley, CA: University of California Press, 1986), 248 pages.
Paul Julian. Minimal Truth, Moral Conflict and Metaethical Relativism. 2006.
External links
Moral Relativism entry in the Internet Encyclopedia of Philosophy
Living with Relativism
Relativism
Metaethics
Relativism
Postmodernism
Ethical theories | 0.774325 | 0.997062 | 0.77205 |
Normative science | In the applied sciences, normative science is a type of information that is developed, presented, or interpreted based on an assumed, usually unstated, preference for a particular outcome, policy or class of policies or outcomes. Regular or traditional science does not presuppose a policy preference, but normative science, by definition, does. Common examples of such policy preferences are arguments that pristine ecosystems are preferable to human altered ones, that native species are preferable to nonnative species, and that higher biodiversity is preferable to lower biodiversity.
In more general philosophical terms, normative science is a form of inquiry, typically involving a community of inquiry and its accumulated body of provisional knowledge, that seeks to discover good ways of achieving recognized aims, ends, goals, objectives, or purposes. Many political debates revolve around arguments over which of the many "good ways" shall be selected. For example, when presented as scientific information, words such as ecosystem health, biological integrity, and environmental degradation are typically examples of normative science because they each presuppose a policy preference and are therefore a type of policy advocacy.
See also
Descriptive science
Environmental policy
Fact–value distinction
Is–ought problem
Normative economics
Normative ethics
Policy advocacy
Truth
References
Philosophy of science | 0.798943 | 0.966083 | 0.771846 |
Sociology | Sociology is the scientific study of human society that focuses on society, human social behavior, patterns of social relationships, social interaction, and aspects of culture associated with everyday life. Regarded as a part of both the social sciences and humanities, sociology uses various methods of empirical investigation and critical analysis to develop a body of knowledge about social order and social change. Sociological subject matter ranges from micro-level analyses of individual interaction and agency to macro-level analyses of social systems and social structure. Applied sociological research may be applied directly to social policy and welfare, whereas theoretical approaches may focus on the understanding of social processes and phenomenological method.
Traditional focuses of sociology include social stratification, social class, social mobility, religion, secularization, law, sexuality, gender, and deviance. Recent studies have added socio-technical aspects of the digital divide as a new focus. As all spheres of human activity are affected by the interplay between social structure and individual agency, sociology has gradually expanded its focus to other subjects and institutions, such as health and the institution of medicine; economy; military; punishment and systems of control; the Internet; sociology of education; social capital; and the role of social activity in the development of scientific knowledge.
The range of social scientific methods has also expanded, as social researchers draw upon a variety of qualitative and quantitative techniques. The linguistic and cultural turns of the mid-20th century, especially, have led to increasingly interpretative, hermeneutic, and philosophical approaches towards the analysis of society. Conversely, the turn of the 21st century has seen the rise of new analytically, mathematically, and computationally rigorous techniques, such as agent-based modelling and social network analysis.
Social research has influence throughout various industries and sectors of life, such as among politicians, policy makers, and legislators; educators; planners; administrators; developers; business magnates and managers; social workers; non-governmental organizations; and non-profit organizations, as well as individuals interested in resolving social issues in general.
History
Sociological reasoning predates the foundation of the discipline itself. Social analysis has origins in the common stock of universal, global knowledge and philosophy, having been carried out from as far back as ancient comic poetry, which features social and political criticism, and the ancient Greek philosophers Socrates, Plato, and Aristotle. For instance, the origin of the survey can be traced back to at least the Domesday Book in 1086, while ancient philosophers such as Confucius wrote about the importance of social roles.
Medieval Arabic writings encompass a rich tradition that unveils early insights into the field of sociology. Some sources consider Ibn Khaldun, a 14th-century Muslim scholar from Tunisia, to have been the father of sociology, although there is no reference to his work in the writings of European contributors to modern sociology. Khaldun's Muqaddimah was considered to be amongst the first works to advance social-scientific reasoning on social cohesion and social conflict.
Etymology
The word sociology derives part of its name from the Latin word socius ('companion' or 'fellowship'). The suffix -logy ('the study of') comes from that of the Greek -λογία, derived from λόγος (lógos, 'word' or 'knowledge').
The term sociology was first coined in 1780 by the French essayist Emmanuel-Joseph Sieyès in an unpublished manuscript. Sociology was later defined independently by French philosopher of science Auguste Comte (1798–1857) in 1838 as a new way of looking at society. Comte had earlier used the term social physics, but it had been subsequently appropriated by others, most notably the Belgian statistician Adolphe Quetelet. Comte endeavored to unify history, psychology, and economics through the scientific understanding of social life. Writing shortly after the malaise of the French Revolution, he proposed that social ills could be remedied through sociological positivism, an epistemological approach outlined in the Course in Positive Philosophy (1830–1842), later included in A General View of Positivism (1848). Comte believed a positivist stage would mark the final era in the progression of human understanding, after conjectural theological and metaphysical phases. In observing the circular dependence of theory and observation in science, and having classified the sciences, Comte may be regarded as the first philosopher of science in the modern sense of the term.
Marx
Both Comte and Karl Marx set out to develop scientifically justified systems in the wake of European industrialization and secularization, informed by various key movements in the philosophies of history and science. Marx rejected Comtean positivism but in attempting to develop a "science of society" nevertheless came to be recognized as a founder of sociology as the word gained wider meaning. For Isaiah Berlin, even though Marx did not consider himself to be a sociologist, he may be regarded as the "true father" of modern sociology, "in so far as anyone can claim the title":

To have given clear and unified answers in familiar empirical terms to those theoretical questions which most occupied men's minds at the time, and to have deduced from them clear practical directives without creating obviously artificial links between the two, was the principal achievement of Marx's theory. The sociological treatment of historical and moral problems, which Comte and after him, Spencer and Taine, had discussed and mapped, became a precise and concrete study only when the attack of militant Marxism made its conclusions a burning issue, and so made the search for evidence more zealous and the attention to method more intense.
Spencer
Herbert Spencer was one of the most popular and influential 19th-century sociologists. It is estimated that he sold one million books in his lifetime, far more than any other sociologist at the time. So strong was his influence that many other 19th-century thinkers, including Émile Durkheim, defined their ideas in relation to his. Durkheim's Division of Labour in Society is to a large extent an extended debate with Spencer from whose sociology Durkheim borrowed extensively.
Also a notable biologist, Spencer coined the term survival of the fittest. While Marxian ideas defined one strand of sociology, Spencer was a critic of socialism, as well as a strong advocate for a laissez-faire style of government. His ideas were closely observed by conservative political circles, especially in the United States and England.
Foundations of the academic discipline
The first formal Department of Sociology in the world was established in 1892 by Albion Small—from the invitation of William Rainey Harper—at the University of Chicago. The American Journal of Sociology was founded shortly thereafter in 1895 by Small as well.
The institutionalization of sociology as an academic discipline, however, was chiefly led by Émile Durkheim, who developed positivism as a foundation for practical social research. While Durkheim rejected much of the detail of Comte's philosophy, he retained and refined its method, maintaining that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisting that they may retain the same objectivity, rationalism, and approach to causality. Durkheim set up the first European department of sociology at the University of Bordeaux in 1895, publishing his Rules of the Sociological Method (1895). For Durkheim, sociology could be described as the "science of institutions, their genesis and their functioning."
Durkheim's monograph Suicide (1897) is considered a seminal work in statistical analysis by contemporary sociologists. Suicide is a case study of variations in suicide rates among Catholic and Protestant populations, and served to distinguish sociological analysis from psychology or philosophy. It also marked a major contribution to the theoretical concept of structural functionalism. By carefully examining suicide statistics in different police districts, he attempted to demonstrate that Catholic communities have a lower suicide rate than that of Protestants, something he attributed to social (as opposed to individual or psychological) causes. He developed the notion of objective social facts to delineate a unique empirical object for the science of sociology to study. Through such studies he posited that sociology would be able to determine whether any given society is healthy or pathological, and seek social reform to negate organic breakdown, or "social anomie".
Sociology quickly evolved as an academic response to the perceived challenges of modernity, such as industrialization, urbanization, secularization, and the process of rationalization. The field predominated in continental Europe, with British anthropology and statistics generally following on a separate trajectory. By the turn of the 20th century, however, many theorists were active in the English-speaking world. Few early sociologists were confined strictly to the subject, interacting also with economics, jurisprudence, psychology and philosophy, with theories being appropriated in a variety of different fields. Since its inception, sociological epistemology, methods, and frames of inquiry, have significantly expanded and diverged.
Durkheim, Marx, and the German theorist Max Weber are typically cited as the three principal architects of sociology. Herbert Spencer, William Graham Sumner, Lester F. Ward, W. E. B. Du Bois, Vilfredo Pareto, Alexis de Tocqueville, Werner Sombart, Thorstein Veblen, Ferdinand Tönnies, Georg Simmel, Jane Addams and Karl Mannheim are often included on academic curricula as founding theorists. Curricula also may include Charlotte Perkins Gilman, Marianne Weber, Harriet Martineau, and Friedrich Engels as founders of the feminist tradition in sociology. Each key figure is associated with a particular theoretical perspective and orientation.
Further developments
The first college course entitled "Sociology" was taught in the United States at Yale in 1875 by William Graham Sumner. In 1883, Lester F. Ward, who later became the first president of the American Sociological Association (ASA), published Dynamic Sociology—Or Applied social science as based upon statical sociology and the less complex sciences, attacking the laissez-faire sociology of Herbert Spencer and Sumner. Ward's 1,200-page book was used as core material in many early American sociology courses. In 1890, the oldest continuing American course in the modern tradition began at the University of Kansas, lectured by Frank W. Blackmar. The Department of Sociology at the University of Chicago was established in 1892 by Albion Small, who also published the first sociology textbook: An introduction to the study of society. George Herbert Mead and Charles Cooley, who had met at the University of Michigan in 1891 (along with John Dewey), moved to Chicago in 1894. Their influence gave rise to social psychology and the symbolic interactionism of the modern Chicago School. The American Journal of Sociology was founded in 1895, followed by the ASA in 1905.
The sociological canon of classics with Durkheim and Max Weber at the top owes its existence in part to Talcott Parsons, who is largely credited with introducing both to American audiences. Parsons consolidated the sociological tradition and set the agenda for American sociology at the point of its fastest disciplinary growth. Sociology in the United States was less historically influenced by Marxism than its European counterpart, and to this day broadly remains more statistical in its approach.
The first sociology department established in the United Kingdom was at the London School of Economics and Political Science (home of the British Journal of Sociology) in 1904. Leonard Trelawny Hobhouse and Edvard Westermarck became the lecturers in the discipline at the University of London in 1907. Harriet Martineau, an English translator of Comte, has been cited as the first female sociologist. In 1909, the German Sociological Association was founded by Ferdinand Tönnies and Max Weber, among others. Weber established the first department in Germany at the Ludwig Maximilian University of Munich in 1919, having presented an influential new antipositivist sociology. In 1920, Florian Znaniecki set up the first department in Poland. The Institute for Social Research at the University of Frankfurt (later to become the Frankfurt School of critical theory) was founded in 1923. International co-operation in sociology began in 1893, when René Worms founded the Institut International de Sociologie, an institution later eclipsed by the much larger International Sociological Association (ISA), founded in 1949.
Theoretical traditions
Positivism and anti-positivism
Positivism
The overarching methodological principle of positivism is to conduct sociology in broadly the same manner as natural science. An emphasis on empiricism and the scientific method seeks to provide a tested foundation for sociological research, based on the assumption that the only authentic knowledge is scientific knowledge and that such knowledge can arrive only by positive affirmation through scientific methodology.
The term has long since ceased to carry this meaning; there are no fewer than twelve distinct epistemologies that are referred to as positivism. Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a pejorative term by being mistakenly linked with a theoretical empiricism. The extent of antipositivist criticism has also diverged, with many rejecting the scientific method and others only seeking to amend it to reflect 20th-century developments in the philosophy of science. However, positivism (broadly understood as a scientific approach to the study of society) remains dominant in contemporary sociology, especially in the United States.
Loïc Wacquant distinguishes three major strains of positivism: Durkheimian, Logical, and Instrumental. None of these are the same as that set forth by Comte, who was unique in advocating such a rigid (and perhaps optimistic) version. While Émile Durkheim rejected much of the detail of Comte's philosophy, he retained and refined its method. Durkheim maintained that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisted that they should retain the same objectivity, rationalism, and approach to causality. He developed the notion of objective sui generis "social facts" to serve as unique empirical objects for the science of sociology to study.
The variety of positivism that remains dominant today is termed instrumental positivism. This approach eschews epistemological and metaphysical concerns (such as the nature of social facts) in favour of methodological clarity, replicability, reliability and validity. This positivism is more or less synonymous with quantitative research, and so only resembles older positivism in practice. Since it carries no explicit philosophical commitment, its practitioners may not belong to any particular school of thought. Modern sociology of this type is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analysing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.
Antipositivism
The German philosopher Hegel criticised traditional empiricist epistemology, which he rejected as uncritical, and determinism, which he viewed as overly mechanistic. Karl Marx's methodology borrowed from Hegelian dialecticism but also involved a rejection of positivism in favour of critical analysis, seeking to supplement the empirical acquisition of "facts" with the elimination of illusions. He maintained that appearances need to be critiqued rather than simply documented. Early hermeneuticians such as Wilhelm Dilthey pioneered the distinction between natural and social science ('Geisteswissenschaft'). Various neo-Kantian philosophers, phenomenologists and human scientists further theorized how the analysis of the social world differs from that of the natural world due to the irreducibly complex aspects of human society, culture, and being.
In the Italian context of the development of the social sciences, and of sociology in particular, there was opposition to the first foundation of the discipline, sustained by speculative philosophy in accordance with the antiscientific tendencies matured by the critique of positivism and evolutionism, so that a progressist tradition struggled to establish itself.
At the turn of the 20th century, the first generation of German sociologists formally introduced methodological anti-positivism, proposing that research should concentrate on human cultural norms, values, symbols, and social processes viewed from a resolutely subjective perspective. Max Weber argued that sociology may be loosely described as a science as it is able to identify causal relationships of human "social action"—especially among "ideal types", or hypothetical simplifications of complex social phenomena. As a non-positivist, however, Weber sought relationships that are not as "historical, invariant, or generalisable" as those pursued by natural scientists. Fellow German sociologist, Ferdinand Tönnies, theorised on two crucial abstract concepts with his work on "gemeinschaft and gesellschaft". Tönnies marked a sharp line between the realm of concepts and the reality of social action: the first must be treated axiomatically and in a deductive way ("pure sociology"), whereas the second empirically and inductively ("applied sociology").
Both Weber and Georg Simmel pioneered the "Verstehen" (or 'interpretative') method in social science; a systematic process by which an outside observer attempts to relate to a particular cultural group, or indigenous people, on their own terms and from their own point of view. Through the work of Simmel, in particular, sociology acquired a possible character beyond positivist data-collection or grand, deterministic systems of structural law. Relatively isolated from the sociological academy throughout his lifetime, Simmel presented idiosyncratic analyses of modernity more reminiscent of the phenomenological and existential writers than of Comte or Durkheim, paying particular concern to the forms of, and possibilities for, social individuality. His sociology engaged in a neo-Kantian inquiry into the limits of perception, asking 'What is society?' in a direct allusion to Kant's question 'What is nature?'
Classical theory
The contemporary discipline of sociology is theoretically multi-paradigmatic in line with the contentions of classical social theory. Randall Collins' well-cited survey of sociological theory retroactively labels various theorists as belonging to four theoretical traditions: Functionalism, Conflict, Symbolic Interactionism, and Utilitarianism.
Accordingly, modern sociological theory predominantly descends from functionalist (Durkheim) and conflict (Marx and Weber) approaches to social structure, as well as from symbolic-interactionist approaches to social interaction, such as micro-level structural (Simmel) and pragmatist (Mead, Cooley) perspectives. Utilitarianism (also known as rational choice or social exchange), although often associated with economics, is an established tradition within sociological theory.
Lastly, as argued by Raewyn Connell, a tradition that is often forgotten is that of Social Darwinism, which applies the logic of Darwinian biological evolution to people and societies. This tradition often aligns with classical functionalism and was once the dominant theoretical stance in American sociology, associated with several founders of sociology, primarily Herbert Spencer, Lester F. Ward, and William Graham Sumner.
Contemporary sociological theory retains traces of each of these traditions and they are by no means mutually exclusive.
Functionalism
A broad historical paradigm in both sociology and anthropology, functionalism addresses the social structure—referred to as "social organization" by the classical theorists—with respect to the whole as well as the necessary function of the whole's constituent elements. A common analogy (popularized by Herbert Spencer) is to regard norms and institutions as 'organs' that work towards the proper functioning of the entire 'body' of society. The perspective was implicit in the original sociological positivism of Comte but was theorized in full by Durkheim, again with respect to observable, structural laws.
Functionalism also has an anthropological basis in the work of theorists such as Marcel Mauss, Bronisław Malinowski, and Radcliffe-Brown. It is in the latter's specific usage that the prefix "structural" emerged. Classical functionalist theory is generally united by its tendency towards biological analogy and notions of social evolutionism, in that the basic form of society would increase in complexity and those forms of social organization that promoted solidarity would eventually overcome social disorganization. As Giddens states:

Functionalist thought, from Comte onwards, has looked particularly towards biology as the science providing the closest and most compatible model for social science. Biology has been taken to provide a guide to conceptualizing the structure and the function of social systems and to analyzing processes of evolution via mechanisms of adaptation. Functionalism strongly emphasizes the pre-eminence of the social world over its individual parts (i.e. its constituent actors, human subjects).
Conflict theory
Functionalist theories emphasize "cohesive systems" and are often contrasted with "conflict theories", which critique the overarching socio-political system or emphasize the inequality between particular groups. Well-known passages from Durkheim and Marx epitomize the political, as well as theoretical, disparities between functionalist and conflict thought respectively.
Symbolic interactionism
Symbolic interaction—often associated with interactionism, phenomenology, dramaturgy, interpretivism—is a sociological approach that places emphasis on subjective meanings and the empirical unfolding of social processes, generally accessed through micro-analysis. This tradition emerged in the Chicago School of the 1920s and 1930s, which, prior to World War II, "had been the center of sociological research and graduate study." The approach focuses on creating a framework for building a theory that sees society as the product of the everyday interactions of individuals. Society is nothing more than the shared reality that people construct as they interact with one another. This approach sees people interacting in countless settings using symbolic communications to accomplish the tasks at hand. Therefore, society is a complex, ever-changing mosaic of subjective meanings. Some critics of this approach argue that it only looks at what is happening in a particular social situation, and disregards the effects that culture, race or gender (i.e. social-historical structures) may have in that situation. Some important sociologists associated with this approach include Max Weber, George Herbert Mead, Erving Goffman, George Homans, and Peter Blau. It is also in this tradition that the radical-empirical approach of ethnomethodology emerges from the work of Harold Garfinkel.
Utilitarianism
Utilitarianism is often referred to as exchange theory or rational choice theory in the context of sociology. This tradition tends to privilege the agency of individual rational actors and assumes that within interactions individuals always seek to maximize their own self-interest. As argued by Josh Whitford, rational actors are assumed to have four basic elements:
"a knowledge of alternatives;"
"a knowledge of, or beliefs about the consequences of the various alternatives;"
"an ordering of preferences over outcomes;" and
"a decision rule, to select among the possible alternatives"
Exchange theory is specifically attributed to the work of George C. Homans, Peter Blau and Richard Emerson. Organizational sociologists James G. March and Herbert A. Simon noted that an individual's rationality is bounded by the context or organizational setting. The utilitarian perspective in sociology was, most notably, revitalized in the late 20th century by the work of former ASA president James Coleman.
20th-century social theory
Following the decline of theories of sociocultural evolution in the United States, the interactionist thought of the Chicago School dominated American sociology. As Anselm Strauss describes, "we didn't think symbolic interaction was a perspective in sociology; we thought it was sociology." Moreover, philosophical and psychological pragmatism grounded this tradition. After World War II, mainstream sociology shifted to the survey-research of Paul Lazarsfeld at Columbia University and the general theorizing of Pitirim Sorokin, followed by Talcott Parsons at Harvard University. Ultimately, "the failure of the Chicago, Columbia, and Wisconsin [sociology] departments to produce a significant number of graduate students interested in and committed to general theory in the years 1936–45 was to the advantage of the Harvard department." As Parsons began to dominate general theory, his work primarily referenced European sociology—almost entirely omitting citations of both the American tradition of sociocultural-evolution as well as pragmatism. In addition to Parsons' revision of the sociological canon (which included Marshall, Pareto, Weber and Durkheim), the lack of theoretical challenges from other departments nurtured the rise of the Parsonian structural-functionalist movement, which reached its crescendo in the 1950s, but by the 1960s was in rapid decline.
By the 1980s, most functionalist perspectives in Europe had broadly been replaced by conflict-oriented approaches, and to many in the discipline, functionalism was considered "as dead as a dodo". According to Giddens:

The orthodox consensus terminated in the late 1960s and 1970s as the middle ground shared by otherwise competing perspectives gave way and was replaced by a baffling variety of competing perspectives. This third 'generation' of social theory includes phenomenologically inspired approaches, critical theory, ethnomethodology, symbolic interactionism, structuralism, post-structuralism, and theories written in the tradition of hermeneutics and ordinary language philosophy.
Pax Wisconsana
While some conflict approaches also gained popularity in the United States, the mainstream of the discipline instead shifted to a variety of empirically oriented middle-range theories with no single overarching, or "grand", theoretical orientation. John Levi Martin refers to this "golden age of methodological unity and theoretical calm" as the Pax Wisconsana, as it reflected the composition of the sociology department at the University of Wisconsin–Madison: numerous scholars working on separate projects with little contention. Omar Lizardo describes the pax wisconsana as "a Midwestern flavored, Mertonian resolution of the theory/method wars in which [sociologists] all agreed on at least two working hypotheses: (1) grand theory is a waste of time; [and] (2) good theory has to be good to think with or goes in the trash bin." Despite the aversion to grand theory in the latter half of the 20th century, several new traditions have emerged that propose various syntheses: structuralism, post-structuralism, cultural sociology and systems theory. Some sociologists have called for a return to 'grand theory' to combat the rise of scientific and pragmatist influences within the tradition of sociological thought (see Duane Rousselle).
Structuralism
The structuralist movement originated primarily from the work of Durkheim as interpreted by two European scholars: Anthony Giddens, a sociologist, whose theory of structuration draws on the linguistic theory of Ferdinand de Saussure; and Claude Lévi-Strauss, an anthropologist. In this context, 'structure' does not refer to 'social structure', but to the semiotic understanding of human culture as a system of signs. One may delineate four central tenets of structuralism:
Structure is what determines the structure of a whole.
Structuralists believe that every system has a structure.
Structuralists are interested in 'structural' laws that deal with coexistence rather than changes.
Structures are the 'real things' beneath the surface or the appearance of meaning.
The second tradition of structuralist thought, contemporaneous with Giddens, emerges from the American School of social network analysis in the 1970s and 1980s, spearheaded by the Harvard Department of Social Relations led by Harrison White and his students. This tradition of structuralist thought argues that, rather than semiotics, social structure is networks of patterned social relations. And, rather than Levi-Strauss, this school of thought draws on the notions of structure as theorized by Levi-Strauss' contemporary anthropologist, Radcliffe-Brown. Some refer to this as "network structuralism", and equate it to "British structuralism" as opposed to the "French structuralism" of Levi-Strauss.
Post-structuralism
Post-structuralist thought has tended to reject 'humanist' assumptions in the construction of social theory. Michel Foucault provides an important critique in his Archaeology of the Human Sciences, though Habermas (1986) and Rorty (1986) have both argued that Foucault merely replaces one such system of thought with another. The dialogue between these intellectuals highlights a trend in recent years for certain schools of sociology and philosophy to intersect. The anti-humanist position has been associated with "postmodernism", a term used in specific contexts to describe an era or phenomena, but occasionally construed as a method.
Central theoretical problems
Overall, there is a strong consensus regarding the central problems of sociological theory, which are largely inherited from the classical theoretical traditions. This consensus concerns how to link, transcend, or cope with the following "big three" dichotomies:
subjectivity and objectivity, which deal with knowledge;
structure and agency, which deal with action;
and synchrony and diachrony, which deal with time.
Lastly, sociological theory often grapples with the problem of integrating or transcending the divide between micro, meso, and macro-scale social phenomena, which is a subset of all three central problems.
Subjectivity and objectivity
The problem of subjectivity and objectivity can be divided into two parts: a concern over the general possibilities of social actions, and the specific problem of social scientific knowledge. In the former, the subjective is often equated (though not necessarily) with the individual, and the individual's intentions and interpretations of the objective. The objective is often considered any public or external action or outcome, on up to society writ large. A primary question for social theorists, then, is how knowledge reproduces along the chain of subjective-objective-subjective, that is to say: how is intersubjectivity achieved? While, historically, qualitative methods have attempted to tease out subjective interpretations, quantitative survey methods also attempt to capture individual subjectivities. Qualitative methods take an approach to objective description known as in situ, meaning that descriptions must include appropriate contextual information in order to be understood.
The latter concern with scientific knowledge results from the fact that a sociologist is part of the very object they seek to explain, as Bourdieu has emphasized.
Structure and agency
Structure and agency, sometimes referred to as determinism versus voluntarism, form an enduring ontological debate in social theory: "Do social structures determine an individual's behaviour or does human agency?" In this context, agency refers to the capacity of individuals to act independently and make free choices, whereas structure relates to factors that limit or affect the choices and actions of individuals (e.g. social class, religion, gender, ethnicity, etc.). Discussions over the primacy of either structure or agency relate to the core of sociological epistemology (i.e. "what is the social world made of?", "what is a cause in the social world, and what is an effect?"). A perennial question within this debate is that of "social reproduction": how are structures (specifically, structures producing inequality) reproduced through the choices of individuals?
Synchrony and diachrony
Synchrony and diachrony (or statics and dynamics) within social theory are terms that refer to a distinction that emerged through the work of Levi-Strauss who inherited it from the linguistics of Ferdinand de Saussure. Synchrony slices moments of time for analysis, thus it is an analysis of static social reality. Diachrony, on the other hand, attempts to analyse dynamic sequences. Following Saussure, synchrony would refer to social phenomena as a static concept like a language, while diachrony would refer to unfolding processes like actual speech. In Anthony Giddens' introduction to Central Problems in Social Theory, he states that, "in order to show the interdependence of action and structure…we must grasp the time space relations inherent in the constitution of all social interaction." And like structure and agency, time is integral to discussion of social reproduction.
In terms of sociology, historical sociology is often better positioned to analyse social life as diachronic, while survey research takes a snapshot of social life and is thus better equipped to understand social life as synchronic. Some argue that the synchrony of social structure is a methodological perspective rather than an ontological claim. Nonetheless, the problem for theory is how to integrate the two manners of recording and thinking about social data.
Research methodology
Sociological research methods may be divided into two broad, though often supplementary, categories:
Qualitative designs emphasize understanding of social phenomena through direct observation, communication with participants, or analysis of texts, and may stress contextual and subjective accuracy over generality.
Quantitative designs approach social phenomena through quantifiable evidence, and often rely on statistical analysis of many cases (or across intentionally designed treatments in an experiment) to establish valid and reliable general claims.
Sociologists are often divided into camps of support for particular research techniques. These disputes relate to the epistemological debates at the historical core of social theory. While very different in many aspects, both qualitative and quantitative approaches involve a systematic interaction between theory and data. Quantitative methodologies hold the dominant position in sociology, especially in the United States. In the discipline's two most cited journals, quantitative articles have historically outnumbered qualitative ones by a factor of two. (Most articles published in the largest British journal, on the other hand, are qualitative.) Most textbooks on the methodology of social research are written from the quantitative perspective, and the very term "methodology" is often used synonymously with "statistics". Practically all sociology PhD programmes in the United States require training in statistical methods. The work produced by quantitative researchers is also deemed more 'trustworthy' and 'unbiased' by the general public, though this judgment continues to be challenged by antipositivists.
The choice of method often depends largely on what the researcher intends to investigate. For example, a researcher concerned with drawing a statistical generalization across an entire population may administer a survey questionnaire to a representative sample population. By contrast, a researcher who seeks full contextual understanding of an individual's social actions may choose ethnographic participant observation or open-ended interviews. Studies will commonly combine, or 'triangulate', quantitative and qualitative methods as part of a 'multi-strategy' design. For instance, a quantitative study may be performed to obtain statistical patterns on a target sample, and then combined with a qualitative interview to determine the play of agency.
Sampling
Quantitative methods are often used to ask questions about a population that is very large, making a census or a complete enumeration of all the members in that population infeasible. A 'sample' then forms a manageable subset of a population. In quantitative research, statistics are used to draw inferences from this sample regarding the population as a whole. The process of selecting a sample is referred to as 'sampling'. While it is usually best to sample randomly, concern with differences between specific subpopulations sometimes calls for stratified sampling. Conversely, the impossibility of random sampling sometimes necessitates nonprobability sampling, such as convenience sampling or snowball sampling.
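To make the contrast concrete, here is a minimal Python sketch of simple random sampling versus proportional stratified sampling. The population, the "urban"/"rural" strata, and the target sample size of 500 are invented purely for illustration.

```python
import random
from collections import defaultdict

# Hypothetical population: each member is (id, stratum); the stratum
# could be a region, age band, or class position.
population = [(i, random.choice(["urban", "rural"])) for i in range(10_000)]
target = 500

# Simple random sample: every member has an equal chance of selection.
simple_sample = random.sample(population, k=target)

# Stratified sample: group by stratum, then sample each subpopulation
# in proportion to its size so smaller strata are reliably represented.
strata = defaultdict(list)
for member in population:
    strata[member[1]].append(member)

stratified_sample = []
for members in strata.values():
    share = round(target * len(members) / len(population))  # proportional allocation
    stratified_sample.extend(random.sample(members, k=min(share, len(members))))
# Note: rounding can leave the stratified total one or two off the target.
```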
Methods
The following list of research methods is neither exclusive nor exhaustive:
Archival research (or the Historical method): Draws upon the secondary data located in historical archives and records, such as biographies, memoirs, journals, and so on.
Content analysis: The content of interviews and other texts is systematically analysed. Often data is 'coded' as a part of the 'grounded theory' approach using qualitative data analysis (QDA) software, such as Atlas.ti, MAXQDA, NVivo, or QDA Miner.
Experimental research: The researcher isolates a single social process and reproduces it in a laboratory (for example, by creating a situation where unconscious sexist judgements are possible), seeking to determine whether or not certain social variables can cause, or depend upon, other variables (for instance, seeing if people's feelings about traditional gender roles can be manipulated by the activation of contrasting gender stereotypes). Participants are randomly assigned to different groups that either serve as controls—acting as reference points because they are tested with regard to the dependent variable, albeit without having been exposed to any independent variables of interest—or receive one or more treatments. Randomization allows the researcher to be sure that any resulting differences between groups are the result of the treatment. (A brief illustrative sketch of random assignment appears after this list.)
Longitudinal study: An extensive examination of a specific person or group over a long period of time.
Observation: Using data from the senses, the researcher records information about social phenomena or behaviour. Observation techniques may or may not feature participation. In participant observation, the researcher goes into the field (e.g. a community or a place of work), and participates in the activities of the field for a prolonged period of time in order to acquire a deep understanding of it. Data acquired through these techniques may be analysed either quantitatively or qualitatively. In observational research, a sociologist might, for example, study global warming in a sparsely populated part of the world.
Program Evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders often want to know whether the programs they are funding, implementing, voting for, or objecting to are producing the intended effect. While program evaluation first focuses on this definition, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, if there are unintended outcomes, and whether the program goals are appropriate and useful.
Survey research: The researcher gathers data using interviews, questionnaires, or similar feedback from a set of people sampled from a particular population of interest. Survey items from an interview or questionnaire may be open-ended or closed-ended. Data from surveys is usually analysed statistically on a computer.
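Returning to the entry on experimental research above: random assignment is what justifies attributing group differences to the treatment. The following minimal Python sketch shows the bare mechanics; the participant identifiers and the equal group size of 20 are invented for illustration.

```python
import random

# Hypothetical participant pool.
participants = [f"P{i:03d}" for i in range(40)]

# Shuffle, then split: every participant is equally likely to land in
# either group, so pre-existing differences balance out in expectation.
random.shuffle(participants)
control, treatment = participants[:20], participants[20:]

# The treatment group would now be exposed to the independent variable
# (e.g. a contrasting gender stereotype prime), the control group not,
# and both would be measured on the same dependent variable.
```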
Computational sociology
Sociologists increasingly draw upon computationally intensive methods to analyse and model social phenomena. Using computer simulations, artificial intelligence, text mining, complex statistical methods, and new analytic approaches like social network analysis and social sequence analysis, computational sociology develops and tests theories of complex social processes through bottom-up modelling of social interactions.
Although the subject matter and methodologies in social science differ from those in natural science or computer science, several of the approaches used in contemporary social simulation originated from fields such as physics and artificial intelligence. By the same token, some of the approaches that originated in computational sociology have been imported into the natural sciences, such as measures of network centrality from the fields of social network analysis and network science. In relevant literature, computational sociology is often related to the study of social complexity. Social complexity concepts such as complex systems, non-linear interconnection among macro and micro process, and emergence, have entered the vocabulary of computational sociology. A practical and well-known example is the construction of a computational model in the form of an "artificial society", by which researchers can analyse the structure of a social system.
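As a small illustration of the network measures mentioned above, the following Python sketch computes two standard centrality scores with the widely used networkx library; the toy friendship network is invented for illustration.

```python
import networkx as nx

# A toy friendship network: nodes are actors, edges are ties.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cem"), ("Ben", "Cem"),
    ("Cem", "Dia"), ("Dia", "Eli"),
])

# Degree centrality: the share of other actors each actor is tied to.
print(nx.degree_centrality(G))       # Cem scores highest (3 of 4 possible ties)

# Betweenness centrality: how often an actor lies on shortest paths,
# a common proxy for brokerage between otherwise disconnected groups.
print(nx.betweenness_centrality(G))
```

In computational models of social complexity, measures like these are typically recomputed as simulated ties form and dissolve, linking micro-level interaction to macro-level structure.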
Subfields
Culture
Sociologists' approach to culture can be divided into "sociology of culture" and "cultural sociology"—terms which are similar, though not entirely interchangeable. Sociology of culture is an older term, and considers some topics and objects as more or less "cultural" than others. Conversely, cultural sociology sees all social phenomena as inherently cultural. Sociology of culture often attempts to explain certain cultural phenomena as a product of social processes, while cultural sociology sees culture as a potential explanation of social phenomena.
For Simmel, culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history." While early theorists such as Durkheim and Mauss were influential in cultural anthropology, sociologists of culture are generally distinguished by their concern for modern (rather than primitive or ancient) society. Cultural sociology often involves the hermeneutic analysis of words, artefacts and symbols, or ethnographic interviews. However, some sociologists employ historical-comparative or quantitative techniques in the analysis of culture, as in the work of Weber and Bourdieu. The subfield is sometimes allied with critical theory in the vein of Theodor W. Adorno, Walter Benjamin, and other members of the Frankfurt School. Loosely distinct from the sociology of culture is the field of cultural studies. Birmingham School theorists such as Richard Hoggart and Stuart Hall questioned the division between "producers" and "consumers" evident in earlier theory, emphasizing the reciprocity in the production of texts. Cultural studies aims to examine its subject matter in terms of cultural practices and their relation to power. For example, a study of a subculture (e.g. white working class youth in London) would consider the social practices of the group as they relate to the dominant class. The "cultural turn" of the 1960s ultimately placed culture much higher on the sociological agenda.
Art, music and literature
Sociology of literature, film, and art is a subset of the sociology of culture. This field studies the social production of artistic objects and its social implications. A notable example is Pierre Bourdieu's Les Règles de L'Art: Genèse et Structure du Champ Littéraire (1992). None of the founding fathers of sociology produced a detailed study of art, but they did develop ideas that were subsequently applied to literature by others. Marx's theory of ideology was directed at literature by Pierre Macherey, Terry Eagleton and Fredric Jameson. Weber's theory of modernity as cultural rationalization, which he applied to music, was later applied to all the arts, literature included, by Frankfurt School writers such as Theodor Adorno and Jürgen Habermas. Durkheim's view of sociology as the study of externally defined social facts was redirected towards literature by Robert Escarpit. Bourdieu's own work is clearly indebted to Marx, Weber and Durkheim.
Criminality, deviance, law and punishment
Criminologists analyse the nature, causes, and control of criminal activity, drawing upon methods across sociology, psychology, and the behavioural sciences. The sociology of deviance focuses on actions or behaviours that violate norms, including both infringements of formally enacted rules (e.g., crime) and informal violations of cultural norms. It is the remit of sociologists to study why these norms exist; how they change over time; and how they are enforced. Social disorganization refers to the condition in which broader social systems lead to violations of norms. For instance, Robert K. Merton produced a typology of deviance, which includes both individual and system level causal explanations of deviance.
Sociology of law
The study of law played a significant role in the formation of classical sociology. Durkheim famously described law as the "visible symbol" of social solidarity. The sociology of law refers to both a sub-discipline of sociology and an approach within the field of legal studies. Sociology of law is a diverse field of study that examines the interaction of law with other aspects of society, such as the development of legal institutions and the effect of laws on social change and vice versa. For example, an influential recent work in the field relies on statistical analyses to argue that the increase in incarceration in the US over the last 30 years is due to changes in law and policing and not to an increase in crime; and that this increase has significantly contributed to the persistence of racial stratification.
Communications and information technologies
The sociology of communications and information technologies includes "the social aspects of computing, the Internet, new media, computer networks, and other communication and information technologies."
Internet and digital media
The Internet is of interest to sociologists in various ways, most practically as a tool for research and as a discussion platform. The sociology of the Internet in the broad sense concerns the analysis of online communities (e.g. newsgroups, social networking sites) and virtual worlds, meaning that there is often overlap with community sociology. Online communities may be studied statistically through network analysis or interpreted qualitatively through virtual ethnography. Moreover, organizational change is catalysed through new media, thereby influencing social change at-large, perhaps forming the framework for a transformation from an industrial to an informational society. One notable text is Manuel Castells' The Internet Galaxy—the title of which forms an inter-textual reference to Marshall McLuhan's The Gutenberg Galaxy. Closely related to the sociology of the Internet is digital sociology, which expands the scope of study to address not only the internet but also the impact of the other digital media and devices that have emerged since the first decade of the twenty-first century.
Media
As with cultural studies, media study is a distinct discipline that emerged from the convergence of sociology and other social sciences and humanities, in particular literary criticism and critical theory. Though neither the production process nor the critique of aesthetic forms is in the remit of sociologists, analyses of socializing factors, such as ideological effects and audience reception, stem from sociological theory and method. Thus the 'sociology of the media' is not a subdiscipline per se, but the media is a common and often indispensable topic.
Economic sociology
The term "economic sociology" was first used by William Stanley Jevons in 1879, later to be coined in the works of Durkheim, Weber, and Simmel between 1890 and 1920. Economic sociology arose as a new approach to the analysis of economic phenomena, emphasizing class relations and modernity as a philosophical concept. The relationship between capitalism and modernity is a salient issue, perhaps best demonstrated in Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Simmel's The Philosophy of Money (1900). The contemporary period of economic sociology, also known as new economic sociology, was consolidated by the 1985 work of Mark Granovetter titled "Economic Action and Social Structure: The Problem of Embeddedness". This work elaborated the concept of embeddedness, which states that economic relations between individuals or firms take place within existing social relations (and are thus structured by these relations as well as the greater social structures of which those relations are a part). Social network analysis has been the primary methodology for studying this phenomenon. Granovetter's theory of the strength of weak ties and Ronald Burt's concept of structural holes are two of the best known theoretical contributions of this field.
Work, employment, and industry
The sociology of work, or industrial sociology, examines "the direction and implications of trends in technological change, globalization, labour markets, work organization, managerial practices and employment relations; the extent to which these trends are intimately related to changing patterns of inequality in modern societies and to the changing experiences of individuals and families; and the ways in which workers challenge, resist and make their own contributions to the patterning of work and shaping of work institutions."
Education
The sociology of education is the study of how educational institutions determine social structures, experiences, and other outcomes. It is particularly concerned with the schooling systems of modern industrial societies. A classic 1966 study in this field by James Coleman, known as the "Coleman Report", analysed the performance of over 150,000 students and found that student background and socioeconomic status are much more important in determining educational outcomes than are measured differences in school resources (i.e. per pupil spending). The controversy over "school effects" ignited by that study has continued to this day. The study also found that socially disadvantaged black students profited from schooling in racially mixed classrooms, and thus served as a catalyst for desegregation busing in American public schools.
Environment
Environmental sociology is the study of human interactions with the natural environment, typically emphasizing human dimensions of environmental problems, social impacts of those problems, and efforts to resolve them. As with other sub-fields of sociology, scholarship in environmental sociology may be at one or multiple levels of analysis, from global (e.g. world-systems) to local, societal to individual. Attention is paid also to the processes by which environmental problems become defined and known to humans. As argued by notable environmental sociologist John Bellamy Foster, the predecessor to modern environmental sociology is Marx's analysis of the metabolic rift, which influenced contemporary thought on sustainability. Environmental sociology is often interdisciplinary and overlaps with the sociology of risk, rural sociology and the sociology of disaster.
Human ecology
Human ecology deals with the interdisciplinary study of the relationship between humans and their natural, social, and built environments. In addition to environmental sociology, this field overlaps with architectural sociology, urban sociology, and to some extent visual sociology. In turn, visual sociology—which is concerned with all visual dimensions of social life—overlaps with media studies in that it uses photography, film and other technologies of media.
Social pre-wiring
Social pre-wiring deals with the study of fetal social behavior and social interactions in a multi-fetal environment. Specifically, social pre-wiring refers to the ontogeny of social interaction, informally described as being "wired to be social". The theory asks whether there is a propensity towards socially oriented action already present before birth. Research in the theory concludes that newborns are born into the world with a unique genetic wiring to be social.
Circumstantial evidence supporting the social pre-wiring hypothesis can be revealed when examining newborns' behavior. Newborns, not even hours after birth, have been found to display a preparedness for social interaction. This preparedness is expressed in ways such as their imitation of facial gestures. This observed behavior cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit to some extent social behavior and identity through genetics.
Principal evidence for this theory comes from examining twin pregnancies. The main argument is that if there are social behaviors that are inherited and developed before birth, then one should expect twin foetuses to engage in some form of social interaction before they are born. To test this, ten foetuses were analyzed over a period of time using ultrasound techniques. Using kinematic analysis, the experiment found that the twin foetuses interacted with each other for longer periods and more often as the pregnancies went on. The researchers concluded that the movements between the co-twins were not accidental but specifically aimed.
The researchers took these results as support for the social pre-wiring hypothesis: the central advance of the study is the demonstration that "social actions" are already performed in the second trimester of gestation. Starting from the 14th week of gestation, twin foetuses plan and execute movements specifically aimed at the co-twin. These findings push the presumed emergence of social behavior back to before birth: when the context enables it, as in the case of twin foetuses, other-directed actions are not only possible but predominant over self-directed actions.
Family, gender, and sexuality
Family, gender and sexuality form a broad area of inquiry studied in many sub-fields of sociology. A family is a group of people who are related by kinship ties: relations of blood, marriage, civil partnership or adoption. The family unit is one of the most important social institutions, found in some form in nearly all known societies. It is the basic unit of social organization and plays a key role in socializing children into the culture of their society. The sociology of the family examines the family, as an institution and unit of socialization, with special concern for the comparatively modern historical emergence of the nuclear family and its distinct gender roles. The notion of "childhood" is also significant. As one of the more basic institutions to which one may apply sociological perspectives, the sociology of the family is a common component of introductory academic curricula. Feminist sociology, on the other hand, is a normative sub-field that observes and critiques the cultural categories of gender and sexuality, particularly with respect to power and inequality. The primary concern of feminist theory is the patriarchy and the systematic oppression of women apparent in many societies, both at the level of small-scale interaction and in terms of the broader social structure. Feminist sociology also analyses how gender interlocks with race and class to produce and perpetuate social inequalities. "How to account for the differences in definitions of femininity and masculinity and in sex role across different societies and historical periods" is also a concern.
Health, illness, and the body
The sociology of health and illness focuses on the social effects of, and public attitudes toward, illnesses, diseases, mental health and disabilities. This sub-field also overlaps with gerontology and the study of the ageing process. Medical sociology, by contrast, focuses on the inner-workings of the medical profession, its organizations, its institutions and how these can shape knowledge and interactions. In Britain, sociology was introduced into the medical curriculum following the Goodenough Report (1944).
The sociology of the body and embodiment takes a broad perspective on the idea of "the body" and includes "a wide range of embodied dynamics including human and non-human bodies, morphology, human reproduction, anatomy, body fluids, biotechnology, genetics". This often intersects with health and illness, but also theories of bodies as political, social, cultural, economic and ideological productions. The ISA maintains a Research Committee devoted to "the Body in the Social Sciences".
Death, dying, bereavement
A subfield of the sociology of health and illness that overlaps with cultural sociology is the study of death, dying and bereavement, sometimes referred to broadly as the sociology of death. This topic is exemplified by the work of Douglas Davies and Michael C. Kearl.
Knowledge and science
The sociology of knowledge is the study of the relationship between human thought and the social context within which it arises, and of the effects prevailing ideas have on societies. The term first came into widespread use in the 1920s, when a number of German-speaking theorists, most notably Max Scheler, and Karl Mannheim, wrote extensively on it. With the dominance of functionalism through the middle years of the 20th century, the sociology of knowledge tended to remain on the periphery of mainstream sociological thought. It was largely reinvented and applied much more closely to everyday life in the 1960s, particularly by Peter L. Berger and Thomas Luckmann in The Social Construction of Reality (1966) and is still central for methods dealing with qualitative understanding of human society (compare socially constructed reality). The "archaeological" and "genealogical" studies of Michel Foucault are of considerable contemporary influence.
The sociology of science involves the study of science as a social activity, especially dealing "with the social conditions and effects of science, and with the social structures and processes of scientific activity." Important theorists in the sociology of science include Robert K. Merton and Bruno Latour. These branches of sociology have contributed to the formation of science and technology studies. Both the ASA and the BSA have sections devoted to the subfield of Science, Knowledge and Technology. The ISA maintains a Research Committee on Science and Technology.
Leisure
Sociology of leisure is the study of how humans organize their free time. Leisure includes a broad array of activities, such as sport, tourism, and the playing of games. The sociology of leisure is closely tied to the sociology of work, as each explores a different side of the work–leisure relationship. More recent studies in the field move away from the work–leisure relationship and focus on the relation between leisure and culture. This area of sociology began with Thorstein Veblen's Theory of the Leisure Class.
Peace, war, and conflict
This subfield of sociology studies, broadly, the dynamics of war, conflict resolution, peace movements, war refugees, and military institutions. As a subset of this subfield, military sociology aims at the systematic study of the military as a social group rather than as an organization. It is a highly specialized sub-field which examines issues related to service personnel as a distinct group with coerced collective action based on shared interests linked to survival in vocation and combat, with purposes and values that are more defined and narrower than within civil society. Military sociology also concerns civilian-military relations and interactions between other groups or governmental agencies. Topics include the dominant assumptions held by those in the military, changes in military members' willingness to fight, military unionization, military professionalism, the increased utilization of women, the military-industrial-academic complex, the military's dependence on research, and the institutional and organizational structure of the military.
Political sociology
Historically, political sociology concerned the relations between political organization and society. A typical research question in this area might be: "Why do so few American citizens choose to vote?" In this respect questions of political opinion formation brought about some of the pioneering uses of statistical survey research by Paul Lazarsfeld. A major subfield of political sociology developed in relation to such questions, which draws on comparative history to analyse socio-political trends. The field developed from the work of Max Weber and Moisey Ostrogorsky.
Contemporary political sociology includes these areas of research, but it has been opened up to wider questions of power and politics. Today political sociologists are as likely to be concerned with how identities are formed that contribute to structural domination by one group over another; the politics of who knows how and with what authority; and questions of how power is contested in social interactions in such a way as to bring about widespread cultural and social change. Such questions are more likely to be studied qualitatively. The study of social movements and their effects has been especially important in relation to these wider definitions of politics and power.
Political sociology has also moved beyond methodological nationalism and analysed the role of non-governmental organizations, the diffusion of the nation-state throughout the Earth as a social construct, and the role of stateless entities in the modern world society. Contemporary political sociologists also study inter-state interactions and human rights.
Population and demography
Demographers or sociologists of population study the size, composition and change over time of a given population. Demographers study how these characteristics impact, or are impacted by, various social, economic or political systems. The study of population is also closely related to human ecology and environmental sociology, which studies a population's relationship with the surrounding environment and often overlaps with urban or rural sociology. Researchers in this field may study the movement of populations: transportation, migrations, diaspora, etc., which falls into the subfield known as mobilities studies and is closely related to human geography. Demographers may also study the spread of disease within a given population, a topic that overlaps with epidemiology.
Public sociology
Public sociology refers to an approach to the discipline which seeks to transcend the academy in order to engage with wider audiences. It is perhaps best understood as a style of sociology rather than a particular method, theory, or set of political values. This approach is primarily associated with Michael Burawoy who contrasted it with professional sociology, a form of academic sociology that is concerned primarily with addressing other professional sociologists. Public sociology is also part of the broader field of science communication or science journalism.
Race and ethnic relations
The sociology of race and of ethnic relations is the area of the discipline that studies the social, political, and economic relations between races and ethnicities at all levels of society. This area encompasses the study of racism, residential segregation, and other complex social processes between different racial and ethnic groups. This research frequently interacts with other areas of sociology such as stratification and social psychology, as well as with postcolonial theory. At the level of political policy, ethnic relations are discussed in terms of either assimilationism or multiculturalism. Anti-racism forms another style of policy, particularly popular in the 1960s and 1970s.
Religion
The sociology of religion concerns the practices, historical backgrounds, developments, universal themes and roles of religion in society. There is particular emphasis on the recurring role of religion in all societies and throughout recorded history. The sociology of religion is distinguished from the philosophy of religion in that sociologists do not set out to assess the validity of religious truth-claims, instead assuming what Peter L. Berger has described as a position of "methodological atheism". It may be said that the modern formal discipline of sociology began with the analysis of religion in Durkheim's 1897 study of suicide rates among Roman Catholic and Protestant populations. Max Weber published four major texts on religion in a context of economic sociology and social stratification: The Protestant Ethic and the Spirit of Capitalism (1905), The Religion of China: Confucianism and Taoism (1915), The Religion of India: The Sociology of Hinduism and Buddhism (1915), and Ancient Judaism (1920). Contemporary debates often centre on topics such as secularization, civil religion, the intersection of religion and economics and the role of religion in a context of globalization and multiculturalism.
Social change and development
The sociology of change and development attempts to understand how societies develop and how they can be changed. This includes studying many different aspects of society, for example demographic trends, political or technological trends, or changes in culture. Within this field, sociologists often use macrosociological methods or historical-comparative methods. In contemporary studies of social change, there are overlaps with international development or community development. However, most of the founders of sociology had theories of social change based on their study of history. For instance, Marx contended that the material circumstances of society ultimately caused the ideal or cultural aspects of society, while Weber argued that it was in fact the cultural mores of Protestantism that ushered in a transformation of material circumstances. In contrast to both, Durkheim argued that societies moved from simple to complex through a process of sociocultural evolution. Sociologists in this field also study processes of globalization and imperialism. Most notably, Immanuel Wallerstein extends Marx's theoretical frame to include large spans of time and the entire globe in what is known as world systems theory. Development sociology is also heavily influenced by post-colonialism. In recent years, Raewyn Connell has issued a critique of the bias in sociological research towards countries in the Global North. She argues that this bias blinds sociologists to the lived experiences of the Global South; specifically, so-called "Northern Theory" lacks an adequate theory of imperialism and colonialism.
There are many organizations studying social change, including the Fernand Braudel Center for the Study of Economies, Historical Systems, and Civilizations, and the Global Social Change Research Project.
Social networks
A social network is a social structure composed of individuals (or organizations) called "nodes", which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige. Social networks operate on many levels, from families up to the level of nations, and play a critical role in determining the way problems are solved, organizations are run, and the degree to which individuals succeed in achieving their goals. An underlying theoretical assumption of social network analysis is that groups are not necessarily the building blocks of society: the approach is open to studying less-bounded social systems, from non-local communities to networks of exchange. Drawing theoretically from relational sociology, social network analysis avoids treating individuals (persons, organizations, states) as discrete units of analysis; it focuses instead on how the structure of ties affects and constitutes individuals and their relationships. In contrast to analyses that assume that socialization into norms determines behaviour, network analysis looks to see the extent to which the structure and composition of ties affect norms. On the other hand, recent research by Omar Lizardo also demonstrates that network ties are shaped and created by previously existing cultural tastes. Social network theory is usually defined in formal mathematics and may include integration of geographical data into sociomapping.
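One simple way to make this structural focus concrete is to measure how embedded each tie is, for example by counting the neighbours that two connected actors share. The following Python sketch does this on an invented toy network; it is an illustration of the general idea, not a method attributed to any particular study.

```python
from itertools import combinations

# Invented toy network as an adjacency mapping.
network = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

# For each actual tie, count the neighbours the two actors share.
# Many shared neighbours suggest a densely embedded tie; none suggest
# a bridge between otherwise separate parts of the network.
for u, v in combinations(network, 2):
    if v in network[u]:
        shared = (network[u] & network[v]) - {u, v}
        print(f"tie {u}-{v}: {len(shared)} shared neighbour(s)")
```

In this toy example the A-D tie has no shared neighbours and so acts as a bridge, the kind of structural position emphasized in Granovetter's weak ties and Burt's structural holes discussed earlier.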
Social psychology
Sociological social psychology focuses on micro-scale social actions. This area may be described as adhering to "sociological miniaturism", examining whole societies through the study of individual thoughts and emotions as well as behaviour of small groups. One special concern of psychological sociologists is how to explain a variety of demographic, social, and cultural facts in terms of human social interaction. Some of the major topics in this field are social inequality, group dynamics, prejudice, aggression, social perception, group behaviour, social change, non-verbal behaviour, socialization, conformity, leadership, and social identity. Social psychology may be taught with psychological emphasis. In sociology, researchers in this field are the most prominent users of the experimental method (however, unlike their psychological counterparts, they also frequently employ other methodologies). Social psychology looks at social influences, as well as social perception and social interaction.
Stratification, poverty and inequality
Social stratification is the hierarchical arrangement of individuals into social classes, castes, and divisions within a society. In modern Western societies, stratification traditionally relates to cultural and economic classes arranged in three main layers: upper class, middle class, and lower class, but each class may be further subdivided into smaller classes (e.g. occupational). Social stratification is interpreted in radically different ways within sociology. Proponents of structural functionalism suggest that, since the stratification of classes and castes is evident in all societies, hierarchy must be beneficial in stabilizing their existence. Conflict theorists, by contrast, critique the inaccessibility of resources and lack of social mobility in stratified societies.
Karl Marx distinguished social classes by their connection to the means of production in the capitalist system: the bourgeoisie own the means, but this effectively includes the proletariat itself as the workers can only sell their own labour power (forming the material base of the cultural superstructure). Max Weber critiqued Marxist economic determinism, arguing that social stratification is not based purely on economic inequalities, but on other status and power differentials (e.g. patriarchy). According to Weber, stratification may occur among at least three complex variables:
Property (class): A person's economic position in a society, based on birth and individual achievement. Weber differs from Marx in that he does not see this as the supreme factor in stratification. Weber noted how managers of corporations or industries control firms they do not own; Marx would have placed such a person in the proletariat.
Prestige (status): A person's prestige, or popularity, in a society. This could be determined by the kind of work the person does or by their wealth.
Power (political party): A person's ability to get their way despite the resistance of others. For example, individuals in state jobs, such as an employee of the Federal Bureau of Investigation, or a member of the United States Congress, may hold little property or status but they still hold immense power.
Pierre Bourdieu provides a modern example in the concepts of cultural and symbolic capital. Theorists such as Ralf Dahrendorf have noted the tendency towards an enlarged middle class in modern Western societies, particularly in relation to the necessity of an educated work force in technological or service-based economies. Perspectives concerning globalization, such as dependency theory, suggest this effect is due to the shift of workers to the developing countries.
Urban and rural sociology
Urban sociology involves the analysis of social life and human interaction in metropolitan areas. It is a discipline seeking to provide advice for planning and policy making. After the Industrial Revolution, works such as Georg Simmel's The Metropolis and Mental Life (1903) focused on urbanization and the effect it had on alienation and anonymity. In the 1920s and 1930s The Chicago School produced a major body of theory on the nature of the city, important to both urban sociology and criminology, utilizing symbolic interactionism as a method of field research. Contemporary research is commonly placed in a context of globalization, for instance, in Saskia Sassen's study of the "global city". Rural sociology, by contrast, is the analysis of non-metropolitan areas. As agriculture and wilderness tend to be a more prominent social fact in rural regions, rural sociologists often overlap with environmental sociologists.
Community sociology
Often grouped with urban and rural sociology is community sociology, or the sociology of community. Taking various communities—including online communities—as the unit of analysis, community sociologists study the origin and effects of different associations of people. For instance, German sociologist Ferdinand Tönnies distinguished between two types of human association: gemeinschaft (usually translated as "community") and gesellschaft ("society" or "association"). In his 1887 work, Gemeinschaft und Gesellschaft, Tönnies argued that Gemeinschaft is perceived to be a tighter and more cohesive social entity, due to the presence of a "unity of will". The 'development' or 'health' of a community is also a central concern of community sociologists, many of whom also engage in development sociology, exemplified by the literature surrounding the concept of social capital.
Other academic disciplines
Sociology overlaps with a variety of disciplines that study society, in particular social anthropology, political science, economics, social work and social philosophy. Many comparatively new fields such as communication studies, cultural studies, demography and literary theory, draw upon methods that originated in sociology. The terms "social science" and "social research" have both gained a degree of autonomy since their origination in classical sociology. The distinct field of social anthropology or anthroposociology is the dominant constituent of anthropology throughout the United Kingdom and Commonwealth and much of Europe (France in particular), where it is distinguished from cultural anthropology. In the United States, social anthropology is commonly subsumed within cultural anthropology (or under the relatively new designation of sociocultural anthropology).
Sociology and applied sociology are connected to the professional and academic discipline of social work. Both disciplines study social interactions, community and the effect of various systems (i.e. family, school, community, laws, political sphere) on the individual. However, social work is generally more focused on practical strategies to alleviate social dysfunctions; sociology in general provides a thorough examination of the root causes of these problems. For example, a sociologist might study why a community is plagued with poverty. The applied sociologist would be more focused on practical strategies for alleviating this burden. The social worker would be focused on action: implementing these strategies "directly" or "indirectly" by means of mental health therapy, counselling, advocacy, community organization or community mobilization.
Social anthropology is the branch of anthropology that studies how contemporary living human beings behave in social groups. Practitioners of social anthropology, like sociologists, investigate various facets of social organization. Traditionally, social anthropologists analyzed non-industrial and non-Western societies, whereas sociologists focused on industrialized societies in the Western world. In recent years, however, social anthropology has expanded its focus to modern Western societies, meaning that the two disciplines increasingly converge.
Sociocultural anthropology, which includes linguistic anthropology, is concerned with the problem of difference and similarity within and between human populations. The discipline arose concomitantly with the expansion of European colonial empires, and its practices and theories have been questioned and reformulated along with processes of decolonization. Such issues have re-emerged as transnational processes have challenged the centrality of the nation-state to theorizations about culture and power. New challenges have emerged from public debates about multiculturalism and from the increasing use of the culture concept outside of the academy and among peoples studied by anthropology. These times are not "business-as-usual" in the academy, in anthropology, or in the world, if ever there were such times.
Irving Louis Horowitz, in his The Decomposition of Sociology (1994), has argued that the discipline, while arriving from a "distinguished lineage and tradition", is in decline due to deeply ideological theory and a lack of relevance to policy making: "The decomposition of sociology began when this great tradition became subject to ideological thinking, and an inferior tradition surfaced in the wake of totalitarian triumphs." Furthermore: "A problem yet unmentioned is that sociology's malaise has left all the social sciences vulnerable to pure positivism—to an empiricism lacking any theoretical basis. Talented individuals who might, in an earlier time, have gone into sociology are seeking intellectual stimulation in business, law, the natural sciences, and even creative writing; this drains sociology of much needed potential." Horowitz cites the lack of a 'core discipline' as exacerbating the problem. Randall Collins, the Dorothy Swaine Thomas Professor in Sociology at the University of Pennsylvania and a member of the Advisory Editors Council of the Social Evolution & History journal, has voiced similar sentiments: "we have lost all coherence as a discipline, we are breaking up into a conglomerate of specialities, each going on its own way and with none too high regard for each other."
In 2007, The Times Higher Education Guide published a list of 'The most cited authors of books in the Humanities' (including philosophy and psychology). Seven of the top ten are listed as sociologists: Michel Foucault (1), Pierre Bourdieu (2), Anthony Giddens (5), Erving Goffman (6), Jürgen Habermas (7), Max Weber (8), and Bruno Latour (10).
Journals
The most highly ranked general journals which publish original research in the field of sociology are the American Journal of Sociology and the American Sociological Review. The Annual Review of Sociology, which publishes original review essays, is also highly ranked. Many other generalist and specialized journals exist.
See also
Bibliography of sociology
Critical juncture theory
Cultural theory
Engaged theory
Historic recurrence
History of the social sciences
List of sociologists
Outline of sociology
Political sociology
Post-industrial society
Social theory
Social psychology
Sociological Francoism
Notes
References
Citations
Harriss, John. "The Second Great Transformation? Capitalism at the End of the Twentieth Century." In Allen, T. and Thomas, Alan (eds), Poverty and Development in the 21st Century. Oxford: Oxford University Press. p. 325.
Sources
Aby, Stephen H. 2005. Sociology: A Guide to Reference and Information Sources (3rd ed.). Littleton, CO: Libraries Unlimited Inc.
Babbie, Earl R. 2003. The Practice of Social Research (10th ed.). Wadsworth: Thomson Learning.
C. Wright Mills, On Intellectual Craftsmanship: advice on how to work, for young sociologists.
Collins, Randall. 1994. Four Sociological Traditions. Oxford: Oxford University Press.
Coser, Lewis A. 1971. Masters of Sociological Thought: Ideas in Historical and Social Context. New York: Harcourt Brace Jovanovich.
Giddens, Anthony. 2006. Sociology (5th ed.). Cambridge: Polity Press.
House, J. S., & Mortimer, J. (1990). Social structure and the individual: Emerging themes and new directions. Social Psychology Quarterly, 71–80.
Lipset, Seymour Martin and Everett Carll Ladd. "The Politics of American Sociologists", American Journal of Sociology (1972) 78#1 pp. 67–104
Merton, Robert K. 1959. Social Theory and Social Structure. Toward the codification of theory and research (revised & enlarged ed.). Glencoe, IL.
Mills, C. Wright. 1959. The Sociological Imagination
Nisbet, Robert A. 1967. The Sociological Tradition, London, Heinemann Educational Books.
Ritzer, George, and Douglas J. Goodman. 2004. Sociological Theory (6th ed.). McGraw-Hill.
Scott, John, and Gordon Marshall, eds. 2005. A Dictionary of Sociology (3rd ed.). Oxford University Press.
Wallace, Ruth A., and Alison Wolf. 1995. Contemporary Sociological Theory: Continuing the Classical Tradition (4th ed.). Prentice-Hall.
White, Harrison C. 2008. Identity and Control. How Social Formations Emerge (2nd ed.). Princeton: Princeton University Press.
Willis, Evan. 1996. The Sociological Quest: An Introduction to the Study of Social Life. New Brunswick, NJ: Rutgers University Press.
External links
American Sociological Association (ASA)
Eastern Sociological Society (ESS)
Australian Sociological Association (TASA)
Bangladesh Sociological Society (BSS)
British Sociological Association (BSA)
Canadian Association of French-speaking Sociologists and Anthropologists
Canadian Sociological Association (CSA)
European Sociological Association (ESA)
French Sociological Association
German Sociological Association (DGS)
Guide to the University of Chicago Department of Sociology Interviews 1972 at the University of Chicago Special Collections Research Center
Guide to the University of Chicago Department of Sociology Records 1924-2001 at the University of Chicago Special Collections Research Center
Indian Sociological Society (ISS)
International Institute of Sociology (IIS)
International Sociological Association (ISA)
Latin American Sociological Association (ALAS)
Observatory of International Research (OOIR): Latest Papers and Trends in Sociology
Portuguese Sociological Association (APS)
Sociological Association of Ireland (SAI)
The Nordic Sociological Association (NSA)
The Swedish Sociological Association (in Swedish)
Humanities | Humanities are academic disciplines that study aspects of human society and culture, including certain fundamental questions asked by humans. During the Renaissance, the term 'humanities' referred to the study of classical literature and language, as opposed to the study of religion or 'divinity.' The study of the humanities was a key part of the secular curriculum in universities at the time. Today, the humanities are more frequently defined as any fields of study outside of natural sciences, social sciences, formal sciences (like mathematics), and applied sciences (or professional training). They use methods that are primarily critical, speculative, or interpretative and have a significant historical element—as distinguished from the mainly empirical approaches of science.
The humanities include the studies of philosophy, religion, history, language arts (literature, writing, oratory, rhetoric, poetry, etc.), performing arts (theater, music, dance, etc.), and visual arts (painting, sculpture, photography, filmmaking, etc.).
Some definitions of the humanities encompass law and religion due to their shared characteristics, such as the study of language and culture. However, these definitions are not universally accepted, as law and religion are often considered professional subjects rather than humanities subjects. Professional subjects, like some social sciences, are sometimes classified as being part of both the liberal arts and professional development education, whereas humanities subjects are generally confined to the traditional liberal arts education. Although sociology, anthropology, archaeology, linguistics and psychology share some similarities with the humanities, these are often considered social sciences. Similarly, disciplines such as finance, business administration, political science, economics, and global studies have closer ties to the social sciences rather than the humanities.
Scholars in the humanities are called humanities scholars or sometimes humanists. The term humanist also describes the philosophical position of humanism, which antihumanist scholars in the humanities reject. Renaissance scholars and artists are also known as humanists. Some secondary schools offer humanities classes usually consisting of literature, history, foreign language, and art.
Human disciplines like history and language mainly use the comparative method and comparative research. Other methods used in the humanities include hermeneutics, source criticism, aesthetic interpretation, and speculative reason.
Etymology
The word humanities comes from the Renaissance Latin phrase studia humanitatis, which translates to study of humanity. This phrase was used to refer to the study of classical literature and language, which was seen as an important aspect of a refined education in the Renaissance. In its usage in the early 15th century, the studia humanitatis was a course of studies that consisted of grammar, poetry, rhetoric, history, and moral philosophy, primarily derived from the study of Latin and Greek classics. The word humanitas also gave rise to the Renaissance Italian neologism umanisti, whence "humanist", "Renaissance humanism".
Fields
Classics
Classics, in the Western academic tradition, refers to the studies of the cultures of classical antiquity, namely Ancient Greek and Latin and the Ancient Greek and Roman cultures. Classical studies is considered one of the cornerstones of the humanities; however, its popularity declined during the 20th century. Nevertheless, the influence of classical ideas on many humanities disciplines, such as philosophy and literature, remains strong.
History
History is systematically collected information about the past. When used as the name of a field of study, history refers to the study and interpretation of the record of humans, societies, institutions, and any topic that has changed over time.
Traditionally, the study of history has been considered a part of the humanities. In modern academia, history can occasionally be classified as a social science, though this definition is contested.
Language
While the scientific study of language is known as linguistics and is generally considered a social science, a natural science or a cognitive science, the study of languages is also central to the humanities. A good deal of twentieth- and twenty-first-century philosophy has been devoted to the analysis of language and to the question of whether, as Wittgenstein claimed, many of our philosophical confusions derive from the vocabulary we use; literary theory has explored the rhetorical, associative, and ordering features of language; and historical linguists have studied the development of languages across time. Literature, covering a variety of uses of language including prose forms (such as the novel), poetry and drama, also lies at the heart of the modern humanities curriculum. College-level programs in a foreign language usually include study of important works of the literature in that language, as well as the language itself.
Law
In everyday language, law refers to a rule that is enforced by a governing institution, as opposed to a moral or ethical rule that is not subject to formal enforcement. The study of law can be seen as either a social science or a humanities discipline, depending on one's perspective. Some see it as a social science because of its objective and measurable nature, while others view it as a humanities discipline because of its focus on values and interpretation. Law is not always enforceable, especially in the international relations context. Law has been defined in various ways, such as "a system of rules", "an interpretive concept" for achieving justice, "an authority" to mediate between people's interests, or "the command of a sovereign" backed by the threat of punishment.
However one likes to think of law, it is a thoroughly central social institution. Legal policy is shaped by the practical application of ideas from many social science and humanities disciplines, including philosophy, history, political science, economics, anthropology, and sociology. Law is politics, because politicians create laws. Law is philosophy, because moral and ethical persuasions shape legal ideas. Law tells many of history's stories, because statutes, case law and codifications build up over time. Law is also economics, because any rule about contract, tort, property law, labour law, company law and many more can have long-lasting effects on how productivity is organised and the distribution of wealth. The noun law derives from the Old English word lagu, meaning something laid down or fixed, and the adjective legal comes from the Latin word lex.
Literature
Literature is a term that does not have a universally accepted definition, but which has variably included all written work; writing that possesses literary merit; and language that emphasizes its own literary features, as opposed to ordinary language. Etymologically the term derives from the Latin word literatura/litteratura which means "writing formed with letters", although some definitions include spoken or sung texts. Literature can be classified as fiction or non-fiction; poetry or prose. It can be further distinguished according to major forms such as the novel, short story or drama; and works are often categorised according to historical periods, or according to their adherence to certain aesthetic features or expectations (genre).
Philosophy
Philosophy—etymologically, the "love of wisdom"—is generally the study of problems concerning matters such as existence, knowledge, justification, truth, justice, right and wrong, beauty, validity, mind, and language. Philosophy is distinguished from other ways of addressing these issues by its critical, generally systematic approach and its reliance on reasoned argument, rather than experiments (experimental philosophy being an exception).
Philosophy used to be a very comprehensive term, including what have subsequently become separate disciplines, such as physics. (As Immanuel Kant noted, "Ancient Greek philosophy was divided into three sciences: physics, ethics, and logic.") Today, the main fields of philosophy are logic, ethics, metaphysics, and epistemology. Still, it continues to overlap with other disciplines. The field of semantics, for example, brings philosophy into contact with linguistics.
Since the early twentieth century, philosophy in English-speaking universities has moved away from the humanities and closer to the formal sciences, becoming much more analytic. Analytic philosophy is marked by emphasis on the use of logic and formal methods of reasoning, conceptual analysis, and the use of symbolic and/or mathematical logic, as contrasted with the Continental style of philosophy. This method of inquiry is largely indebted to the work of philosophers such as Gottlob Frege, Bertrand Russell, G.E. Moore and Ludwig Wittgenstein.
Religion
Religious studies is commonly regarded as a social science. Based on current knowledge, it seems that all known cultures, both in the past and present, have had some form of belief system or religious practice. While there may be isolated individuals or groups who do not practice any form of religion, it is not known whether there has ever been a society entirely devoid of religious belief. The definition of religion is not universal, and different cultures may have different ideas about what constitutes religion. Religion may be characterized by community, since humans are social animals, and rituals are used to bind the community together. Social animals require rules. Ethics is a requirement of society, but not a requirement of religion: Shinto, Daoism, and other folk or natural religions do not have ethical codes. While some religions include the concept of deities, others do not; the supernatural therefore does not necessarily require the existence of deities. Rather, it can be broadly defined as any phenomenon that cannot be explained by science or reason. Magical thinking creates explanations not available for empirical verification. Stories or myths are narratives that are both didactic and entertaining, and they are necessary for understanding the human predicament. Some other possible characteristics of religion are pollution and purification, the sacred and the profane, sacred texts, religious institutions and organizations, and sacrifice and prayer. Some of the major problems that religions confront, and attempt to answer, are chaos, suffering, evil, and death.
The non-founder religions are Hinduism, Shinto, and native or folk religions. Founder religions are Judaism, Christianity, Islam, Confucianism, Daoism, Mormonism, Jainism, Zoroastrianism, Buddhism, Sikhism, and the Baháʼí Faith. Religions must adapt and change through the generations because they must remain relevant to the adherents. When traditional religions fail to address new concerns, then new religions will emerge.
Performing arts
The performing arts differ from the visual arts in that the former uses the artist's own body, face, and presence as a medium, and the latter uses materials such as clay, metal, or paint, which can be molded or transformed to create some art object. Performing arts include acrobatics, busking, comedy, dance, film, magic, music, opera, juggling, marching arts, such as brass bands, and theatre.
Artists who participate in these arts in front of an audience are called performers, including actors, comedians, dancers, musicians, and singers. Performing arts are also supported by workers in related fields, such as songwriting and stagecraft. Performers often adapt their appearance with costumes, stage makeup, and the like. There is also a specialized form of fine art in which the artists perform their work live to an audience, called performance art. Most performance art also involves some form of plastic art, perhaps in the creation of props. Dance was often referred to as a plastic art during the Modern dance era.
Musicology
Musicology as an academic discipline can take a number of different paths, including historical musicology, music literature, ethnomusicology and music theory. Undergraduate music majors generally take courses in all of these areas, while graduate students focus on a particular path. In the liberal arts tradition, musicology is also used to broaden skills of non-musicians by teaching skills, including concentration and listening.
Theatre
Theatre (or theater) (Greek "theatron", θέατρον) is the branch of the performing arts concerned with acting out stories in front of an audience using combinations of speech, gesture, music, dance, sound and spectacle — indeed any one or more elements of the other performing arts. In addition to the standard narrative dialogue style, theatre takes such forms as opera, ballet, mime, kabuki, classical Indian dance, Chinese opera, mummers' plays, and pantomime.
Dance
Dance (from Old French dancier, perhaps from Frankish) generally refers to human movement either used as a form of expression or presented in a social, spiritual or performance setting. Dance is also used to describe methods of non-verbal communication (see body language) between humans or animals (bee dance, mating dance), and motion in inanimate objects (the leaves danced in the wind). Choreography is the process of creating dances, and the people who create choreography are known as choreographers. Choreographers use movement, music, and other elements to create expressive and artistic dances. They may work alone or with other artists to create new works, and their work can be presented in a variety of settings, from small dance studios to large theaters.
Definitions of what constitutes dance are dependent on social, cultural, aesthetic, artistic, and moral constraints and range from functional movement (such as Folk dance) to codified, virtuoso techniques such as ballet.
Visual art
History of visual arts
The great traditions in art have a foundation in the art of one of the ancient civilizations, such as Ancient Japan, Greece and Rome, China, India, Greater Nepal, Mesopotamia and Mesoamerica.
Ancient Greek art saw a veneration of the human physical form and the development of equivalent skills to show musculature, poise, beauty and anatomically correct proportions. Ancient Roman art depicted gods as idealized humans, shown with characteristic distinguishing features (e.g., Zeus' thunderbolt).
The emphasis on spiritual and religious themes in Byzantine and Gothic art of the Middle Ages reflected the dominance of the church. However, in the Renaissance, a renewed focus on the physical world was reflected in art forms that depicted the human body and landscape in a more naturalistic and three-dimensional way.
Eastern art has generally worked in a style akin to Western medieval art, namely a concentration on surface patterning and local colour (meaning the plain colour of an object, such as basic red for a red robe, rather than the modulations of that colour brought about by light, shade and reflection). A characteristic of this style is that the local colour is often defined by an outline (a contemporary equivalent is the cartoon). This is evident in, for example, the art of India, Tibet and Japan.
Religious Islamic art forbids iconography, and expresses religious ideas through geometry instead. The physical and rational certainties depicted by the 19th-century Enlightenment were shattered not only by new discoveries of relativity by Einstein and of unseen psychology by Freud, but also by unprecedented technological development. Increasing global interaction during this time saw an equivalent influence of other cultures into Western art.
Media types
Drawing
Drawing is a means of making a picture, using a wide variety of tools and techniques. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface. Common tools are graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoals, pastels, and markers. Digital tools that simulate the effects of these are also used. The main techniques used in drawing are: line drawing, hatching, crosshatching, random hatching, scribbling, stippling, and blending. An artist or designer who excels in technical drawing is referred to as a draftsman or draughtsman.
Painting
Literally, painting is the practice of applying pigment suspended in a carrier (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas or a wall. However, when used in an artistic sense, it means the use of this activity in combination with drawing, composition and other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting has been used throughout history to express spiritual and religious ideas, from mythological scenes on pottery to the frescoes of the Sistine Chapel, to body art.
Colour is highly subjective, but has observable psychological effects, although these can differ from one culture to the next. Black is associated with mourning in the West, but elsewhere white may be. Some painters, theoreticians, writers and scientists, including Goethe, Kandinsky, and Isaac Newton, have written their own colour theories. Moreover, the use of language is only a generalization for a colour equivalent. The word "red", for example, can cover a wide range of variations on the pure red of the spectrum. Unlike music, where notes such as C or C# are universally accepted, there is no formalized register of colours. However, the Pantone system is widely used in the printing and design industries to standardize colour reproduction.
Modern artists have extended the practice of painting considerably to include, for example, collage. This began with cubism and is not painting in the strict sense. Some modern painters incorporate different materials such as sand, cement, straw or wood for their texture. Examples of these are the works of Jean Dubuffet and Anselm Kiefer. Modern and contemporary art has moved away from the historic value of craft in favour of concept (conceptual art); this has led some, such as Joseph Kosuth, to say that painting, as a serious art form, is dead, although this has not deterred the majority of artists from continuing to practise it, either as the whole or as part of their work.
Sculpture
Sculpture involves creating three-dimensional forms out of various materials. These typically include malleable substances like clay and metal but may also extend to material that is cut or shaved down to the desired form, like stone and wood.
History
In the West, the history of the humanities can be traced to ancient Greece, as the basis of a broad education for citizens. During Roman times, the concept of the seven liberal arts evolved, involving grammar, rhetoric and logic (the trivium), along with arithmetic, geometry, astronomy and music (the quadrivium). These subjects formed the bulk of medieval education, with the emphasis being on the humanities as skills or "ways of doing".
A major shift occurred with the Renaissance humanism of the fifteenth century, when the humanities began to be regarded as subjects to study rather than practice, with a corresponding shift away from traditional fields into areas such as literature and history (studia humaniora). In the 20th century, this view was in turn challenged by the postmodernist movement, which sought to redefine the humanities in more egalitarian terms suitable for a democratic society since the Greek and Roman societies in which the humanities originated were elitist and aristocratic.
A distinction is usually drawn between the social sciences and the humanities, a theme taken up by the classicist Allan Bloom in The Closing of the American Mind (1987).
Today
Education and employment
For many decades, there has been a growing public perception that a humanities education inadequately prepares graduates for employment. The common belief is that graduates from such programs face underemployment and incomes too low for a humanities education to be worth the investment.
Humanities graduates find employment in a wide variety of management and professional occupations. In Britain, for example, over 11,000 humanities majors found employment in the following occupations:
Education (25.8%)
Management (19.8%)
Media/Literature/Arts (11.4%)
Law (11.3%)
Finance (10.4%)
Civil service (5.8%)
Not-for-profit (5.2%)
Marketing (2.3%)
Medicine (1.7%)
Other (6.4%)
Many humanities graduates may find themselves with no specific career goals upon graduation, which can lead to lower incomes in the early stages of their career. On the other hand, graduates from more career-oriented programs often find jobs more quickly. However, the long-term career prospects of humanities graduates may be similar to those of other graduates, as research shows that by five years after graduation, they generally find a career path that appeals to them.
There is empirical evidence that graduates from humanities programs earn less than graduates from other university programs. However, the empirical evidence also shows that humanities graduates still earn notably higher incomes than workers with no postsecondary education, and have job satisfaction levels comparable to their peers from other fields. Humanities graduates also earn more as their careers progress; ten years after graduation, the income difference between humanities graduates and graduates from other university programs is no longer statistically significant. Humanities graduates can boost their incomes if they obtain advanced or professional degrees.
Humanities majors are sought after in many areas of business, particularly for their critical thinking and problem-solving skills. While often considered "soft skills", the abilities humanities majors gain "include persuasive written and oral communication, creative problem-solving, teamwork, decision-making, self-management, and critical analysis".
In the United States
The Humanities Indicators
The Humanities Indicators, unveiled in 2009 by the American Academy of Arts and Sciences, are the first comprehensive compilation of data about the humanities in the United States, providing scholars, policymakers and the public with detailed information on humanities education from primary to higher education, the humanities workforce, humanities funding and research, and public humanities activities. Modeled after the National Science Board's Science and Engineering Indicators, Humanities Indicators are a source of reliable benchmarks to guide analysis of the state of the humanities in the United States.
The Humanities in American Life
The 1980 United States Rockefeller Commission on the Humanities described the humanities in its report, The Humanities in American Life:
Through the humanities we reflect on the fundamental question: What does it mean to be human? The humanities offer clues but never a complete answer. They reveal how people have tried to make moral, spiritual, and intellectual sense of a world where irrationality, despair, loneliness, and death are as conspicuous as birth, friendship, hope, and reason.
In liberal arts education
The Commission on the Humanities and Social Sciences 2013 report, The Heart of the Matter, supports the notion of a broad "liberal arts education", which includes study in disciplines from the natural sciences to the arts as well as the humanities.
Many colleges provide such an education; some require it. The University of Chicago and Columbia University were among the first schools to require an extensive core curriculum in philosophy, literature, and the arts for all students. Other colleges with nationally recognized, mandatory programs in the liberal arts are Fordham University, St. John's College, Saint Anselm College and Providence College. Prominent proponents of liberal arts in the United States have included Mortimer J. Adler and E. D. Hirsch, Jr.
As a major
In 1950, 1.2% of Americans aged 22 had earned a degree in the humanities. By 2010, this figure had risen to 2.6%, meaning the share of 22-year-olds with humanities degrees more than doubled over 60 years. The increase is in part due to the overall rise in college enrollment in the United States. In 1940, 4.6% of Americans had a four-year degree, but by 2016, this figure had risen to 33.4%. This means that the total number of Americans with college degrees has increased significantly, resulting in a greater number of people with degrees in the humanities as well. The proportion of degrees awarded in the humanities, however, has declined in recent decades, even as the overall number of people with humanities degrees has increased. In 1954, 36 percent of Harvard undergraduates majored in the humanities, but in 2012, only 20 percent took that course of study. As recently as 1993, the humanities accounted for 15% of the bachelor's degrees awarded by colleges and universities in the United States. As of 2022, they accounted for less than 9%.
In the digital age
Researchers in the humanities have developed numerous large- and small-scale digital corpora, such as digitized collections of historical texts, along with the digital tools and methods to analyze them. Their aim is both to uncover new knowledge about these corpora and to visualize research data in new and revealing ways. Much of this activity occurs in a field called the digital humanities.
STEM
Politicians in the United States currently espouse a need for increased funding of the STEM fields: science, technology, engineering, and mathematics. Federal funding represents a much smaller fraction of funding for the humanities than for other fields such as STEM or medicine. The result has been a decline in the quality of both college and pre-college education in the humanities.
Three-term Louisiana Governor Edwin Edwards acknowledged the importance of the humanities in a 2014 video address to the academic conference Revolutions in Eighteenth-Century Sociability. Edwards said:
Without the humanities to teach us how history has succeeded or failed in directing the fruits of technology and science to the betterment of our tribe of homo sapiens, without the humanities to teach us how to frame the discussion and to properly debate the uses—and the costs—of technology, without the humanities to teach us how to safely debate how to create a more just society with our fellow man and woman, technology and science would eventually default to the ownership of—and misuse by—the most influential, the most powerful, the most feared among us.
In Europe
The value of the humanities debate
The contemporary debate in the field of critical university studies centers around the declining value of the humanities. As in America, there is a perceived decline in interest within higher education policy in research that is qualitative and does not produce marketable products. This threat can be seen in a variety of forms across Europe, but much critical attention has been given to the field of research assessment in particular. For example, the UK Research Excellence Framework has been subject to criticism due to its assessment criteria from across the humanities, and indeed, the social sciences. In particular, the notion of "impact" has generated significant debate.
Philosophical history
Citizenship and self-reflection
Since the late 19th century, a central justification for the humanities has been that they aid and encourage self-reflection—a self-reflection that, in turn, helps develop personal consciousness or an active sense of civic duty.
Wilhelm Dilthey and Hans-Georg Gadamer centered the humanities' attempt to distinguish itself from the natural sciences in humankind's urge to understand its own experiences. This understanding, they claimed, ties like-minded people from similar cultural backgrounds together and provides a sense of cultural continuity with the philosophical past.
Scholars in the late 20th and early 21st centuries extended that "narrative imagination" to the ability to understand the records of lived experiences outside of one's own individual social and cultural context. Through that narrative imagination, it is claimed, humanities scholars and students develop a conscience more suited to the multicultural world we live in. That conscience might take the form of a passive one that allows more effective self-reflection or extend into active empathy that facilitates the dispensation of civic duties a responsible world citizen must engage in. There is disagreement, however, on the level of influence humanities study can have on an individual and whether or not the understanding produced in humanistic enterprise can guarantee an "identifiable positive effect on people".
Humanistic theories and practices
There are three major branches of knowledge: natural sciences, social sciences, and the humanities. Technology is the practical extension of the natural sciences, as politics is the extension of the social sciences. Similarly, the humanities have their own practical extension, sometimes called "transformative humanities" (transhumanities) or "culturonics" (Mikhail Epstein's term):
Nature – natural sciences – technology – transformation of nature
Society – social sciences – politics – transformation of society
Culture – human sciences – culturonics – transformation of culture
Technology, politics and culturonics are designed to transform what their respective disciplines study: nature, society, and culture. The field of transformative humanities includes various practices and technologies, for example, language planning, the construction of new languages like Esperanto, and the invention of new artistic and literary genres and movements in the genre of manifesto, like Romanticism, Symbolism, or Surrealism.
Truth and meaning
The divide between humanistic study and natural sciences informs arguments of meaning in humanities as well. What distinguishes the humanities from the natural sciences is not a certain subject matter, but rather the mode of approach to any question. Humanities focuses on understanding meaning, purpose, and goals and furthers the appreciation of singular historical and social phenomena—an interpretive method of finding "truth"—rather than explaining the causality of events or uncovering the truth of the natural world. Apart from its societal application, narrative imagination is an important tool in the (re)production of understood meaning in history, culture and literature.
Imagination, as part of the tool kit of artists or scholars, helps create meaning that invokes a response from an audience. Since a humanities scholar is always within the nexus of lived experiences, no "absolute" knowledge is theoretically possible; knowledge is instead a ceaseless procedure of inventing and reinventing the context a text is read in. Poststructuralism has problematized an approach to the humanistic study based on questions of meaning, intentionality, and authorship. In the wake of the death of the author proclaimed by Roland Barthes, various theoretical currents such as deconstruction and discourse analysis seek to expose the ideologies and rhetoric operative in producing both the purportedly meaningful objects and the hermeneutic subjects of humanistic study. This exposure has opened up the interpretive structures of the humanities to criticism that humanities scholarship is "unscientific" and therefore unfit for inclusion in modern university curricula because of the very nature of its changing contextual meaning.
Pleasure, the pursuit of knowledge and scholarship
Some, like Stanley Fish, have claimed that the humanities can defend themselves best by refusing to make any claims of utility. (Fish may well be thinking primarily of literary study, rather than history and philosophy.) Any attempt to justify the humanities in terms of outside benefits such as social usefulness (say, increased productivity) or in terms of ennobling effects on the individual (such as greater wisdom or diminished prejudice) is ungrounded, according to Fish, and simply places impossible demands on the relevant academic departments. Furthermore, critical thinking, while arguably a result of humanistic training, can be acquired in other contexts. And the humanities no longer provide the kind of social cachet (what sociologists sometimes call "cultural capital") that helped people succeed in Western society before the age of mass education that followed World War II.
Instead, scholars like Fish suggest that the humanities offer a unique kind of pleasure, a pleasure based on the common pursuit of knowledge (even if it is only disciplinary knowledge). Such pleasure contrasts with the increasing privatization of leisure and instant gratification characteristic of Western culture; it thus meets Jürgen Habermas' requirements for the disregard of social status and rational problematization of previously unquestioned areas necessary for an endeavor which takes place in the bourgeois public sphere. In this argument, then, only the academic pursuit of pleasure can provide a link between the private and the public realm in modern Western consumer society and strengthen that public sphere that, according to many theorists, is the foundation for modern democracy.
Others, like Mark Bauerlein, argue that professors in the humanities have increasingly abandoned proven methods of epistemology ("I care only about the quality of your arguments, not your conclusions") in favor of indoctrination ("I care only about your conclusions, not the quality of your arguments"). The result is that professors and their students adhere rigidly to a limited set of viewpoints, and have little interest in, or understanding of, opposing viewpoints. Once they obtain this intellectual self-satisfaction, persistent lapses in learning, research, and evaluation are common.
Romanticization and rejection
Implicit in many of these arguments supporting the humanities are the makings of arguments against public support of the humanities. Joseph Carroll asserts that we live in a changing world, a world where "cultural capital" is replaced with scientific literacy, and in which the romantic notion of a Renaissance humanities scholar is obsolete. Such arguments appeal to judgments and anxieties about the essential uselessness of the humanities, especially in an age when it seems vitally important for scholars of literature, history and the arts to engage in "collaborative work with experimental scientists" or even simply to make "intelligent use of the findings from empirical science".
Despite many humanities-based arguments against the humanities, some within the exact sciences have called for their return. In 2017, science popularizer Bill Nye retracted previous claims about the supposed "uselessness" of philosophy. "People allude to Socrates and Plato and Aristotle all the time, and I think many of us who make those references don't have a solid grounding," he said. "It's good to know the history of philosophy." Scholars such as the biologist Scott F. Gilbert argue that it is in fact the increasing predominance, leading to exclusivity, of scientific ways of thinking that needs to be tempered by historical and social context. Gilbert worries that the commercialization that may be inherent in some ways of conceiving science (the pursuit of funding, academic prestige, etc.) needs to be examined externally.
See also
Art school
Discourse analysis
Outline of the humanities (humanities topics)
Great Books
Great Books programs in Canada
Liberal arts
Social sciences
Humanities, arts, and social sciences
Human science
The Two Cultures
List of academic disciplines
Public humanities
STEAM fields
Tinbergen's four questions
Environmental humanities
References
External links
Society for the History of the Humanities
Institute for Comparative Research in Human and Social Sciences (ICR) – Japan (archived 15 April 2016)
The American Academy of Arts and Sciences – US
Humanities Indicators – US
National Humanities Center – US (archived 7 July 2007)
The Humanities Association – UK
National Humanities Alliance
National Endowment for the Humanities – US
Australian Academy of the Humanities
American Academy Commission on the Humanities and Social Sciences
"Games and Historical Narratives" by Jeremy Antley – Journal of Digital Humanities
Film about the Value of the Humanities
Humans
Main topic articles
Society | 0.772604 | 0.998875 | 0.771735 |
Critical thinking | Critical thinking is the analysis of available facts, evidence, observations, and arguments in order to form a judgement by the application of rational, skeptical, and unbiased analyses and evaluation. In modern times, the use of the phrase critical thinking can be traced to John Dewey, who used the phrase reflective thinking. The application of critical thinking includes self-directed, self-disciplined, self-monitored, and self-corrective habits of the mind; thus, a critical thinker is a person who practices the skills of critical thinking or has been trained and educated in its disciplines. Philosopher Richard W. Paul said that the mind of a critical thinker engages the person's intellectual abilities and personality traits. Critical thinking presupposes assent to rigorous standards of excellence and mindful command of their use in effective communication and problem solving, and a commitment to overcome egocentrism and sociocentrism.
History
In the classical period (5th c.–4th c. BC) of Ancient Greece, the philosopher Plato (428–347 BC) indicated that the teachings of Socrates (470–399 BC) are the earliest records of what today is called critical thinking. In an early dialogue by Plato, the philosopher Socrates debates several speakers about the ethical matter of the rightness or wrongness of Socrates escaping from prison. Upon consideration, Socrates concluded that to escape prison would violate everything he believed to be greater than himself: the laws of Athens and the guiding voice that he claimed to hear.
Socrates established that Authority and authority figures cannot be relied upon to possess knowledge and consequent insight; that for an individual to lead a good life that is worth living, that person must ask critical questions and possess an interrogative soul, which seeks evidence, closely examines the available facts, and follows the implications of the statement under analysis, thereby tracing the implications of thought and action.
As a form of co-operative argumentation, Socratic questioning requires the comparative judgment of facts, which answers then would reveal the person's irrational thinking and lack of verifiable knowledge. Socrates also demonstrated that Authority does not ensure accurate, verifiable knowledge; thus, Socratic questioning analyses beliefs, assumptions, and presumptions, by relying upon evidence and a sound rationale.
In modern times, the phrase critical thinking was coined by Pragmatist philosopher John Dewey in his book How We Think. As a type of intellectualism, the development of critical thinking is a means of critical analysis that applies rationality to develop a critique of the subject matter. According to the Foundation for Critical Thinking, in 1987 the U.S. National Council for Excellence in Critical Thinking defined critical thinking as the "intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action."
Etymology and origin of critical thinking
In the term critical thinking, the word critical (Grk. κριτικός = kritikos = "critic") derives from the word critic and implies a critique; it identifies the intellectual capacity and the means "of judging", "of judgement", "for judging", and of being "able to discern". The intellectual roots of critical thinking are as ancient as its etymology, traceable, ultimately, to the critical reasoning of the Presocratic philosophers, as well as to the teaching practice and vision of Socrates 2,500 years ago, who discovered by a method of probing questioning that people could not rationally justify their confident claims to knowledge.
According to the Oxford English Dictionary, the exact term “critical thinking” first appeared in 1815, in the British literary journal The Critical Review, referring to critical analysis in the literary context. The meaning of "critical thinking" gradually evolved and expanded to mean a desirable general thinking skill by the end of the 19th century and early 20th century.
Definitions
Traditionally, critical thinking has been variously defined as follows:
"The intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action."
"Disciplined thinking that is clear, rational, open-minded, and informed by evidence"
"Purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based"
"Includes a commitment to using reason in the formulation of our beliefs"
The skill and propensity to engage in an activity with reflective scepticism (McPeck, 1981)
Thinking about one's thinking in a manner designed to organize and clarify, raise the efficiency of, and recognize errors and biases in one's own thinking. Critical thinking is not 'hard' thinking nor is it directed at solving problems (other than 'improving' one's own thinking). Critical thinking is inward-directed with the intent of maximizing the rationality of the thinker. One does not use critical thinking to solve problems—one uses critical thinking to improve one's process of thinking.
"An appraisal based on careful analytical evaluation"
"Critical thinking is a type of thinking pattern that requires people to be reflective, and pay attention to decision-making which guides their beliefs and actions. Critical thinking allows people to deduct with more logic, to process sophisticated information and look at various sides of an issue so they can produce more solid conclusions."
Critical thinking has seven critical features: being inquisitive and curious, being open-minded to different sides, being able to think systematically, being analytical, being persistent in the pursuit of truth, being confident about critical thinking itself, and lastly, being mature.
Although critical thinking can be defined in several different ways, there is general agreement on its key component—the desire to reach a satisfactory result through rational thinking in a result-driven manner. Halpern holds that critical thinking firstly involves learned abilities such as problem-solving, calculation and the successful application of probability. It also includes a tendency to engage in the thinking process. More recently, Stanovich has argued that modern IQ tests can hardly measure the ability to think critically.
"Critical thinking is essentially a questioning, challenging approach to knowledge and perceived wisdom. It involves ideas and information from an objective position and then questioning this information in the light of our own values, attitudes and personal philosophy."
Contemporary critical thinking scholars have expanded these traditional definitions to include qualities, concepts, and processes such as creativity, imagination, discovery, reflection, empathy, connecting knowing, feminist theory, subjectivity, ambiguity, and inconclusiveness. Some definitions of critical thinking exclude these subjective practices.
According to Ennis, "Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action." This definition is broadly endorsed by Harvey Siegel, Peter Facione, and Deanna Kuhn.
By Ennis' definition, critical thinking demands considerable attention and cognitive effort. When a critical-thinking approach is applied in education, it helps students reason more effectively and understand texts in new ways.
Different fields of study may require different types of critical thinking. Critical thinking provides more angles and perspectives upon the same material.
Logic and rationality
The study of logical argumentation is relevant to the study of critical thinking. Logic is concerned with the analysis of arguments, including the appraisal of their correctness or incorrectness. In the field of epistemology, critical thinking is considered to be logically correct thinking, which allows for differentiation between logically true and logically false statements.
In "First wave" logical thinking, the thinker is removed from the train of thought, and the analysis of connections between concepts or points in thought is ostensibly free of any bias. In his essay Beyond Logicism in Critical Thinking Kerry S. Walters describes this ideology thus: "A logistic approach to critical thinking conveys the message to students that thinking is legitimate only when it conforms to the procedures of informal (and, to a lesser extent, formal) logic and that the good thinker necessarily aims for styles of examination and appraisal that are analytical, abstract, universal, and objective. This model of thinking has become so entrenched in conventional academic wisdom that many educators accept it as canon". Such principles are concomitant with the increasing dependence on a quantitative understanding of the world.
In the 'second wave' of critical thinking, authors consciously moved away from the logocentric mode of critical thinking characteristic of the 'first wave'. Although many scholars began to take a less exclusive view of what constitutes critical thinking, rationality and logic remain widely accepted as essential bases for critical thinking. Walters argues that exclusive logicism in the first wave sense is based on "the unwarranted assumption that good thinking is reducible to logical thinking".
Deduction, abduction and induction
Three types of logical reasoning are commonly distinguished: formal deduction and, informally, induction and abduction.
Deduction
Deduction is the conclusion drawn from the structure of an argument's premises, by use of rules of inference, formally those of the propositional calculus. For example: X is human and all humans have a face, so X has a face.
Induction
Induction is drawing a conclusion from a pattern that is guaranteed by the strictness of the structure to which it applies. For example: the sum of two even integers is even. Let x = 2a and y = 2b for integers a and b; then x and y are even by definition. If z = x + y, then z = 2a + 2b = 2(a + b), which is even; so summing two even numbers results in an even number.
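The even-sum argument above can be formalized and machine-checked. The following is a minimal sketch in the Lean 4 theorem prover, assuming the Mathlib library is available; the definition IsEven and the theorem name isEven_add are illustrative choices, not anything from the critical-thinking literature.

import Mathlib.Tactic

-- Evenness, as in the example: n is even when n = 2k for some integer k.
def IsEven (n : ℤ) : Prop := ∃ k : ℤ, n = 2 * k

-- The sum of two even integers is even.
theorem isEven_add {x y : ℤ} (hx : IsEven x) (hy : IsEven y) :
    IsEven (x + y) := by
  obtain ⟨a, ha⟩ := hx   -- x = 2 * a
  obtain ⟨b, hb⟩ := hy   -- y = 2 * b
  refine ⟨a + b, ?_⟩     -- claim: x + y = 2 * (a + b)
  rw [ha, hb]            -- goal becomes 2 * a + 2 * b = 2 * (a + b)
  ring                   -- closed by ring arithmetic

Note that the formal proof proceeds deductively from the definition of evenness, which is exactly the strictness of structure the example relies on.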
Abduction
Abduction is drawing a conclusion using a heuristic that is likely, but not inevitable given some foreknowledge. For example: I observe sheep in a field, and they appear white from my viewing angle, so sheep are white. Contrast with the deductive statement: Some sheep are white on at least one side.
Critical thinking and rationality
Kerry S. Walters, an emeritus philosophy professor from Gettysburg College, argues that rationality demands more than just logical or traditional methods of problem solving and analysis or what he calls the "calculus of justification" but also considers "cognitive acts such as imagination, conceptual creativity, intuition and insight". These "functions" are focused on discovery, on more abstract processes instead of linear, rules-based approaches to problem-solving. The linear and non-sequential mind must both be engaged in the rational mind.
The ability to critically analyze an argument — to dissect structure and components, thesis and reasons — is essential. But so is the ability to be flexible and consider non-traditional alternatives and perspectives. These complementary functions are what allow for critical thinking to be a practice encompassing imagination and intuition in cooperation with traditional modes of deductive inquiry.
Functions
The list of core critical thinking skills includes observation, interpretation, analysis, inference, evaluation, explanation, and metacognition. According to Reynolds (2011), an individual or group engaged in a strong way of critical thinking gives due consideration to establish for instance:
Evidence through reality
Context skills to isolate the problem from context
Relevant criteria for making the judgment well
Applicable methods or techniques for forming the judgment
Applicable theoretical constructs for understanding the problem and the question at hand
In addition to possessing strong critical-thinking skills, one must be disposed to engage problems and decisions using those skills. Critical thinking employs not only logic but broad intellectual criteria such as clarity, credibility, accuracy, precision, relevance, depth, breadth, significance, and fairness.
Critical thinking calls for the ability to:
Recognize problems, to find workable means for meeting those problems
Understand the importance of prioritization and order of precedence in problem-solving
Gather and marshal pertinent (relevant) information
Recognize unstated assumptions and values
Comprehend and use language with accuracy, clarity, and discernment
Interpret data, to appraise evidence and evaluate arguments
Recognize the existence (or non-existence) of logical relationships between propositions
Draw warranted conclusions and generalizations
Put to test the conclusions and generalizations at which one arrives
Reconstruct one's patterns of beliefs on the basis of wider experience
Render accurate judgments about specific things and qualities in everyday life
In sum:
"A persistent effort to examine any belief or supposed form of knowledge in the light of the evidence that supports or refutes it and the further conclusions to which it tends."
Habits or traits of the mind
The habits of mind that characterize a person strongly disposed toward critical thinking include a desire to follow reason and evidence wherever they may lead, a systematic approach to problem-solving, inquisitiveness, even-handedness, and confidence in reasoning.
According to a definition analysis by Kompf & Bond (2001), critical thinking involves problem-solving, decision making, metacognition, rationality, rational thinking, reasoning, knowledge, intelligence and also a moral component such as reflective thinking. Critical thinkers therefore need to have reached a level of maturity in their development, possess a certain attitude as well as a set of taught skills.
There is a postulation by some writers that the tendencies from habits of mind should be thought as virtues to demonstrate the characteristics of a critical thinker. These intellectual virtues are ethical qualities that encourage motivation to think in particular ways towards specific circumstances. However, these virtues have also been criticized by skeptics who argue that the evidence is lacking for a specific mental basis underpinning critical thinking.
Research in critical thinking
After undertaking research in schools, Edward M. Glaser proposed in 1941 that the ability to think critically involves three elements:
An attitude of being disposed to consider in a thoughtful way the problems and subjects that come within the range of one's experiences
Knowledge of the methods of logical inquiry and reasoning
Some skill in applying those methods.
Educational programs aimed at developing critical thinking in children and adult learners, individually or in group problem solving and decision making contexts, continue to address these same three central elements.
The Critical Thinking project at Human Science Lab, London, is involved in the scientific study of all major educational systems in prevalence today to assess how the systems are working to promote or impede critical thinking.
Contemporary cognitive psychology regards human reasoning as a complex process that is both reactive and reflective. This presents a problem that is detailed as a division of a critical mind in juxtaposition to sensory data and memory.
The psychological theory disposes of the absolute nature of the rational mind, in reference to conditions, abstract problems and discursive limitations. The relationship between critical-thinking skills and critical-thinking dispositions remains an empirical question; the ability to attain causal domination exists, a practice that Socrates was known to be largely disposed against as Sophistry. Measures of "critical-thinking dispositions" include the California Measure of Mental Motivation and the California Critical Thinking Dispositions Inventory. The Critical Thinking Toolkit is an alternative measure that examines student beliefs and attitudes about critical thinking.
Education
John Dewey is one of many educational leaders who recognized that a curriculum aimed at building thinking skills would benefit the individual learner, the community, and the entire democracy.
Critical thinking is significant in the learning process of internalization, in the construction of basic ideas, principles, and theories inherent in content. And critical thinking is significant in the learning process of application, whereby those ideas, principles, and theories are implemented effectively as they become relevant in learners' lives.
Each discipline adapts its use of critical-thinking concepts and principles. The core concepts are always there, but they are embedded in subject-specific content. For students to learn content, intellectual engagement is crucial. All students must do their own thinking, their own construction of knowledge. Good teachers recognize this and therefore focus on the questions, readings, activities that stimulate the mind to take ownership of key concepts and principles underlying the subject.
Historically, the teaching of critical thinking focused only on logical procedures such as formal and informal logic. This emphasized to students that good thinking is equivalent to logical thinking. However, a second wave of critical thinking urges educators to value conventional techniques while also expanding what it means to be a critical thinker. In 1994, Kerry Walters compiled a conglomeration of sources surpassing this logical restriction to include many different authors' research regarding connected knowing, empathy, gender-sensitive ideals, collaboration, world views, intellectual autonomy, morality and enlightenment. These concepts invite students to incorporate their own perspectives and experiences into their thinking.
In the English and Welsh school systems, Critical Thinking is offered as a subject that 16- to 18-year-olds can take as an A-Level. Under the OCR exam board, students can sit two exam papers for the AS: "Credibility of Evidence" and "Assessing and Developing Argument". The full Advanced GCE is now available: in addition to the two AS units, candidates sit the two papers "Resolution of Dilemmas" and "Critical Reasoning". The A-level tests candidates on their ability to think critically about, and analyze, arguments on their deductive or inductive validity, as well as producing their own arguments. It also tests their ability to analyze certain related topics such as credibility and ethical decision-making. However, due to its comparative lack of subject content, many universities do not accept it as a main A-level for admissions. Nevertheless, the AS is often useful in developing reasoning skills, and the full Advanced GCE is useful for degree courses in politics, philosophy, history or theology, providing the skills required for critical analysis that are useful, for example, in biblical study.
There used to also be an Advanced Extension Award offered in Critical Thinking in the UK, open to any A-level student regardless of whether they have the Critical Thinking A-level. Cambridge International Examinations have an A-level in Thinking Skills.
From 2008, Assessment and Qualifications Alliance has also been offering an A-level Critical Thinking specification.
OCR exam board have also modified theirs for 2008. Many examinations for university entrance set by universities, on top of A-level examinations, also include a critical-thinking component, such as the LNAT, the UKCAT, the BioMedical Admissions Test and the Thinking Skills Assessment.
In Qatar, critical thinking was offered by Al-Bairaq, an outreach, non-traditional educational program that targeted high school students and focused on a curriculum based on STEM fields. The idea behind this was to offer high school students the opportunity to connect with the research environment in the Center for Advanced Materials (CAM) at Qatar University. Faculty members train and mentor the students and help develop and enhance their critical thinking, problem-solving, and teamwork skills.
Effectiveness
In 1995, a meta-analysis of the literature on teaching effectiveness in higher education was undertaken.
The study noted concerns from higher education, politicians, and business that higher education was failing to meet society's requirements for well-educated citizens. It concluded that although faculty may aspire to develop students' thinking skills, in practice they have tended to aim at facts and concepts utilizing lowest levels of cognition, rather than developing intellect or values.
In a more recent meta-analysis, researchers reviewed 341 quasi- or true-experimental studies, all of which used some form of standardized critical-thinking measure to assess the outcome variable. The authors describe the various methodological approaches and attempt to categorize differing assessment tools, which include standardized tests (and second-source measures), tests developed by teachers, tests developed by researchers, and tests developed by teachers who also serve the role as the researcher. The results emphasized the need for exposing students to real-world problems and the importance of encouraging open dialogue within a supportive environment. Effective strategies for teaching critical thinking are thought to be possible in a wide variety of educational settings. One attempt to assess the humanities' role in teaching critical thinking and reducing belief in pseudoscientific claims was made at North Carolina State University. Some success was noted and the researchers emphasized the value of the humanities in providing the skills to evaluate current events and qualitative data in context.
Scott Lilienfeld notes that there is some evidence to suggest that basic critical-thinking skills might be successfully taught to children at a younger age than previously thought.
Importance in academics
Critical thinking is an important element of all professional fields and academic disciplines (by referencing their respective sets of permissible questions, evidence sources, criteria, etc.). Within the framework of scientific skepticism, the process of critical thinking involves the careful acquisition and interpretation of information and use of it to reach a well-justified conclusion. The concepts and principles of critical thinking can be applied to any context or case but only by reflecting upon the nature of that application. Critical thinking forms, therefore, a system of related, and overlapping, modes of thought such as anthropological thinking, sociological thinking, historical thinking, political thinking, psychological thinking, philosophical thinking, mathematical thinking, chemical thinking, biological thinking, ecological thinking, legal thinking, ethical thinking, musical thinking, thinking like a painter, sculptor, engineer, business person, etc. In other words, though critical-thinking principles are universal, their application to disciplines requires a process of reflective contextualization. Psychology offerings, for example, have included courses such as Critical Thinking about the Paranormal, in which students are subjected to a series of cold readings and tested on their belief of the "psychic", who is eventually announced to be a fake.
Critical thinking is considered important in the academic fields for enabling one to analyze, evaluate, explain, and restructure thinking, thereby reducing the risk of adopting or acting on false beliefs. However, even with knowledge of the methods of logical inquiry and reasoning, mistakes occur due to a thinker's inability to apply the methods consistently and because of overruling character traits such as egocentrism. Critical thinking includes identification of prejudice, bias, propaganda, self-deception, distortion, misinformation, etc. Given research in cognitive psychology, some educators believe that schools should focus on teaching their students critical-thinking skills and the cultivation of intellectual traits.
Critical-thinking skills can be used to help nurses during the assessment process. Through the use of critical thinking, nurses can question, evaluate, and reconstruct the nursing care process by challenging the established theory and practice. Critical-thinking skills can help nurses problem solve, reflect, and make a conclusive decision about the current situation they face. Critical thinking creates "new possibilities for the development of the nursing knowledge". Due to the sociocultural, environmental, and political issues that are affecting healthcare delivery, it would be helpful to embody new techniques in nursing. Nurses can also engage their critical-thinking skills through the Socratic method of dialogue and reflection. This practice standard is even part of some regulatory organizations such as the College of Nurses of Ontario's Professional Standards for Continuing Competencies (2006).
It requires nurses to engage in Reflective Practice and keep records of this continued professional development for possible review by the college.
Critical thinking is also considered important for human rights education for toleration. The Declaration of Principles on Tolerance adopted by UNESCO in 1995 affirms that "education for tolerance could aim at countering factors that lead to fear and exclusion of others, and could help young people to develop capacities for independent judgement, critical thinking and ethical reasoning".
Online communication
The advent and rising popularity of online courses have prompted some to ask if computer-mediated communication (CMC) promotes, hinders, or has no effect on the amount and quality of critical thinking in a course (relative to face-to-face communication). There is some evidence to suggest a fourth, more nuanced possibility: that CMC may promote some aspects of critical thinking but hinder others. For example, Guiller et al. (2008) found that, relative to face-to-face discourse, online discourse featured more justifications, while face-to-face discourse featured more instances of students expanding on what others had said. The increase in justifications may be due to the asynchronous nature of online discussions, while the increase in expanding comments may be due to the spontaneity of 'real-time' discussion. Newman et al. (1995) showed similar differential effects. They found that while CMC boasted more important statements and linking of ideas, it lacked novelty. The authors suggest that this may be due to difficulties participating in a brainstorming-style activity in an asynchronous environment. Rather, the asynchrony may promote users to put forth "considered, thought out contributions".
Researchers assessing critical thinking in online discussion forums often employ a technique called Content Analysis, where the text of online discourse (or the transcription of face-to-face discourse) is systematically coded for different kinds of statements relating to critical thinking. For example, a statement might be coded as "Discuss ambiguities to clear them up" or "Welcoming outside knowledge" as positive indicators of critical thinking. Conversely, statements reflecting poor critical thinking may be labeled as "Sticking to prejudice or assumptions" or "Squashing attempts to bring in outside knowledge". The frequency of these codes in CMC and face-to-face discourse can be compared to draw conclusions about the quality of critical thinking.
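To make the frequency comparison concrete, a researcher might tally how often each indicator code appears in each transcript and compare the resulting distributions. The Python sketch below is illustrative only; the code labels and data are hypothetical stand-ins, not those used in the cited studies.

from collections import Counter

# Hypothetical coded statements, one code per statement.
cmc_codes = ["justification", "linking ideas", "justification", "prejudice"]
f2f_codes = ["expanding", "justification", "expanding", "novelty"]

def relative_frequencies(codes):
    """Return each code's share of all coded statements in a transcript."""
    counts = Counter(codes)
    total = sum(counts.values())
    return {code: count / total for code, count in counts.items()}

# Comparing the two distributions is what supports conclusions about
# where critical-thinking indicators appear more often.
print("CMC:", relative_frequencies(cmc_codes))
print("Face-to-face:", relative_frequencies(f2f_codes))

In practice such comparisons are made over much larger coded samples and are usually accompanied by inter-rater reliability checks on the coding itself.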
Searching for evidence of critical thinking in discourse has roots in a definition of critical thinking put forth by Kuhn (1991), which emphasizes the social nature of discussion and knowledge construction. There is limited research on the role of social experience in critical thinking development, but there is some evidence to suggest it is an important factor. For example, research has shown that three- to four-year-old children can discern, to some extent, the differential credibility and expertise of individuals. Further evidence for the impact of social experience on the development of critical-thinking skills comes from work that found that 6- to 7-year-olds from China have similar levels of skepticism to 10- and 11-year-olds in the United States. If the development of critical-thinking skills was solely due to maturation, it is unlikely we would see such dramatic differences across cultures.
See also
References
Further reading
Books
Damer, T. Edward. (2005) Attacking Faulty Reasoning, 6th Edition, Wadsworth.
Dauer, Francis Watanabe. (1989) Critical Thinking: An Introduction to Reasoning.
Fisher, Alec and Scriven, Michael. (1997) Critical Thinking: Its Definition and Assessment, Center for Research in Critical Thinking (UK) / Edgepress (US).
Hamby, B.W. (2007) The Philosophy of Anything: Critical Thinking in Context, Kendall Hunt Publishing Company, Dubuque Iowa.
Vincent F. Hendricks. (2005) Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP.
Levitin, Daniel J. (2016) A Field Guide to Lies: Critical Thinking in the Information Age, Dutton. (a.k.a. Weaponized Lies: How to Think Critically in the Post-Truth Era)
Moore, Brooke Noel and Parker, Richard. (2012) Critical Thinking. 10th ed. McGraw-Hill.
Paul, Richard. (1995) Critical Thinking: How to Prepare Students for a Rapidly Changing World. 4th ed. Foundation for Critical Thinking.
Paul, Richard and Elder, Linda. (2006) Critical Thinking Tools for Taking Charge of Your Learning and Your Life, New Jersey: Prentice Hall Publishing.
Sagan, Carl. (1995) The Demon-Haunted World: Science As a Candle in the Dark. Ballantine Books.
Theodore Schick & Lewis Vaughn "How to Think About Weird Things: Critical Thinking for a New Age" (2010)
van den Brink-Budgen, R (2010) Critical Thinking for Students, How To Books.
Whyte, J. (2003) Bad Thoughts – A Guide to Clear Thinking, Corvo.
David Carl Wilson (2020) A Guide to Good Reasoning: Cultivating Intellectual Virtues (2nd edition) University of Minnesota Libraries Ebook Creative Commons Attribution-Non-Commercial 4.0 International License, at https://open.lib.umn.edu/goodreasoning/
Zeigarnik, B.V. (1927). "On finished and unfinished tasks". English translation in Willis D. Ellis (Ed.), with an introduction by Kurt Koffka (1997), A Source Book of Gestalt Psychology. Highland, N.Y.: Gestalt Journal Press. The Gestalt Journal Press edition is a verbatim reprint of the book as originally published in 1938.
Articles
Facione, P. 2007. Critical Thinking: What It Is and Why It Counts – 2007 Update
Kompf, M., & Bond, R. (2001). Critical reflection in adult education. In T. Barer-Stein & M. Kompf(Eds.), The craft of teaching adults (pp. 21–38). Toronto, ON: Irwin.
McPeck, J. (1992). Thoughts on subject specificity. In S. Norris (Ed.), The generalizability of critical thinking (pp. 198–205). New York: Teachers College Press.
Twardy, Charles R. (2003) Argument Maps Improve Critical Thinking. Teaching Philosophy 27:2 June 2004.
Patty Souza (2023) Critical Thinking. Teaching 11:5 June 2023.
External links
Critical Thinking: What Is It Good for? (In Fact, What Is It?) by Howard Gabennesch, Skeptical Inquirer magazine.
Glossary of Critical Thinking Terms
Critical Thinking Web
Analysis
Evaluation
Skepticism
Social concepts
Thought
Virtue
Philosophy of education
John Dewey | 0.772295 | 0.998852 | 0.771408 |
Humanism | Humanism is a philosophical stance that emphasizes the individual and social potential, and agency of human beings, whom it considers the starting point for serious moral and philosophical inquiry.
The meaning of the term "humanism" has changed according to successive intellectual movements that have identified with it. During the Italian Renaissance, ancient works inspired Italian scholars, giving rise to the Renaissance humanism movement. During the Age of Enlightenment, humanistic values were reinforced by advances in science and technology, giving confidence to humans in their exploration of the world. By the early 20th century, organizations dedicated to humanism flourished in Europe and the United States, and have since expanded worldwide. In the early 21st century, the term generally denotes a focus on human well-being and advocates for human freedom, autonomy, and progress. It views humanity as responsible for the promotion and development of individuals, espouses the equal and inherent dignity of all human beings, and emphasizes a concern for humans in relation to the world.
Starting in the 20th century, humanist movements are typically non-religious and aligned with secularism. Most frequently, humanism refers to a non-theistic view centered on human agency, and a reliance on science and reason rather than revelation from a supernatural source to understand the world. Humanists tend to advocate for human rights, free speech, progressive policies, and democracy. People with a humanist worldview maintain religion is not a precondition of morality, and object to excessive religious entanglement with education and the state.
Contemporary humanist organizations work under the umbrella of Humanists International. Well-known humanist associations are Humanists UK and the American Humanist Association.
Etymology
The word "humanism" derives from the Latin word , which was first used in ancient Rome by Cicero and other thinkers to describe values related to liberal education. This etymology survives in the modern university concept of the humanities—the arts, philosophy, history, literature, and related disciplines. The word reappeared during the Italian Renaissance as umanista and entered the English language in the 16th century. The word "humanist" was used to describe a group of students of classical literature and those advocating for a classical education.
In 1755, in Samuel Johnson's influential A Dictionary of the English Language, the word humanist is defined as a philologer or grammarian, derived from the French word humaniste. In a later edition of the dictionary, the meaning "a term used in the schools of Scotland" was added. In the 1780s, Thomas Howes was one of Joseph Priestley's many opponents during the celebrated Unitarian disputes. Because of the different doctrinal meanings of Unitarian and Unitarianism, Howes used "the more precise appellations of humanists and humanism" when referring to those like Priestley "who maintain the mere humanity of Christ". This theological sense of humanism is now considered obsolete.
In the early 19th century, the term humanismus was used in Germany with several meanings, and from there it re-entered the English language with two distinct denotations: an academic term linked to the study of classical literature, and a more common usage signifying a non-religious approach to life contrary to theism. It is probable that the Bavarian theologian Friedrich Immanuel Niethammer coined the term humanismus to describe the new classical curriculum he planned to offer in German secondary schools. Soon, other scholars such as Georg Voigt and Jacob Burckhardt adopted the term. In the 20th century, the word was further refined, acquiring its contemporary meaning of a naturalistic approach to life and a focus on the well-being and freedom of humans.
Definition
There is no single, widely accepted definition of humanism, and scholars have given different meanings to the term. For philosopher Sidney Hook, writing in 1974, humanists are opposed to the imposition of one culture in some civilizations, do not belong to a church or established religion, do not support dictatorships, and do not justify the use of violence for social reforms. Hook also said humanists support the elimination of hunger and improvements to health, housing, and education. In the same edited collection, humanist philosopher H. J. Blackham argued humanism is a concept focusing on improving humanity's social conditions by increasing the autonomy and dignity of all humans. In 1999, Jeaneane D. Fowler said the definition of humanism should include a rejection of divinity, and an emphasis on human well-being and freedom. She also noted there is a lack of a shared belief system or doctrine but, in general, humanists aim for happiness and self-fulfillment.
In 2015, prominent humanist Andrew Copson defined humanism as follows:
Humanism is naturalistic in its understanding of the universe; science and free inquiry will help us comprehend more about the universe.
This scientific approach does not reduce humans to anything less than human beings.
Humanists place importance on the pursuit of a self-defined, meaningful, and happy life.
Humanism is moral; morality is a way for humans to improve their lives.
Humanists engage in practical action to improve personal and social conditions.
According to the International Humanist and Ethical Union:
Humanism is a democratic and ethical life stance, which affirms that human beings have the right and responsibility to give meaning and shape to their own lives. It stands for the building of a more humane society through an ethic based on human and other natural values in the spirit of reason and free inquiry through human capabilities. It is not theistic, and it does not accept supernatural views of reality.
Dictionaries define humanism as a worldview or philosophical stance. According to the Merriam-Webster Dictionary, humanism is " ... a doctrine, attitude, or way of life centered on human interests or values; especially: a philosophy that usually rejects supernaturalism and stresses an individual's dignity and worth and capacity for self-realization through reason".
History
Predecessors
Traces of humanism can be found in ancient Greek philosophy. Pre-Socratic philosophers were the first Western philosophers to attempt to explain the world in terms of human reason and natural law without relying on myth, tradition, or religion. Protagoras, who lived in Athens in the fifth century BCE, put forward some fundamental humanist ideas, although only fragments of his work survive. He made one of the first agnostic statements; according to one fragment: "About the gods I am able to know neither that they exist nor that they do not exist nor of what kind they are in form: for many things prevent me from knowing this, its obscurity and the brevity of man's life". Socrates spoke of the need to "know thyself"; his thought changed the focus of then-contemporary philosophy from nature to humans and their well-being. He was a theist executed for atheism, who investigated the nature of morality by reasoning. Aristotle (384–322 BCE) taught rationalism and a system of ethics based on human nature that also parallels humanist thought. In the third century BCE, Epicurus developed an influential, human-centered philosophy that focused on achieving eudaimonia. Epicureans continued Democritus' atomist theory—a materialistic theory that suggests the fundamental unit of the universe is an indivisible atom. Human happiness, living well, friendship, and the avoidance of excesses were the key ingredients of Epicurean philosophy that flourished in and beyond the post-Hellenic world. A recurring view among scholars is that the humanistic features of ancient Greek thought are the roots of the humanism that emerged 2,000 years later.
Other predecessor movements that sometimes use the same or equivalent vocabulary to modern Western humanism can be found in Chinese philosophy and religions such as Taoism and Confucianism.
Arabic translations of Ancient Greek literature during the Abbasid Caliphate in the eighth and ninth centuries influenced Islamic philosophers. Many medieval Muslim thinkers pursued humanistic, rational, and scientific discourse in their search for knowledge, meaning, and values. A wide range of Islamic writings on love, poetry, history, and philosophical theology show medieval Islamic thought was open to humanistic ideas of individualism, occasional secularism, skepticism, liberalism, and free speech; schools were established at Baghdad, Basra and Isfahan.
Renaissance
The intellectual movement later known as Renaissance humanism first appeared in Italy and has greatly influenced both contemporaneous and modern Western culture. The movement emerged from a renewed interest in literature and the arts in 13th-century Italy. Italian scholars discovered Ancient Greek thought, particularly that of Aristotle, through Arabic translations from Africa and Spain. Early centers of the movement included Verona, Naples, and Avignon. Petrarch, who is often referred to as the father of humanism, was raised in Avignon; he was inclined toward education at a very early age and studied alongside his well-educated father. Petrarch's enthusiasm for ancient texts led him to discover manuscripts such as Cicero's Pro Archia and Pomponius Mela's De Chorographia that were influential in the development of the Renaissance. Petrarch wrote works such as the Canzoniere and De viris illustribus, in which he described humanist ideas. His most significant contribution was a list of books outlining the four major disciplines—rhetoric, moral philosophy, poetry, and grammar—that became the basis of humanistic studies (studia humanitatis). Petrarch's list relied heavily on ancient writers, especially Cicero.
The revival of classical authors continued after Petrarch's death. Florence chancellor and humanist Coluccio Salutati made his city a prominent center of Renaissance humanism; his circle included other notable humanists—including Leonardo Bruni, who rediscovered, translated, and popularized ancient texts. Humanists heavily influenced education. Vittorino da Feltre and Guarino Veronese created schools based on humanistic principles; their curriculum was widely adopted and by the 16th century, humanistic paideia was the dominant outlook of pre-university education. Parallel with advances in education, Renaissance humanists made progress in fields such as philosophy, mathematics, and religion. In philosophy, Angelo Poliziano, Nicholas of Cusa, and Marsilio Ficino further contributed to the understanding of ancient classical philosophers and Giovanni Pico della Mirandola undermined the dominance of Aristotelian philosophy by revitalizing Sextus Empiricus' skepticism. Religious studies were affected by the growth of Renaissance humanism when Pope Nicholas V initiated the translation of Hebrew and Greek biblical texts, and other texts in those languages, to contemporaneous Latin.
Humanist values spread from Italy in the 15th century. Students and scholars went to Italy to study before returning to their homelands carrying humanistic messages. Printing houses dedicated to ancient texts were established in Venice, Basel, and Paris. By the end of the 15th century, the center of humanism had shifted from Italy to northern Europe, with Erasmus of Rotterdam being the leading humanist scholar. The longest-lasting effect of Renaissance humanism was its education curriculum and methods. Humanists insisted on the importance of classical literature in providing intellectual discipline, moral standards, and a civilized taste for the elite—an educational approach that has persisted into the contemporary era.
Enlightenment
During the Age of Enlightenment, humanistic ideas resurfaced, this time further from religion and classical literature. Science and intellectualism advanced, and humanists argued that rationality could replace deism as the means with which to understand the world. Humanistic values, such as tolerance and opposition to slavery, started to take shape. New philosophical, social, and political ideas appeared. Some thinkers rejected theism outright, and atheism, deism, and hostility to organized religion emerged. During the Enlightenment, Baruch Spinoza redefined God as signifying the totality of nature; Spinoza was accused of atheism but remained silent on the matter. Naturalism was also advanced by prominent Encyclopédistes. Baron d'Holbach wrote the polemic System of Nature, claiming that religion was built on fear and had helped tyrants throughout history. Diderot and Helvetius combined their materialism with sharp political critique.
Also during the Enlightenment, the abstract conception of humanity started forming—a critical juncture for the construction of humanist philosophy. Previous appeals to "men" now shifted toward "man"; to illustrate this point, scholar Tony Davies points to political documents like The Social Contract (1762) of Rousseau, in which he says "Man is born free, but is everywhere in chains". Likewise, Thomas Paine's Rights of Man uses the singular form of the word, revealing a universal conception of "man". In parallel, Baconian empiricism—though not humanism per se—led to Thomas Hobbes's materialism.
Scholar J. Brent Crosson argues that, while there is a widely held belief that the birth of humanism was solely a European affair, intellectual thought from Africa and Asia significantly contributed as well. He also notes that during the Enlightenment, the universal man did not encompass all humans but was shaped by gender and race. According to Crosson, the shift from man to human started during the Enlightenment and is still ongoing. Crosson also argues that the Enlightenment, especially in Britain, produced not only a notion of universal man, but also gave birth to pseudoscientific ideas, such as those about differences between races, that shaped European history.
From Darwin to current era
French philosopher Auguste Comte (1798–1857) introduced the idea—which is sometimes attributed to Thomas Paine—of a "religion of humanity". According to scholar Tony Davies, this was intended to be an atheist cult based on some humanistic tenets, and had some prominent members but soon declined. It was nonetheless influential during the 19th century, and its humanism and rejection of supernaturalism are echoed in the works of later authors such as Oscar Wilde, George Holyoake—who coined the word secularism—George Eliot, Émile Zola, and E. S. Beesly. Paine's The Age of Reason, along with the 19th-century Biblical criticism of the German Hegelians David Strauss and Ludwig Feuerbach, also contributed to new forms of humanism.
Advances in science and philosophy provided scholars with further alternatives to religious belief. Charles Darwin's theory of natural selection offered naturalists an explanation for the plurality of species. Darwin's theory also suggested humans are simply a natural species, contradicting the traditional theological view of humans as more than animals. Philosophers Ludwig Feuerbach, Friedrich Nietzsche, and Karl Marx attacked religion on several grounds, and theologians David Strauss and Julius Wellhausen questioned the Bible. In parallel, utilitarianism was developed in Britain through the works of Jeremy Bentham and John Stuart Mill. Utilitarianism, a moral philosophy, centers its attention on human happiness, aiming to eliminate human and animal pain via natural means. In Europe and the US, as philosophical critiques of theistic beliefs grew, large parts of society distanced themselves from religion. Ethical societies were formed, leading to the contemporary humanist movement.
The rise of rationalism and the scientific method was followed in the late 19th century in Britain by the start of many rationalist and ethical associations, such as the National Secular Society, the Ethical Union, and the Rationalist Press Association. In the 20th century, humanism was further promoted by the work of philosophers such as A. J. Ayer, Antony Flew, and Bertrand Russell, whose advocacy of atheism in Why I Am Not a Christian further popularized humanist ideas. In 1963, the British Humanist Association evolved out of the Ethical Union, and merged with many smaller ethical and rationalist groups. Elsewhere in Europe, humanist organizations also flourished. In the Netherlands, the Dutch Humanist Alliance gained a wide base of support after World War II; in Norway, the Norwegian Humanist Association gained popular support.
In the US, humanism evolved with the aid of significant figures of the Unitarian Church. Humanist magazines began to appear, including The New Humanist, which published the Humanist Manifesto I in 1933. The American Ethical Union emerged from newly founded, small ethical societies. The American Humanist Association (AHA) was established in 1941 and became as popular as some of its European counterparts. The AHA spread to all states, and some prominent public figures such as Isaac Asimov, John Dewey, Erich Fromm, Paul Kurtz, Carl Sagan, and Gene Roddenberry became members. Humanist organizations from all continents formed the International Humanist and Ethical Union (IHEU), which is now known as Humanists International, and promotes the humanist agenda via the United Nations organizations UNESCO and UNICEF.
Varieties of humanism
Early 20th century naturalists, who viewed their humanism as a religion and participated in church-like congregations, used the term "religious humanism". Religious humanism appeared mostly in the US and is now rarely practiced. The American Humanist Association arose from religious humanism. The same term has been used by religious groups such as the Quakers to describe their humanistic theology.
The term "Renaissance humanism" was given to a tradition of cultural and educational reform engaged in by civic and ecclesiastical chancellors, book collectors, educators, and writers that developed during the 14th and early 15th centuries. By the late 15th century, these academics began to be referred to as umanisti (humanists). While modern humanism's roots can be traced to the Renaissance, Renaissance humanism vastly differs from it.
Other terms using "humanism" in their name include:
Christian humanism: a historical current in the late Middle Ages in which Christian scholars combined Christian faith with interest in classical antiquity and a focus on human well-being.
Ethical humanism: a synonym of Ethical Culture; it was prominent in the US in the early 20th century and focused on relations between humans.
Scientific humanism: this emphasizes belief in the scientific method as a component of humanism as described in the works of John Dewey and Julian Huxley; scientific humanism is largely synonymous with secular humanism.
Secular humanism: coined in the mid-20th century, it was initially an attempt to denigrate humanism, but some humanist associations embraced the term. Secular humanism is synonymous with the contemporary humanist movement.
Marxist humanism: one of several rival schools of Marxist thought that accepts basic humanistic tenets such as secularism and naturalism, but differs from other strands of humanism because of its vague stance on democracy and rejection of free will.
Digital humanism: an emerging philosophical and ethical framework that seeks to preserve and promote human values, dignity, and well-being in the context of rapid technological advancements, particularly in the digital realm.
These varieties of humanism are now largely of historical interest only. Some ethical movements continue (e.g., the New York Society for Ethical Culture) but in general humanism no longer needs any qualification "because the lifestance is by definition naturalistic, scientific, and secular". However, according to Andrew Copson the view that there are still two types of humanism – religious and secular – "has begun to seriously muddy the conceptual water".
Philosophy
Humanism is strongly linked to rationality. For humanists, humans are reasonable beings, and reasoning and the scientific method are means of finding truth. Humanists argue science and rationality have driven successful developments in various fields while the invocation of supernatural phenomena fails to coherently explain the world. One form of irrational thinking is the adducing of hidden agencies to explain natural phenomena; humanists are accordingly skeptical of explanations of natural phenomena or diseases that rely on such agencies.
Human autonomy is another hallmark of humanist philosophy. For people to be autonomous, their beliefs and actions must be the result of their own reasoning. For humanists, autonomy dignifies each individual; without autonomy, people's humanity is lessened. Humanists also consider human essence to be universal, irrespective of race and social status, diminishing the importance of collective identities and signifying the importance of individuals.
Immanuel Kant provided the modern philosophical basis of the humanist narrative. His critical philosophy defended rationalism and grounded knowledge in the empirical world. He also supported the idea of the moral autonomy of the individual, which is fundamental to his philosophy. According to Kant, morality is the product of the way humans live and not a set of fixed values. Instead of a universal ethical code, Kant suggested a universal procedure that shapes the ethics that differ among groups of people.
Philosopher and humanist advocate Corliss Lamont, in his book The Philosophy of Humanism (1997), states:
In the Humanist ethics the chief end of thought and action is to further this-earthly human interests on behalf of the greater glory of people. The watchword of Humanism is happiness for all humanity in this existence as contrasted with salvation for the individual soul in a future existence and the glorification of a supernatural Supreme Being ... It heartily welcomes all life-enhancing and healthy pleasures, from the vigorous enjoyments of youth to the contemplative delights of mellowed age, from the simple gratifications of food and drink, sunshine and sports, to the more complex appreciation of art and literature, friendship and social communion.
Themes
Morality
The humanist attitude toward morality has changed since its beginning. Starting in the 18th century, humanists were oriented toward an objective and universalist stance on ethics. Both Utilitarian philosophy—which aims to increase human happiness and decrease suffering—and Kantian ethics, which states one should act in accordance with maxims one could will to become a universal law, shaped the humanist moral narrative until the early 20th century. Because the concepts of free will and reason are not based on scientific naturalism, their influence on humanists remained in the early 20th century but was reduced by social progressiveness and egalitarianism. As part of social changes in the late 20th century, humanist ethics evolved to support secularism, civil rights, personal autonomy, religious toleration, multiculturalism, and cosmopolitanism.
A naturalistic criticism of humanistic morality is the denial of the existence of morality. For naturalistic skeptics, morality was not hardwired within humans during their evolution; humans are primarily selfish and self-centered. Defending humanist morality, humanist philosopher John R. Shook makes three observations that lead him to the acceptance of morality. According to Shook, Homo sapiens has a concept of morality that must have been with the species since the beginning of human history, developing through the recognition of and reflection upon behaviors. He adds morality is universal among human cultures and all cultures strive to improve their moral level. Shook concludes that while morality was initially generated by our genes, culture shaped human morals and continues to do so. He calls "moral naturalism" the view that morality is a natural phenomenon that can be scientifically studied, and that it is a tool used to develop human culture rather than a set of doctrines.
Humanist philosopher Brian Ellis advocates a social humanist theory of morality called "social contractual utilitarianism", which is based on Hume's naturalism and empathy, Aristotelian virtue theory, and Kant's idealism. According to Ellis, morality should aim for eudaimonia, an Aristotelian concept that combines a satisfying life with virtue and happiness by improving societies worldwide. Humanist Andrew Copson takes a consequentialist and utilitarian approach to morality; according to Copson, all humanist ethical traits aim at human welfare. Philosopher Stephen Law emphasizes some principles of humanist ethics; respect for personal moral autonomy, rejection of god-given moral commands, an aim for human well-being, and "emphasiz[ing] the role of reason in making moral judgements".
Humanism's godless approach to morality has driven criticism from religious commentators. The necessity of a divine being delivering sets of doctrines for morals to exist is a common argument; according to Dostoevsky's character Ivan Karamazov in The Brothers Karamazov, "if God does not exist, then everything is permitted". This argument suggests chaos will ensue if religious belief disappears. For humanists, theism is an obstacle to morality rather than a prerequisite for it. According to humanists, acting only out of fear, adherence to dogma, and expectation of a reward is a selfish motivation rather than morality. Humanists point to the subjectivity of the supposedly objective divine commands by referring to the Euthyphro dilemma, originally posed by Socrates: "does God command something because it is good or is something good because God commands it?" If goodness is independent of God, humans can reach goodness without religion, but if God creates goodness, relativism follows. Another argument against this religious criticism is the human-made nature of morality, even through religious means: the interpretation of holy scriptures almost always involves human reasoning, and different interpreters reach contradictory conclusions.
Religion
Humanism has widely been seen as antithetical to religion. Philosopher of religion David Kline traces the roots of this animosity to the Renaissance, when humanistic views deconstructed the previous religiously defined order. Kline describes several ways this antithesis has evolved. He notes that the emergence of confident, human-made knowledge—a new epistemology—displaced the church from its authoritative position. Kline uses the paradigm of non-humanists Copernicus, Kepler, and Galileo to illustrate how scientific discoveries added to the deconstruction of the religious narrative in favor of human-generated knowledge. This ultimately uncoupled the fate of humans from the divine will, prompting social and political shifts. The relation of state and citizens changed as civic humanistic principles emerged; people were no longer to be servile to religiously grounded monarchies but could pursue their own destinies. Kline also points to aspects of personal belief that added to the hostility between humanism and religion. Humanism was linked with prominent thinkers who advocated against the existence of God using rationalistic arguments. Critique of theism continued through the humanistic revolutions in Europe, challenging religious worldviews, attitudes, and superstitions on a rational basis—a tendency that continued into the 20th century.
According to Stephen Law, humanist adherence to secularism has placed humanists at odds with religion, especially nationally dominant religions striving to retain privileges gained in previous centuries, although religious persons can themselves be secularists. Law notes secularism is criticized for suppressing the freedom of expression of religious persons but firmly denies such an accusation; instead, he says, secularism protects this kind of freedom but opposes the privileged status of religious views.
According to Andrew Copson, humanism is not incompatible with some aspects of religion. He observes that components like belief, practice, identity, and culture can coexist, allowing an individual who subscribes to only a few religious doctrines to also identify as a humanist. Copson adds that religious critics usually frame humanism as an enemy of religion but most humanists are proponents of religious tolerance or exhibit a curiosity about religion's effects in society and politics, commenting: "Only a few are regularly outraged by other people's false beliefs per se".
The meaning of life
In the 19th century, along with the decline of religion and its accompanying teleology, the question of the meaning of life became more prominent. Unlike religions, humanism does not have a definite view on the meaning of life. Humanists commonly say people create rather than discover meaning. While philosophers such as Nietzsche and Sartre wrote on the meaning of life in a godless world, the work of Albert Camus has echoed and shaped humanism. In The Myth of Sisyphus, Camus retells the Greek myth in which the absurd hero Sisyphus is destined to push a heavy rock up a hill; the rock slips back and he must repeat the task. In negating gods and preset meanings of life, Camus argues that life nonetheless has value and significance, and that each individual is able to create their own meaning. Camus thus highlights the importance of personal agency and self-determination, which lie at the centre of humanism.
Personal humanist interpretations of the meaning of life vary from the pursuit of happiness without recklessness and excess to participation in human history and connection with loved ones, living animals, and plants. Some answers are close to those of religious discourse if the appeal to divinity is overlooked. According to humanist professor Peter Derks, elements that contribute to the meaning of life are a morally worthy purpose in life, positive self-evaluation, an understanding of one's environment, being seen and understood by others, the ability to emotionally connect with others, and a desire to have a meaning in life. Humanist professor Anthony B. Pinn places the meaning of life in the quest for what he calls "complex subjectivity". Pinn, who advocates for a non-theistic, humanistic religion inspired by African cultures, says that the quest for a meaning of life that can never be fully attained contributes to well-being, and that rituals and ceremonies, which are occasions for reflection, provide an opportunity to assess the meaning of life, improving well-being.
In public life
In politics
The hallmark of contemporary humanism in politics is the demand for secularism. Philosopher Alan Haworth said secularism delivers fair treatment to all citizens of a nation-state since all are treated without discrimination; religion is a private issue and the state should have no power over it. He also argues that secularism helps plurality and diversity, which are fundamental aspects of our modern world. While barbarism and violence can be found in most civilizations, Haworth notes religion usually fuels rhetoric and enables these actions. He also said the values of hard work, honesty, and charity are found in other civilizations. According to Haworth, humanism opposes the irrationality of nationalism and totalitarianism, whether these are part of fascism or Marxist–Leninist communism.
According to professor Joseph O. Baker, in political theory, contemporary humanism is formed by two main tendencies: the first is individualistic and the second inclines to collectivism. The trajectory of each tendency can lead to libertarianism and socialism respectively, but a range of combinations exists. Individualistic humanists often have a philosophical perspective of humanism; in politics, they are inclined to libertarianism and in ethics tend to follow a scientistic approach. Collectivists have a more applied view of humanism, lean toward socialism, and have a humanitarian approach to ethics. The second group has connections with the thought of the young Marx, especially his anthropological views, while rejecting his political practices. A factor that repels many humanists from the libertarian view is the neoliberal or capitalistic consequences they feel it entails.
Humanism has been a part of both major 20th-century ideological currents—liberalism and Marxism. Early 19th-century socialism was connected to humanism. In the 20th century, a humanistic interpretation of Marxism focused on Marx's early writings, viewing Marxism not as "scientific socialism" but as a philosophical critique aimed at the overcoming of "alienation". In the US, liberalism is associated mostly with humanistic principles, which is distinct from the European use of the same word, which has economic connotations. In the post-1945 era, Jean-Paul Sartre and other French existentialists advocated for humanism, linking it to socialism while trying to stay neutral during the Cold War.
In psychology and counseling
Humanist counseling is humanism-inspired applied psychology and a major current of counseling. Approaches include discussion and critical thinking, responding to existential anxiety, and focusing on the social and political dimensions of problems. Humanist counseling focuses on respecting the client's worldview and placing it in the correct cultural context. The approach emphasizes an individual's inherent drive towards self-actualization and creativity. It also recognizes the importance of moral questions about one's interactions with people according to one's worldview. This is examined using a process of dialogue. Humanist counseling originated in the Netherlands after World War II.
Humanistic counseling is based on the works of psychologists Carl Rogers and Abraham Maslow. It introduced a positive, humanistic psychology in response to what Rogers and Maslow viewed as the over-pessimistic view of psychoanalysis in the early 1960s. Other sources include the philosophies of existentialism and phenomenology.
Some modern counseling organizations have humanist origins, such as the British Association for Counselling and Psychotherapy, which was founded by Harold Blackham, who developed it alongside the British Humanist Association's Humanist Counselling Service. Modern-day humanist pastoral care in the UK and the Netherlands draws on elements of humanistic psychology.
Demographics
Demographic data about humanists is sparse. Scholar Yasmin Trejo examined the results of Pew Research Center's 2014 Religious Landscape Study. Trejo did not use self-identification to measure humanists but combined the answers of two questions: "Do you believe in God or a universal spirit?" (she chose those who answered 'no') and "when it comes to questions of right or wrong, which of the following do you look to most for guidance?" (picking answers 'scientific information' and 'philosophy and reason'). According to Trejo, most humanists identify as atheist or agnostic (37% and 18%), 29% as "nothing in particular", while 16% of humanists identify as religious. She also found most humanists (80%) were raised in a religious background. Sixty percent of humanists are married to non-religious spouses, while one quarter are married to a Christian. There is a gender divide among humanists: 67 percent are male. Trejo says this can be explained by the fact that more males self-identify as atheist, while women have stronger connections to religion because of socialization, community influence, and stereotypes; some women, especially Catholic Latinas, are expected to be religious and many of them abide by their community expectations. Other findings note the high level of education of most humanists, indicating a higher socioeconomic status. The population of humanists is overwhelmingly non-Hispanic white; according to Trejo, this is because minority groups are usually very religious.
Criticisms
Western and Christian
Criticism of humanism focuses on its adherence to human rights, which some critics have called "Western". Critics say humanist values have become a tool of Western moral dominance, which is a form of neo-colonialism that leads to oppression and a lack of ethical diversity. Other critics, namely feminists, black activists, postcolonial critics, and gay and lesbian advocates, say humanism is an oppressive philosophy because it is not free from the biases of the white, heterosexual males who shaped it. History professor Samuel Moyn attacks humanism for its connection to human rights. According to Moyn, the concept of human rights in the 1960s was a declaration of anti-colonial struggle, but that idea was later transformed into an impossible utopian vision, replacing the failing utopias of the 20th century. The humanist use of human rights rhetoric thus turns human rights into a moral tool that is impractical and ultimately non-political. He also notes a commonality between humanism and the Catholic discourse on human dignity.
Anthropology professor Talal Asad argues humanism is a project of modernity and a secularized continuation of Western Christian theology. According to Asad, just as the Catholic Church passed the Christian doctrine of love to Africa and Asia while assisting in the enslavement of large parts of their population, humanist values have at times been a pretext for Western countries to expand their influence to other parts of the world to humanize "barbarians". Asad has also said humanism is not a purely secular phenomenon but takes the idea of the essence of humanity from Christianity. According to Asad, Western humanism cannot incorporate other humanistic traditions, such as those from India and China, without subsuming and ultimately eliminating them.
Sociology professor Didier Fassin has stated that humanism's focus on empathy and compassion, rather than goodness and justice, is a problem. According to Fassin, humanism originated in the Christian tradition, particularly the Parable of the Good Samaritan, in which empathy is universalized. Fassin has also argued that humanism's central essence, the sanctity of human life, is a religious victory hidden in a secular wrapper.
Amoral and materialistic
The main criticism from evangelical Christians, such as Tim LaHaye, is that humanism destroys traditional family and moral values. According to Corliss Lamont, this criticism is a malicious campaign by religious fanatics, the so-called Moral Majority, which needs a demonic scapegoat to rally its followers. Other religious opponents scorn humanism, stating that it is materialistic and thereby diminishes humanity because it denies the spiritual nature and needs of man. Also, they argue, because humanism makes the acquisition of material goods the goal of life, it produces greed and selfishness. In response to this criticism, Norman states that there is absolutely no reason why humanists should be committed to the view that the only things worth living for are 'material goods'. Such an accusation, he says, is based on a "sloppy" understanding of materialism. However, he does acknowledge a "tension" in humanism: because of its championing of scientific knowledge, it appears to be committed to a materialistic conception of human beings as physical systems, and therefore as not much different from anything else in the universe.
Vague and indefinable
Humanism has frequently been criticised for its vagueness and the difficulty of defining the term. According to Paul Kurtz, "Humanism is so charged with levels of emotion and rhetoric that its meaning is often vague and ambiguous". For Giustiniani, "the meaning of 'humanism' has so many shades that to analyze all of them is hardly feasible". Nicolas Walter points out that most of the people in the past who have called themselves or been called humanists would reject many of today's tenets. The origins of humanism, he writes, "are so contradictory and confusing that it is often meaningless on its own". Andrew Copson notes that the suggestion that there are two types of humanism – religious and secular – "has begun to seriously muddy the conceptual water". According to Tony Davies, the meaning of "humanism" is a semantic tangle that makes it difficult to grasp. For Sarah Bakewell, humanism "is a semantic cloud of meanings and implications, none attachable to any particular theorist or practitioner".
Yet, the difficulty of defining humanism is not necessarily a problem. Davies avoids offering a definition, choosing instead "to stress the plurality, complexity and fluidity of meanings". Jeaneane Fowler argues that humanism is indefinable precisely because of its "particular dynamism", and that the acknowledged vagueness of the term, "far from being a disadvantage, is an asset".
Antihumanism
Antihumanism is a philosophical theory that rejects humanism as a pre-scientific ideology. This argument developed during the 19th and 20th centuries in parallel with the advancement of humanism. Prominent thinkers questioned the metaphysics of humanism and the human nature of its concept of freedom. Nietzsche, while departing from a humanistic, pro-Enlightenment viewpoint, criticized humanism for illusions on a number of topics, especially the nature of truth. According to Nietzsche, objective truth is an anthropomorphic illusion and humanism is meaningless, and replacing theism with reason and science simply replaces one religion with another.
According to Karl Marx, humanism is a bourgeois project that inaccurately attempts to present itself as radical. After the atrocities of World War II, questions about human nature and the concept of humanity were renewed. During the Cold War, influential Marxist philosopher Louis Althusser introduced the term "theoretical antihumanism" to attack both humanism and humanist-like socialist currents, eschewing more structural and formal interpretations of Marx. According to Althusser, Marx's early writings resonate with the humanistic idealism of Hegel, Kant, and Feuerbach, but Marx radically moved toward scientific socialism in 1845, rejecting concepts such as the essence of man.
Humanist organizations
Humanist organizations exist in several countries; Humanists International is their global umbrella organization. The three countries with the highest numbers of Humanists International member organizations are the UK, India, and the US. The largest humanist organization is the Norwegian Humanist Association. Humanists UK – formerly the British Humanist Association – and the American Humanist Association are two of the oldest humanist organizations.
In 2015, London-based Humanists UK had around 28,000 members. Its membership includes some high-profile people such as Richard Dawkins, Brian Cox, Salman Rushdie, Polly Toynbee, and Stephen Fry, who are known for their participation in public debate, promoting secularism, and objecting to state funding for faith-based institutions. Humanists UK organizes and conducts non-religious ceremonies for weddings, namings, comings of age, and funerals.
The American Humanist Association (AHA) was formed in 1941 from previous humanist associations. Its journal The Humanist is the continuation of a previous publication The Humanist Bulletin. In 1953, the AHA established the "Humanist of the Year" award to honor individuals who promote science. By the 1970s, it became a well-recognized organization, initiating campaigns for abortion rights and opposing discriminatory policies. This resulted in the organization becoming a target of the religious right by the 1980s.
See also
Notes
References
Sources
Further reading
External links
American Humanist Association
International Humanist and Ethical Union
Humanists UK | 0.771843 | 0.999418 | 0.771394 |
History of ethics | Ethics is the branch of philosophy that examines right and wrong moral behavior, moral concepts (such as justice, virtue, duty) and moral language. Ethics or moral philosophy is a branch of philosophy that "involves systematizing, defending, and recommending concepts of right and wrong behavior". The field of ethics, along with aesthetics, concerns matters of value, and thus comprises the branch of philosophy called axiology.
Various ethical theories pose various answers to the question "What is the greatest good?" and elaborate a complete set of proper behaviors for individuals and groups. Ethical theories are closely related to forms of life in various social orders.
Origins
The epic poems that stand at the beginning of many world literatures, such as the Mesopotamian Epic of Gilgamesh, Homer's Iliad and the Icelandic Eddas, portray a set of values that suit the strong leader of a small tribe. Valour and success are the principal qualities of a hero and are generally not constrained by moral considerations. Revenge and vendetta are appropriate activities for heroes. The gods that appear in such epics are not defenders of moral values but are capricious forces of nature and are to be feared and propitiated.
More strictly ethical claims are found occasionally in the literature of ancient civilizations that is aimed at lower classes of society. The Sumerian Farmer's Almanac and the Egyptian Instruction of Amenemope both advise farmers to leave some grain for poor gleaners, and promise favours from the gods for doing so. A number of ancient religions and ethical thinkers also put forward some version of the golden rule, at least in its negative version: do not do to others what you do not want done to yourself.
Ancient Greek ethics
While Greek moral thought was originally based on mythology, which provided moral meaning but no comprehensive framework, from the 600s BC a new moral approach emerged which used rational arguments instead, leading to the rise of philosophy as a distinct mode of thought. This has been especially attributed to Socrates. The Socratic method aimed to establish moral truths by questioning the beliefs of others, rather than by explaining them directly. He opposed the moral relativism of the Sophists, insisting on the formulation of moral principles from first principles. As portrayed in Plato's Republic, he articulates the greatest good as the transcendent "form of good itself". In his personal life, Socrates lived extremely morally: he was chaste, disciplined, pious, responsible, and cared for his friends. In the so-called Euthyphro dilemma, he raised the problem of whether divine action was motivated by it being good, or whether it was good because it was divine. In Gorgias he defends the notion that it is better to suffer injustice than to do it.
The key work of Plato's ethics was the Republic, which was focused on conceiving justice, a concept which for Plato was inclusive of wider morality as well. In a dialogue, Thrasymachus argued that conventional morality was a ruse invented to keep the elite in power, which should be discarded in favour of self-interest. Plato responded by planning a utopia and giving a metaphysical theory of what is good. He argued there were five regimes into which different societies could be divided, with the best one being aristocracy, in which "the desires of the inferior many are controlled by the wisdom and desires of the superior few". In contrast, democracy would lead to the degradation of culture and morality, with him arguing that "extreme freedom can't be expected to lead to anything but a change to extreme slavery". Whereas ordinary people were living in an illusion, demonstrated by the allegory of the cave, the theory of forms suggested that objective definitions, as looked for by Socrates, did actually exist. The highest form was that of the Good, which gave purpose for everything in the world and could only be understood by the philosophers.
Aristotle's ethics builds upon Plato's with important variations. Aristotle defined the good as "that at which all things aim". While many different goods were being pursued by different people and activities, that good which is being pursued for its own sake was the supreme good, or what he called eudaimonia, which has been translated as 'happiness' but may be more broadly described as 'flourishing', and involves "living well and doing well", not mere pleasure (which will itself follow). A "great-souled" citizen who lives a life of virtue can expect to achieve eudaimonia, which Aristotle argues is the highest good for man. Following Plato, Aristotle gives a significant role in moral life to the virtues, fixed habits of behaviour that lead to good outcomes; the main virtues are courage, justice, prudence and temperance. The highest form of life is, however, purely intellectual activity. However, the virtues for him are merely the means to an end. Furthermore, he disagreed with Plato on there being a universal transcendental good, instead seeing ethics as practical and particular. Rather, the virtues should be based on finding the golden mean between extremes.
Later Greek schools of philosophy, such as the Epicureans and Stoics, debated the conditions of the good life. Both of these schools argued that tranquility should be the aim of life but disagreed on the means of getting there, despite both claiming the Socratic tradition. Epicurus taught that the greatest good was pleasure and freedom from pain. However, the latter was more important, as indulgences should be avoided so they did not lead to want and therefore suffering. Instead, the Epicureans emphasized the quiet enjoyment of pleasures, especially mental pleasure, free of fear and anxiety. Founded by Zeno of Citium, the Stoics thought the greatest good was not pleasure but reason and everything in accord with reason, even if painful. Hence, they praised the life of reason lived in accordance with nature. They had been influenced by the Cynics' and Socrates' asceticism and indifference to adversity. The acceptance of the inevitable subsequently became a key aspect of their thinking, based also on their belief in determinism. Whereas the Epicureans believed the universe was essentially meaningless, the Stoics believed that God (understood to be one with the universe) gave meaning to the world. In response to the problem of evil, the Stoics developed the concept of theodicy. The Stoic philosopher Hierocles also developed the concept of morality being based on concentric circles of proximity to the individual, such as family, community, and humanity, with the process of bringing the self and the other together called oikeiôsis.
Indian ethics
The foundation of Hinduism is in the epic of Mahabharata, which contains the concept of dharma, a conception of natural law and the duties required for the upholding of the natural order. Hinduism itself is viewed by its followers as Sanātana Dharma, or the 'Eternal Law', which binds everyone. The four aims of Hinduism are moksha (enlightenment), artha (wealth), kama (pleasure), and dharma. The significance of moksha is that only it can break through maya, the illusion hiding reality, which requires both understanding the impermanence of material reality as well as the attainment of an understanding of the unity of the Self (atman) and the foundation of being (brahman). Moksha also means breaking free from the cycle of reincarnation, which is governed by karma, the accumulated balance of good and bad actions by an individual. This was in turn used as a justification for the caste system. During the Axial Age, asceticism and becoming a hermit increased in popularity, sometimes as a reaction to the prevailing social structures. Two significant belief systems emerged from this reaction. The first was Jainism, formalised by the ascetic philosopher Mahavira, according to which enlightenment comes through a perfectly ethical life that necessitates a complete renunciation of the killing of any living beings, including the smallest of insects. The other was Buddhism, founded by the Buddha. Other responses to the era included materialist schools such as Charvaka, which embraced hedonism and rejected spirituality.
The most important of the Buddha's teaching was the Dhammacakkappavattana Sutta, at the core of which were the Four Noble Truths. The first of these was duḥkha, the suffering that is part of life. This is also one of the three marks of existence which define life, the others being anitya, the impermanence of everything, and anatman, or the non-existence of the self across time. The second Noble Truth was that all human suffering is caused by desire that cannot be satisfied, and that only by renouncing desire could the suffering be ended, which was the Third Noble Truth. The final Noble Truth was that desire could only be relinquished by following the Noble Eightfold Path. The Eightfold Path consists of eight practices: right view, right resolve, right speech, right conduct, right livelihood, right effort, right mindfulness, and right samadhi ('meditative absorption or union'; alternatively, equanimous meditative awareness). The Middle Way refers to major aspects of the teaching of the Buddha, either to the spiritual practice that steers clear of both extreme asceticism and sensual indulgence, which is defined as the Noble Eightfold Path, or the Buddha's avoiding of eternalism (or absolutism) and annihilationism (and nihilism). In Mahāyāna Buddhism, śūnyatā ('emptiness') refers to the tenet that "all things are empty of intrinsic existence and nature (svabhava)".
Chinese ethics
Confucius, who lived around the same time as the Buddha, was focused mostly on ethical philosophy. He was especially interested in how to create a harmonious society, which he believed was based on two human qualities: ren and li. Ren, the highest principle, describes humaneness, encompassing all the qualities required for ideal behaviour between people. Confucius argued that a form of the Golden Rule should be the guiding principle of all actions. However, he also believed that different forms of behaviour were appropriate in different relationships. The second principle of li embodied this by establishing the need to follow tradition, rituals, and other conventional norms.
Natural law ethics
In the Middle Ages, Thomas Aquinas developed a synthesis of Biblical and Aristotelian ethics called natural law theory, according to which the nature of humans determines what is right and wrong. For example, murder is wrong because life is essential to humans so depriving someone of it is inherently an evil. Education is needed for humans, and is their right, because their intellectual nature requires developing. Natural law theory remains at the heart of Catholic moral teaching, for example in its positions on contraception and other controversial moral issues.
The Catholic practice of compulsory confession led to the development of manuals of casuistry, the application of ethical principles to detailed cases of conscience, such as the conditions of a just war.
Kantian ethics
Immanuel Kant, in the 18th century, argued that right and wrong are founded on duty, which issues a Categorical Imperative to us, a command that, of its nature, ought to be obeyed. An action is only truly moral if done from a sense of duty, and the most valuable thing is a human will that has decided to act rightly. To decide what duty requires, Kant proposes the principle of universalizability: correct moral rules are those everyone could adopt.
Kant's philosophy marks a number of important conceptual shifts in philosophical thinking about ethics. Kant argues that questions about happiness should not be a focus in ethical thought, because ethics should be universal while happiness may involve very different modes of life for different individuals. He also believed this approach was necessary if an ethical theory was to avoid becoming 'heteronomous'; that is, locating the source of proper moral motivation outside of properly moral concerns.
Utilitarianism
In 19th century Britain, Jeremy Bentham and John Stuart Mill advocated utilitarianism, the view that right actions are those that are likely to result in the greatest happiness of the greatest number. Utilitarianism remains popular in the twenty-first century.
Both Kantianism and Utilitarianism provide ethical theories that can support contemporary liberal political developments, and associated enlightenment ways of conceiving of the individual.
Twentieth century
The early twentieth century saw many debates on metaethics, that is, philosophical theory on the nature of ethics. Views ranged from moral realism, which holds that moral truths are about mind-independent realities, to evolutionary ethics, which holds that ethical practices are merely evolved behaviors that led to evolutionary success, to the error theory of J. L. Mackie, which held that the entire notion of ethical obligation is a mistake.
Reflections on the Holocaust, such as those of Hannah Arendt, led to a deepening appreciation of the reality of extreme evil. The Holocaust impacted other Jewish philosophers immensely, for instance, the post-war period saw Emmanuel Levinas develop his 'ethics of the other' and situate ethics as 'first philosophy'. This philosophy showed a focus on the relation to the other in distress as central to the development of ethics and placed ethical theories center-stage in philosophy. Also, in reaction to the Holocaust, rights theories, as expressed for example in the 1948 Universal Declaration of Human Rights, asserted the inalienable moral rights of humans to life, education, and other basic goods. Another response to the atrocities of World War II included existential reflections on the meaning of life, leading to approaches to ethics based on "the situation" and personal interaction.
In the late 20th century, there was a so-called 'aretaic turn' and renewed interest in virtue ethics. This turn is often traced to a paper by G.E.M. Anscombe entitled "Modern Moral Philosophy". This approach was then furthered and popularized by figures such as Philippa Foot, Alasdair MacIntyre, Rosalind Hursthouse as well as Paul Ricoeur. The revival of this ethical position congruently saw a return to engagement with earlier philosophers associated with moral philosophy such as Thomas Aquinas and Aristotle.
Professional and applied ethics
While mid-twentieth century ethics mostly dealt with theoretical issues, medical ethics continued to deal with issues of practice. The 1970s saw a revival of other fields of applied ethics, the consideration of detailed practical cases in bioethics, animal ethics, business ethics, environmental ethics, computer ethics and other special fields. The development of new technologies produced many new issues requiring ethical debate.
See also
Ethics
Ethics in religion
History of ethics in Ancient Greece
List of years in philosophy
References
Sources
Further reading
External links
Ancient ethical theory (Stanford Encyclopedia of Philosophy)
Ancient ethics (Internet Encyclopedia of Philosophy)
The Natural Law Tradition in Ethics (Stanford Encyclopedia of Philosophy)
Concept | A concept is an abstract idea that serves as a foundation for more concrete principles, thoughts, and beliefs.
Concepts play an important role in all aspects of cognition. As such, concepts are studied within such disciplines as linguistics, psychology, and philosophy, and these disciplines are interested in the logical and psychological structure of concepts, and how they are put together to form thoughts and sentences. The study of concepts has served as an important flagship of an emerging interdisciplinary approach, cognitive science.
In contemporary philosophy, three understandings of a concept prevail:
mental representations, such that a concept is an entity that exists in the mind (a mental object)
abilities peculiar to cognitive agents (mental states)
Fregean senses, abstract objects rather than a mental object or a mental state
Concepts are classified into a hierarchy, higher levels of which are termed "superordinate" and lower levels termed "subordinate". Additionally, there is the "basic" or "middle" level at which people will most readily categorize a concept. For example, a basic-level concept would be "chair", with its superordinate, "furniture", and its subordinate, "easy chair".
Concepts may be exact or inexact. When the mind makes a generalization such as the concept of tree, it extracts similarities from numerous examples; the simplification enables higher-level thinking. A concept is instantiated (reified) by all of its actual or potential instances, whether these are things in the real world or other ideas.
Concepts are studied as components of human cognition in the cognitive science disciplines of linguistics, psychology, and philosophy, where an ongoing debate asks whether all cognition must occur through concepts. Concepts are regularly formalized in mathematics, computer science, databases and artificial intelligence. Examples of specific high-level conceptual classes in these fields include classes, schema or categories. In informal use the word concept often just means any idea.
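Because the hierarchy of concepts described above (superordinate, basic, subordinate) maps naturally onto the class hierarchies used in these fields, a small sketch can make the connection concrete. The following minimal Python sketch uses the "chair"/"furniture"/"easy chair" example from earlier; the class names are illustrative only and do not come from any particular knowledge-representation system.

# Minimal sketch: the superordinate/basic/subordinate concept hierarchy
# modeled as a class hierarchy. Illustrative only.

class Furniture:             # superordinate level: the most general category
    pass

class Chair(Furniture):      # basic level: where people most readily categorize
    pass

class EasyChair(Chair):      # subordinate level: the most specific category
    pass

item = EasyChair()
# An instance of a subordinate concept is also an instance of every
# concept above it in the hierarchy.
print(isinstance(item, Chair))      # True
print(isinstance(item, Furniture))  # True

Here class inheritance plays the role of category inclusion: every easy chair is a chair, and every chair is furniture.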
Ontology of concepts
A central question in the study of concepts is the question of what they are. Philosophers construe this question as one about the ontology of concepts—what kind of things they are. The ontology of concepts determines the answer to other questions, such as how to integrate concepts into a wider theory of the mind, what functions are allowed or disallowed by a concept's ontology, etc. There are two main views of the ontology of concepts: (1) Concepts are abstract objects, and (2) concepts are mental representations.
Concepts as mental representations
The psychological view of concepts
Within the framework of the representational theory of mind, the structural position of concepts can be understood as follows: Concepts serve as the building blocks of what are called mental representations (colloquially understood as ideas in the mind). Mental representations, in turn, are the building blocks of what are called propositional attitudes (colloquially understood as the stances or perspectives we take towards ideas, be it "believing", "doubting", "wondering", "accepting", etc.). And these propositional attitudes, in turn, are the building blocks of our understanding of thoughts that populate everyday life, as well as folk psychology. In this way, we have an analysis that ties our common everyday understanding of thoughts down to the scientific and philosophical understanding of concepts.
The physicalist view of concepts
In a physicalist theory of mind, a concept is a mental representation, which the brain uses to denote a class of things in the world. This is to say that it is literally a symbol or group of symbols together made from the physical material of the brain. Concepts are mental representations that allow us to draw appropriate inferences about the type of entities we encounter in our everyday lives. Concepts do not encompass all mental representations, but are merely a subset of them. The use of concepts is necessary to cognitive processes such as categorization, memory, decision making, learning, and inference.
Concepts are thought to be stored in long-term cortical memory, in contrast to episodic memory of the particular objects and events which they abstract, which are stored in the hippocampus. Evidence for this separation comes from patients with hippocampal damage, such as patient HM. The abstraction from the day's hippocampal events and objects into cortical concepts is often considered to be the computation underlying (some stages of) sleep and dreaming. Many people (beginning with Aristotle) report memories of dreams which appear to mix the day's events with analogous or related historical concepts and memories, and suggest that they were being sorted or organized into more abstract concepts. ("Sort" is itself another word for concept, and "sorting" thus means to organize into concepts.)
Concepts as abstract objects
The semantic view of concepts suggests that concepts are abstract objects. In this view, concepts are abstract objects of a category existing outside any human mind, rather than mental representations.
There is debate as to the relationship between concepts and natural language. However, it is necessary at least to begin by understanding that the concept "dog" is philosophically distinct from the things in the world grouped by this concept—or the reference class or extension. Concepts that can be equated to a single word are called "lexical concepts".
The study of concepts and conceptual structure falls into the disciplines of linguistics, philosophy, psychology, and cognitive science.
In the simplest terms, a concept is a name or label that regards or treats an abstraction as if it had concrete or material existence, such as a person, a place, or a thing. It may represent a natural object that exists in the real world like a tree, an animal, a stone, etc. It may also name an artificial (man-made) object like a chair, computer, house, etc. Abstract ideas and knowledge domains such as freedom, equality, science, happiness, etc., are also symbolized by concepts. A concept is merely a symbol, a representation of the abstraction. The word is not to be mistaken for the thing. For example, the word "moon" (a concept) is not the large, bright, shape-changing object up in the sky, but only represents that celestial object. Concepts are created (named) to describe, explain and capture reality as it is known and understood.
A priori concepts
Kant maintained the view that human minds possess pure or a priori concepts. Instead of being abstracted from individual perceptions, like empirical concepts, they originate in the mind itself. He called these concepts categories, in the sense of the word that means predicate, attribute, characteristic, or quality. But these pure categories are predicates of things in general, not of a particular thing. According to Kant, there are twelve categories that constitute the understanding of phenomenal objects. Each category is that one predicate which is common to multiple empirical concepts. In order to explain how an a priori concept can relate to individual phenomena, in a manner analogous to an a posteriori concept, Kant employed the technical concept of the schema. He held that the account of the concept as an abstraction of experience is only partly correct. He called those concepts that result from abstraction "a posteriori concepts" (meaning concepts that arise out of experience). An empirical or an a posteriori concept is a general representation (Vorstellung) or non-specific thought of that which is common to several specific perceived objects (Logic, I, 1., §1, Note 1)
A concept is a common feature or characteristic. Kant investigated the way that empirical a posteriori concepts are created.
Embodied content
In cognitive linguistics, abstract concepts are transformations of concrete concepts derived from embodied experience. The mechanism of transformation is structural mapping, in which properties of two or more source domains are selectively mapped onto a blended space (Fauconnier & Turner, 1995; see conceptual blending). A common class of blends are metaphors. This theory contrasts with the rationalist view that concepts are perceptions (or recollections, in Plato's term) of an independently existing world of ideas, in that it denies the existence of any such realm. It also contrasts with the empiricist view that concepts are abstract generalizations of individual experiences, because the contingent and bodily experience is preserved in a concept, and not abstracted away. While the perspective is compatible with Jamesian pragmatism, the notion of the transformation of embodied concepts through structural mapping makes a distinct contribution to the problem of concept formation.
Realist universal concepts
Platonist views of the mind construe concepts as abstract objects. Plato was the starkest proponent of the realist thesis of universal concepts. By his view, concepts (and ideas in general) are innate ideas that were instantiations of a transcendental world of pure forms that lay behind the veil of the physical world. In this way, universals were explained as transcendent objects. Needless to say, this form of realism was tied deeply with Plato's ontological projects. This remark on Plato is not of merely historical interest. For example, the view that numbers are Platonic objects was revived by Kurt Gödel as a result of certain puzzles that he took to arise from the phenomenological accounts.
Sense and reference
Gottlob Frege, founder of the analytic tradition in philosophy, famously argued for the analysis of language in terms of sense and reference. For him, the sense of an expression in language describes a certain state of affairs in the world, namely, the way that some object is presented. Since many commentators view the notion of sense as identical to the notion of concept, and Frege regards senses as the linguistic representations of states of affairs in the world, it seems to follow that we may understand concepts as the manner in which we grasp the world. Accordingly, concepts (as senses) have an ontological status.
Concepts in calculus
According to Carl Benjamin Boyer, in the introduction to his The History of the Calculus and its Conceptual Development, concepts in calculus do not refer to perceptions. As long as the concepts are useful and mutually compatible, they are accepted on their own. For example, the concepts of the derivative and the integral are not considered to refer to spatial or temporal perceptions of the external world of experience. Neither are they related in any way to mysterious limits in which quantities are on the verge of nascence or evanescence, that is, coming into or going out of existence. The abstract concepts are now considered to be totally autonomous, even though they originated from the process of abstracting or taking away qualities from perceptions until only the common, essential attributes remained.
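Boyer's point can be illustrated with the modern definition of the derivative, which makes no appeal to quantities "on the verge of nascence"; the following is a standard textbook formulation, included here only as a worked illustration:

\[
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\]

The limit itself is defined purely logically: for every \(\varepsilon > 0\) there exists \(\delta > 0\) such that \(0 < |h| < \delta\) implies \(\left|\frac{f(x+h)-f(x)}{h} - f'(x)\right| < \varepsilon\). Nothing in this definition refers to perception or to infinitesimal quantities coming into or going out of existence, which is precisely the autonomy Boyer describes.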
Notable theories on the structure of concepts
Classical theory
The classical theory of concepts, also referred to as the empiricist theory of concepts, is the oldest theory about the structure of concepts (it can be traced back to Aristotle), and was prominently held until the 1970s. The classical theory of concepts says that concepts have a definitional structure. Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition. The features entailed by the definition of a concept must be individually necessary and jointly sufficient for membership in the class of things covered by the concept. A feature is considered necessary if every member of the denoted class has that feature. A set of features is considered jointly sufficient if anything possessing all of them is thereby a member of the class. For example, the classic example bachelor is said to be defined by unmarried and man. An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition. Another key part of this theory is that it obeys the law of the excluded middle, which means that there are no partial members of a class; something is either in or out.
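The definitional structure posited by the classical theory can be stated as a biconditional in predicate-logic notation; the following is a minimal sketch using the bachelor example from above:

\[
\forall x \,\bigl(\mathrm{Bachelor}(x) \;\leftrightarrow\; \mathrm{Unmarried}(x) \wedge \mathrm{Man}(x)\bigr)
\]

Each conjunct on the right is individually necessary, their conjunction is jointly sufficient, and by the law of the excluded middle the predicate either holds or fails outright, leaving no room for partial membership.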
The classical theory persisted for so long unquestioned because it seemed intuitively correct and has great explanatory power. It can explain how concepts would be acquired, how we use them to categorize and how we use the structure of a concept to determine its referent class. In fact, for many years it was one of the major activities in philosophy—concept analysis. Concept analysis is the act of trying to articulate the necessary and sufficient conditions for the membership in the referent class of a concept. For example, Shoemaker's classic "Time Without Change" explored whether the concept of the flow of time can include flows where no changes take place, though change is usually taken as a definition of time.
Arguments against the classical theory
Given that most later theories of concepts were born out of the rejection of some or all of the classical theory, it seems appropriate to give an account of what might be wrong with this theory. In the 20th century, philosophers such as Wittgenstein and Rosch argued against the classical theory. There are six primary arguments summarized as follows:
It seems that there simply are no definitions—especially those based in sensory primitive concepts.
It seems as though there can be cases where our ignorance or error about a class means that we either don't know the definition of a concept, or have incorrect notions about what a definition of a particular concept might entail.
Quine's argument against analyticity in Two Dogmas of Empiricism also holds as an argument against definitions.
Some concepts have fuzzy membership. There are items for which it is vague whether or not they fall into (or out of) a particular referent class. This is not possible in the classical theory as everything has equal and full membership.
Experiments and research showed that the assumption of well-defined concepts and categories might not be correct. The researcher Hampton asked participants to judge whether items belonged to particular categories. Hampton did not find that items were either clearly members or clearly non-members. Instead, he found that some items were barely considered category members and others were barely non-members. For example, participants considered sinks barely members of the kitchen-utensil category, while sponges were considered barely non-members, with much disagreement among participants of the study. If concepts and categories were very well defined, such cases should be rare. Since then, many researchers have documented borderline members that are not clearly in or out of a category or concept.
Rosch found typicality effects which cannot be explained by the classical theory of concepts; these sparked prototype theory (see below).
Psychological experiments show no evidence for our using concepts as strict definitions.
Prototype theory
Prototype theory came out of problems with the classical view of conceptual structure. Prototype theory says that concepts specify properties that members of a class tend to possess, rather than must possess. Wittgenstein, Rosch, Mervis, Brent Berlin, Anglin, and Posner are a few of the key proponents and creators of this theory. Wittgenstein describes the relationship between members of a class as family resemblances. There are not necessarily any necessary conditions for membership; a dog can still be a dog with only three legs. This view is particularly supported by psychological experimental evidence for prototypicality effects. Participants willingly and consistently rate objects in categories like 'vegetable' or 'furniture' as more or less typical of that class. It seems that our categories are fuzzy psychologically, and so this structure has explanatory power. We can judge an item's membership of the referent class of a concept by comparing it to the typical member—the most central member of the concept. If it is similar enough in the relevant ways, it will be cognitively admitted as a member of the relevant class of entities. Rosch suggests that every category is represented by a central exemplar which embodies all or the maximum possible number of features of a given category. Lech, Gunturkun, and Suchan explain that categorization involves many areas of the brain. Some of these are: visual association areas, prefrontal cortex, basal ganglia, and temporal lobe.
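The graded, similarity-based membership described above can be made concrete with a small computational sketch. In the following Python fragment, the prototype's feature set and the weights attached to each feature are illustrative assumptions, not empirical data; the sketch only shows the general mechanism of scoring an item by its weighted overlap with the most central member:

# A minimal sketch of graded category membership under prototype theory.
# The prototype's features and weights below are illustrative assumptions.
PROTOTYPE_BIRD = {"flies": 1.0, "feathers": 1.0, "sings": 0.5, "small": 0.5}

def typicality(item_features, prototype=PROTOTYPE_BIRD):
    """Score an item by the weighted share of prototype features it shows."""
    total = sum(prototype.values())
    shared = sum(weight for feature, weight in prototype.items()
                 if feature in item_features)
    return shared / total

# A robin matches the prototype fully; a penguin only partially.
# Membership comes out graded rather than all-or-nothing.
print(typicality({"flies", "feathers", "sings", "small"}))  # 1.0 (robin)
print(typicality({"feathers"}))                             # ~0.33 (penguin)

On this picture, an item counts as a member when its typicality score is high enough in the relevant context, which is exactly where the view's fuzzy category boundaries come from.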
The prototype perspective is proposed as an alternative to the classical approach. While the classical theory requires an all-or-nothing membership in a group, prototypes allow for fuzzier boundaries and are characterized by attributes. Lakoff stresses that experience and cognition are critical to the function of language, and Labov's experiment found that the function an artifact served contributed to what people categorized it as. For example, whether a container held mashed potatoes or tea swayed people toward classifying it as a bowl or a cup, respectively. This experiment also illuminated the optimal dimensions of the prototype for "cup".
Prototypes also deal with the essence of things and to what extent they belong to a category. There have been a number of experiments dealing with questionnaires asking participants to rate something according to the extent to which it belongs to a category. This question is contradictory to the Classical Theory because something is either a member of a category or is not. This type of problem is paralleled in other areas of linguistics such as phonology, with an illogical question such as "is /i/ or /o/ a better vowel?" The Classical approach and Aristotelian categories may be a better descriptor in some cases.
Theory-theory
Theory-theory is a reaction to the previous two theories and develops them further. This theory postulates that categorization by concepts is something like scientific theorizing. Concepts are not learned in isolation, but rather are learned as a part of our experiences with the world around us. In this sense, the structure of concepts relies on their relationships to other concepts as mandated by a particular mental theory about the state of the world. How this is supposed to work is a little less clear than in the previous two theories, but it is still a prominent and notable view. It is supposed to explain some of the issues of ignorance and error that arise in prototype and classical theories, since concepts that are structured around each other seem to account for errors such as classifying a whale as a fish (this misconception came from an incorrect theory about what a whale is like, combined with our theory of what a fish is). When we learn that a whale is not a fish, we are recognizing that whales don't in fact fit the theory we had about what makes something a fish. Theory-theory also postulates that people's theories about the world are what inform their conceptual knowledge of the world. Therefore, analysing people's theories can offer insights into their concepts. In this sense, "theory" means an individual's mental explanation rather than scientific fact. The theory criticizes classical and prototype theory for relying too much on similarities and using them as a sufficient constraint. It suggests that theories or mental understandings contribute more to what has membership in a group than weighted similarities do, and that a cohesive category is formed more by what makes sense to the perceiver. Weights assigned to features have been shown to fluctuate and vary depending on context and experimental task, as demonstrated by Tversky. For this reason, similarities between members may be collateral rather than causal.
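The context-dependence of feature weights that Tversky demonstrated is usually formalized through his contrast model of similarity; the following statement of the model is included here as background, not as something argued for in this article:

\[
\mathrm{sim}(a,b) \;=\; \theta f(A \cap B) \;-\; \alpha f(A \setminus B) \;-\; \beta f(B \setminus A)
\]

Here \(A\) and \(B\) are the feature sets of the two items, \(f\) measures the salience of a set of features, and \(\theta\), \(\alpha\), \(\beta\) are weights that vary with context and task. Because the weights are free parameters rather than fixed properties of the concepts, similarity judgments can shift with the situation, which is the instability the theory-theory points to.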
Ideasthesia
According to the theory of ideasthesia (or "sensing concepts"), activation of a concept may be the main mechanism responsible for the creation of phenomenal experiences. Therefore, understanding how the brain processes concepts may be central to solving the mystery of how conscious experiences (or qualia) emerge within a physical system, e.g., the sourness of the taste of lemon. This question is also known as the hard problem of consciousness. Research on ideasthesia emerged from research on synesthesia, where it was noted that a synesthetic experience requires first an activation of a concept of the inducer. Later research expanded these results into everyday perception.
There is much discussion of which theory of concepts is most effective. Another theory is that of semantic pointers, symbol-like structures built up from perceptual and motor representations.
Etymology
The term "concept" is traced back to 1554–60 (Latin conceptum – "something conceived").
See also
Abstraction
Categorization
Class (philosophy)
Conceptualism
Concept and object
Concept map
Conceptual blending
Conceptual framework
Conceptual history
Conceptual model
Conversation theory
Definitionism
Formal concept analysis
Fuzzy concept
General Concept Lattice
Hypostatic abstraction
Idea
Ideasthesia
Noesis
Notion (philosophy)
Object (philosophy)
Process of concept formation
Schema (Kant)
Intuitive statistics
References
Further reading
Armstrong, S. L., Gleitman, L. R., & Gleitman, H. (1999). What Some Concepts Might Not Be. In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 225–261). Cambridge, Massachusetts: MIT Press.
Carey, S. (1999). Knowledge Acquisition: Enrichment or Conceptual Change? In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 459–489). Cambridge, Massachusetts: MIT Press.
Fodor, J. A., Garrett, M. F., Walker, E. C., & Parkes, C. H. (1999). Against Definitions. In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 491–513). Cambridge, Massachusetts: MIT Press.
Hume, D. (1739). Book One, Part One: Of the Understanding of Ideas, Their Origin, Composition, Connexion, Abstraction, etc. In A Treatise of Human Nature. England.
Murphy, G. (2004). Chapter 2. In The Big Book of Concepts (pp. 11–41). Cambridge, Massachusetts: MIT Press.
Murphy, G., & Medin, D. (1999). The Role of Theories in Conceptual Coherence. In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 425–459). Cambridge, Massachusetts: MIT Press.
Putnam, H. (1999). Is Semantics Possible? In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 177–189). Cambridge, Massachusetts: MIT Press.
Quine, W. (1999). Two Dogmas of Empiricism. In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 153–171). Cambridge, Massachusetts: MIT Press.
Rey, G. (1999). Concepts and Stereotypes. In E. Margolis, & S. Laurence (Eds.), Concepts: Core Readings (pp. 279–301). Cambridge, Massachusetts: MIT Press.
Rosch, E. (1977). Classification of real-world objects: Origins and representations in cognition. In P. Johnson-Laird, & P. Wason, Thinking: Readings in Cognitive Science (pp. 212–223). Cambridge: Cambridge University Press.
Rosch, E. (1999). Principles of Categorization. In E. Margolis, & S. Laurence (Eds.), Concepts: Core Readings (pp. 189–206). Cambridge, Massachusetts: MIT Press.
Wittgenstein, L. (1999). Philosophical Investigations: Sections 65–78. In E. Margolis & S. Laurence (Eds.), Concepts: Core Readings (pp. 171–175). Cambridge, Massachusetts: MIT Press.
The History of the Calculus and Its Conceptual Development, Carl Benjamin Boyer, Dover Publications.
The Writings of William James, University of Chicago Press.
Logic, Immanuel Kant, Dover Publications.
A System of Logic, John Stuart Mill, University Press of the Pacific.
Parerga and Paralipomena, Arthur Schopenhauer, Volume I, Oxford University Press.
Kant's Metaphysic of Experience, H. J. Paton, London: Allen & Unwin, 1936
Conceptual Integration Networks. Gilles Fauconnier and Mark Turner, 1998. Cognitive Science. Volume 22, number 2 (April–June 1998), pp. 133–187.
The Portable Nietzsche, Penguin Books, 1982.
Stephen Laurence and Eric Margolis "Concepts and Cognitive Science". In Concepts: Core Readings, MIT Press pp. 3–81, 1999.
Georgij Yu. Somov (2010). Concepts and Senses in Visual Art: Through the example of analysis of some works by Bruegel the Elder. Semiotica 182 (1/4), 475–506.
Daltrozzo J, Vion-Dury J, Schön D. (2010). Music and Concepts. Horizons in Neuroscience Research 4: 157–167.
External links
Blending and Conceptual Integration
Concepts. A Critical Approach, by Andy Blunden
Conceptual Science and Mathematical Permutations
Concept Mobiles Latest concepts
v:Conceptualize: A Wikiversity Learning Project
Concept simultaneously translated in several languages and meanings
TED-Ed Lesson on ideasthesia (sensing concepts)
Abstraction
Cognitive science
Concepts in metaphysics
Mental content
Ontology
Philosophy of language
Concepts in the philosophy of mind
Semantics
Thought
Objects | 0.771803 | 0.999254 | 0.771227 |
Phronesis | In Ancient Greek philosophy, is a type of wisdom or intelligence concerned with practical action. It implies both good judgment and excellence of character and habits. Classical works about this topic are still influential today. In Aristotelian ethics, the concept was distinguished from other words for wisdom and intellectual virtues—such as and —because of its practical character. The traditional Latin translation is , which is the source of the English word "prudence".
Ancient Greek philosophy
Plato
Plato was a teacher and friend of Aristotle. In some of his dialogues, Socrates proposes that phronesis is a necessary condition for all virtue. Being good is to be an intelligent or reasonable person with intelligent and reasonable thoughts. Having phronesis allows a person to have moral or ethical strength.
In Plato's Meno, Socrates explains how phronesis, a quality synonymous with moral understanding, is the most important attribute to learn, although it cannot be taught and is instead gained through the development of the understanding of one's own self.
Aristotle
In the sixth book of his Nicomachean Ethics, Plato's student Aristotle distinguished between two intellectual virtues: sophia (wisdom) and phronesis, and described the relationship between them and other intellectual virtues. Sophia is a combination of nous, the ability to discern reality, and episteme, which is concerned with things which "could not be otherwise... e.g., the necessary truths of mathematics" and is logically built up and teachable; this involves reasoning concerning universal truths. Phronesis involves not only the ability to decide how to achieve a certain end, but also the ability to reflect upon and determine good ends consistent with the aim of living well overall.
Aristotle points out that although sophia is higher and more serious than phronesis, the highest pursuit of wisdom and happiness requires both, because phronesis facilitates sophia. He also associates phronesis with political ability.
According to Aristotle's theory of rhetoric, phronesis is one of the three types of appeal to character. The other two are respectively appeals to arete (virtue) and eunoia (goodwill).
Gaining phronesis requires experience, according to Aristotle.
Phronesis is concerned with particulars, because it is concerned with how to act in particular situations. One can learn the principles of action, but applying them in the real world, in situations one could not have foreseen, requires experience of the world. For example, if one knows that one should be honest, in certain situations one might act in ways that cause pain and offense; knowing how to apply honesty in balance with other considerations and in specific contexts requires experience.
Aristotle holds that having phronesis is both necessary and sufficient for being virtuous: because phronesis is practical, it is impossible to have phronesis and be akratic; i.e., prudent persons cannot act against their "better judgement".
Modern philosophy
Heidegger
In light of his fundamental ontology, Martin Heidegger interprets Aristotle in such a way that phronesis (and practical philosophy as such) is the original form of knowledge and thus prior to sophia (and theoretical philosophy).
Heidegger interprets the Nicomachean Ethics as an ontology of human existence. The practical philosophy of Aristotle is a guiding thread in his Being and Time, according to which "facticity" names our unique mode of being-in-the-world. Through his "existential analytic", Heidegger says "Aristotelian phenomenology" suggests three fundamental movements—poiesis, praxis, and theoria—and that these have three corresponding dispositions: techne, phronesis, and sophia. Heidegger considers these as modalities of Being inherent in the structure of Dasein as being-in-the-world that is situated within the context of concern and care. According to Heidegger, phronesis in Aristotle's work discloses the right and proper way to Dasein. Heidegger sees phronesis as a mode of comportment in and toward the world, a way of orienting oneself and thus of caring-seeing-knowing and enabling a particular way of being concerned.
While techne is a way of being concerned with things and principles of production, and sophia a way of being concerned with eternal principles, phronesis is a way of being concerned with one's life (qua action) and with the lives of others and all particular circumstances as praxis. Phronesis is a disposition or habit, while deliberation is the mode of bringing about the disclosive appropriation of that action. In other words, deliberation is the way in which the phronetic nature of Dasein is made manifest.
Phronesis is a form of circumspection, connected to conscience and resoluteness of human existence as Dasein. As such it discloses the concrete possibilities of being in a situation, as the starting point of meaningful action, processed with resolution, while facing the contingencies of life.
Heidegger's ontologisation of phronesis has been criticised.
Other uses in psychology
According to Kristjansson, Fowers, Darnell, and Pollard, phronesis is about making decisions regarding moral events or circumstances. There has been recent work to revive the virtue of practical judgement, in the form of Aristotle's phronesis, as a way to overcome disagreements and conflicts.
In Aristotle's work, phronesis is the intellectual virtue that helps turn one's moral instincts into practical action: it inculcates the practical know-how to translate virtue in thought into concrete successful action, and it produces eudaimonia by weighing up the most integral parts of various virtues and competing goals in moral situations. Moral virtues help a person achieve the end, eudaimonia; phronesis is what it takes to figure out the right means to that end. Without moral virtues, phronesis degenerates into an inability to take practical action with regard to ends that are genuine goods for man, and without phronesis we may be lost when it comes to exercising decisive judgment on any moral matter. The concept of phronesis includes the eudaimonia that is the "well-being for all in society."
The common wisdom model was developed by Grossmann, Weststrate, Ardelt, and colleagues to explain the foundation of moral functioning and the strategies for fitting it to the context of the situation at hand, drawing on the work of empirical wisdom scientists who describe wisdom as morally-grounded excellence in social-cognitive processing. The researchers identify the moral grounding as: "balance of self-interests and other interests, pursuit of truth (as opposed to dishonesty), and orientation toward shared humanity". Excellence in social-cognitive processing, in turn, comprises: "context adaptability (e.g. practical or pragmatic reasoning, optimization of behavior towards achieving certain outcomes), perspectivism (e.g. considering diverse perspectives, foresight and long-term thinking), dialectical and reflective thinking (e.g. balancing and integrating points of view, entertaining opposites), and epistemic modesty (e.g. unbiased/accurate thinking, looking through illusions, understanding your own limitations)."
In the social sciences
In After Virtue, Alasdair MacIntyre called for a phronetic social science. He points out that for every prediction made by a social scientific theory there are usually counter-examples. Hence the unpredictability of human beings and human life requires a focus on practical experiences.
In his book Cognitive Capitalism, the psychologist Heiner Rindermann uses the term to describe a rational approach to thinking and acting: "a circumspect and thoughtful way of life in a rational manner". Intelligence supports such a "burgher" lifestyle.
See also
References
Sources and further reading
External links
Aristotelianism
Concepts in ancient Greek ethics
Intelligence
Philosophy of Aristotle
Virtue | 0.776196 | 0.993539 | 0.771181 |
Virtue epistemology | Virtue epistemology is a current philosophical approach to epistemology that stresses the importance of intellectual and specifically epistemic virtues. Virtue epistemology evaluates knowledge according to the properties of the persons who hold beliefs in addition to or instead of the properties of the propositions and beliefs. Some advocates of virtue epistemology also adhere to theories of virtue ethics, while others see only loose analogy between virtue in ethics and virtue in epistemology.
Intellectual virtue has been a subject of philosophy since the work of Aristotle, but virtue epistemology is a development in the modern analytic tradition. It is characterized by efforts to solve problems of special concern to modern epistemology, such as justification and reliabilism, by focusing on the knower as agent in a manner similar to how virtue ethics focuses on moral agents rather than moral acts.
The area has a parallel in the theory of Unity of knowledge and action proposed by Chinese philosopher Wang Yangming.
The Raft and the Pyramid
Virtue epistemology was partly inspired by a recent renewal of interest in virtue concepts among moral philosophers, and partly as a response to the intractability of the competing analyses of knowledge that arose in response to Edmund Gettier. Ernest Sosa introduced intellectual virtue into contemporary epistemological discussion in a 1980 paper called "The Raft and the Pyramid".
Sosa argued that an appeal to intellectual virtue could resolve the conflict between foundationalists and coherentists over the structure of epistemic justification. Foundationalism holds that beliefs are founded or based on other beliefs in a hierarchy, similar to the bricks in the structure of a pyramid. Coherentism, on the other hand, uses the metaphor of a raft in which all beliefs are not tied down by foundations but instead are interconnected due to the logical relationships between each belief. Sosa found a flaw in each of these schools of epistemology, in both cases having to do with the relationship between belief and perception.
Coherentism only allows for justification based on logical relations between all the beliefs within a system of beliefs. However, because perceptual beliefs may not have many logical ties with other beliefs in the system, the coherentist account of knowledge cannot accommodate the importance normally attributed to perceptual information. On the other hand, foundationalism arguably encounters a problem when it attempts to describe how foundational beliefs relate to the sensory experiences that support them.
Coherentism and foundationalism developed as responses to the problems with the "traditional" account of knowledge (as justified true belief) developed by Edmund Gettier in 1963. As a result of Gettier's counterexamples, competing theories were developed, but the disputes between coherentists and foundationalists proved to be intractable. Sosa's paper suggested that virtue may resolve such disputes.
Theory
Virtue epistemology replaces formulaic expressions for apprehending knowledge, such as "S knows that p", by amending these formulas with virtue theory applied to the intellect, so that virtues become potential grounds of "knowledge". This amendment raises problems of its own, however. Some virtue epistemologists use reliabilism as a basis for belief justification, stressing reliable functioning of the intellect.
Virtue epistemology is consistent with some forms of contextualism. Several areas of contextual epistemology attack the problem of knowledge from an objective standpoint. Virtue epistemology attempts to simplify the analysis of knowledge by replacing fixed standards for the highest level of knowledge with the intellectual virtues of the knower. Specifically, it leaves room for cognitive relativism: knowledge is not constant; it can change depending on the context. Under this view, a well-functioning intellectual faculty is a necessary condition for the formation of knowledge. This differs from other areas of epistemology because it takes the state of an individual's intellect into account. As a result of this, social context also has the ability to alter knowledge. Social contexts change over time, making it necessary for beliefs and knowledge to change with them.
Virtue epistemology, similarly to virtue ethics, is based on the intellectual qualities of the individual as opposed to the quality of the belief; virtue epistemology is person-based, rather than belief-based. Consequently, virtue epistemology can stress "epistemic responsibility"—in which an individual is held responsible for the virtue of their knowledge-gathering faculties.
For example, Massimo Pigliucci applies virtue epistemology to critical thinking and suggests the virtuous individual will consider the following:
non-dismissive consideration of arguments
charitable interpretation of opposing arguments
awareness of one's own presuppositions and potential for being mistaken
consultation of expert knowledge
reliability of source material
knowledge of what one is talking about rather than merely repeating others' opinions.
Varieties
Virtue epistemologists differ in the role they believe virtue to play: eliminative virtue replaces epistemic concepts like knowledge and justification with intellectual virtue and intellectual vice, while non-eliminative virtue epistemology retains a role for such traditional concepts and uses virtue to provide substantive explanation of them.
Virtue epistemologists differ in what they believe epistemic virtues to be. Some accounts are Aristotelian, drawing a relationship between intellectual virtue and character in a similar way to the way moral virtue is related to character. "Weak" virtue epistemology, on the other hand, doesn't require any particular commitment or cultivation of intellectual virtue. Abrol Fairweather argues that "weak" virtue epistemologists "merely [use] virtue theory as a novel lexicon for expressing an independent epistemic theory".
Some virtue epistemologists favor the "virtue reliabilist" account of virtues as reliably functioning cognitive faculties, and others favor a "virtue responsibilist" account in which the responsible epistemic conduct of the agent plays a key role.
Virtue reliabilism
Virtue reliabilism emphasizes that the process whereby truth is garnered must be reliable. However, the emphasis is not on the process alone. Instead, the extent of the person's reality-tracking ability determines how virtuous that person's intellect is, and therefore how good their knowledge is.
For Sosa, the more virtuous faculties are related to direct sensory perception and memory, and less virtuous capacities are ones related to beliefs derived from the primary memory or sense experience. Sosa has two criteria for a belief to be "fully apt": It must be "meta-justified"—the agent must have hit the truth as such—and it must be "apt"—the agent must have been displaying his virtuous capacities in claiming such a belief or hitting the truth as such. By analogy, a hunter must not only be able to hit the target with precision and accuracy, but the shot must be one that the hunter should have taken.
John Greco, another advocate of virtue reliabilism, believes that knowledge and justified belief "are grounded in stable and reliable cognitive character. Such character may include both a person's natural cognitive faculties as well as her acquired habits of thought.... So long as such habits are both stable and successful, they make up the kind of character that gives rise to knowledge." This characterization of virtue reliabilism is more inclusive than Sosa's, eschewing the focus on memorial or sensory experience and instead locating virtue in an agent's stable and reliable dispositions to generate successful cognition. Greco makes room for the inclusion of the intellectual virtues typically construed by the responsibilist camp of virtue epistemology, since many of these virtues can be thought of as stable, reliable dispositions of character.
Virtue responsibilism
In virtue responsibilism, the emphasis is not on faculties such as perception and memory. Instead, certain intellectual character traits are valued as more virtuous than others. These can be creativity, inquisitiveness, rational rigor, honesty, or a number of other possibilities. Generally, these theories are normative in nature. A few different approaches are taken.
Some, such as Lorraine Code, think that intellectual virtues involve having the correct cognitive character and epistemic relation to the world rooted in a social context. She sees the acquisition of correct knowledge about the world as the primary "good", and the end towards which our intellectual efforts should be oriented, with the desire for truth as the primary motivating factor for our epistemological virtues.
James Montmarquet's theory of intellectual virtue is similar to Code's, but specifically defines additional intellectual virtues in order to defuse the potential dogmatism or fanaticism that is compatible with Code's desire for truth. The primary virtue is conscientiousness, which focuses on the correct end of intellectual living. In order to obtain conscientiousness, it is important to maintain impartiality, sobriety, and courage.
Linda Trinkaus Zagzebski has proposed a neo-Aristotelian model of virtue epistemology, emphasizing the role of phronesis (practical wisdom) as an architectonic virtue unifying moral and intellectual virtues even more radically than Aristotle proposed. As delineated in her model, the virtues are "a deep and enduring acquired excellence of a person, involving a characteristic motivation to produce a certain desired end and reliable success in bringing about that end." As she sees it, the "characteristic motivation" of an intellectual virtue is the desire for truth, understanding, and other species of cognitive contact with reality. The notion of "reliable success" is invoked in order to avoid issues of well-intentioned but unsuccessful agents who desire truth but use poorly suited methods to pursue it.
Plantinga's theory of warrant
Alvin Plantinga offers another theory of knowledge closely related to virtue epistemology. According to him, knowledge is warranted if one's intellectual faculties are operating as they are designed to. That is, knowledge is valid if it is obtained through the correct operation of the faculties of the intellect which are designed to have an inherent ability, because they are designed that way, to capture and produce true beliefs.
Jonathan Kvanvig's understanding and assertion
In Jonathan Kvanvig's essay Why Should Inquiring Minds Want to Know?: Meno Problems and Epistemological Axiology, he questions whether knowledge deserves its central place in philosophical study. The problem of the value of knowledge originates in the Socratic dialogue written by Plato called Meno. In Meno, Socrates' distinction between "true belief" and "knowledge" forms the basis of the philosophical definition of knowledge as "justified true belief". Socrates explains the similarities and differences between "true belief" and "knowledge", arguing that true beliefs fail to "stay in their place" and must be "tethered" to become knowledge. According to Kvanvig, true belief is all that is necessary to maximize truth and to avoid error, thus dropping justification from the equation of knowledge. He argues that once we recognize what the manipulated boundary notion of a non-Gettierized account of knowledge is, it becomes clear that there is nothing valuable about the anti-Gettier condition on knowledge. Kvanvig acknowledges that true belief falls short of knowledge; however, to him, true belief is no less valuable than knowledge. Kvanvig believes that epistemology should be focused on understanding, an epistemic standing that he maintains is of more value than knowledge and justified true belief.
Potential advantages
Some varieties of virtue epistemology that contain normative elements, such as virtue responsibilism, can provide a unified framework of normativity and value. Others, such as Sosa's account, can circumvent Cartesian skepticism by combining externalist and internalist elements. In this same vein, and because of the inherent flexibility and social nature of some types of virtue epistemology, social conditioning and influence can be understood and explored within an epistemological framework. This flexibility and connection between the internal and the external make virtue epistemology more accessible.
Prominent criticism
One criticism of virtue epistemology has focused upon its characterization of human cognition as grounded in stable character dispositions (e.g. the disposition to use reliable faculties, or one's excellent character traits construed as dispositions). As with a parallel criticism leveled at virtue ethics, virtue theories, whether moral or epistemic, typically consider character traits as stable across time and efficacious as explanatory reasons for persons behaving and thinking as they do. However, this supposition has been challenged by the "situationist critique" in psychology, which argues that human epistemic character changes depending on context, even when that change is epistemically irrelevant. Thus, irrelevant differences in a situation can prompt a drastic change in cognitive behavior.
Reliabilists might characterize this as effecting a drop in reliable functioning, whereas responsibilists would see these variations as negating one's excellent cognitive character. It is therefore argued that virtue theorists should either amend their conception of human psychology to accommodate this or explain how the results of situationist psychological research do not contradict their theory.
See also
Unity of knowledge and action
References
Selected bibliography
Aquino, Frederick D. Communities of Informed Judgment: Newman's Illative Sense and Accounts of Rationality. Washington, D.C.: Catholic University of America Press, 2004.
Axtell, Guy, ed. Knowledge, Belief, and Character: Readings in Contemporary Virtue. Lanham, MD: Rowman & Littlefield, 2000.
_. "Epistemic Luck in Light of the Virtues." In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. *Abrol Fairweather and Linda Zagzebski, 158–177. Oxford: Oxford University Press, 2001.
Blackburn, Simon. "Reason, Virtue, and Knowledge." In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 15–29. Oxford: Oxford University Press, 2001.
Bonjour, Laurence, and Ernest Sosa. Epistemic Justification: Internalism vs. Externalism, Foundations vs. Virtues. Oxford: Blackwell, 2003.
Brady, Michael and Duncan Pritchard. "Moral and Epistemic Virtues." In Moral and Epistemic Virtues, ed. Michael Brady and Duncan Pritchard, 1–12. Malden, MA: Blackwell Publishing Ltd., 2003.
Dalmiya, Vrinda. "Why Should a Knower Care?" Hypatia 17, no. 1 (2002): 34–52.
Fairweather, Abrol. "Epistemic Motivation." In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 63–81. Oxford: Oxford University Press, 2001.
Goldman, Alvin I. "The Unity of the Epistemic Virtues." In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 30–48. Oxford: Oxford University Press, 2001.
Hibbs, Thomas S. "Aquinas, Virtue, and Recent Epistemology." The Review of Metaphysics 52, no. 3 (1999): 573–594.
Hookway, Christopher. "How to be a Virtue Epistemologist." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 183–202. Oxford: Clarendon Press, 2003.
Kawall, Jason. "Other-regarding Epistemic Virtues." Ratio XV 3 (2002): 257–275.
Lehrer, Keith. "The Virtue of Knowledge." In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 200–213. Oxford: Oxford University Press, 2001.
McKinnon, Christine. "Knowing Cognitive Selves." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 227–254. Oxford: Clarendon Press, 2003.
Moros, Enrique R. and Richard J. Umbers. "Distinguishing Virtues from Faculties in Virtue Epistemology." The Southern Journal of Philosophy XLII, (2004): 61–85.
Riggs, Wayne D. "Understanding 'Virtue' and the Virtue of Understanding." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 203–226. Oxford: Clarendon Press, 2003.
Roberts, Robert C. and W. Jay Wood. "Humility and Epistemic Goods." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 257–279. Oxford: Clarendon Press, 2003.
Sosa, Ernest. "The Raft and the Pyramid: Coherence versus Foundations in the Theory of Knowledge." Midwest Studies in Philosophy 5, (1980): 3–25.
_. "For the Love of Truth?" In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 49–62. Oxford: Oxford University Press, 2001.
_. "The Place of Truth in Epistemology." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 155–179. Oxford: Clarendon Press, 2003.
Wood, W. Jay. Epistemology: Becoming Intellectually Virtuous. Downers Grove, IL: InterVarsity Press, 1998.
Zagzebski, Linda. Virtues of the Mind: An Inquiry into the Nature of Virtue and the Ethical Foundations of Knowledge. Cambridge: Cambridge University Press, 1996.
_. "Must Knowers Be Agents?" In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 142–157. Oxford: Oxford University Press, 2001.
_. "The Search for the Source of Epistemic Good." In Moral and Epistemic Virtues, ed. Michael Brady and Duncan Pritchard, 13–28. Malden, MA: Blackwell Publishing Ltd., 2003.
_. "Intellectual Motivation and the Good of Truth." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 135–154. Oxford: Clarendon Press, 2003.
_, and Abrol Fairweather. "Introduction." In Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, ed. Abrol Fairweather and Linda Zagzebski, 3–14. Oxford: Oxford University Press, 2001.
_, and Michael DePaul. "Introduction." In Intellectual Virtue: Perspectives from Ethics and Epistemology, ed. Michael DePaul and Linda Zagzebski, 1–12. Oxford: Clarendon Press, 2003.
External links
Justification (epistemology)
Epistemological schools and traditions
Virtue ethics | 0.795449 | 0.969149 | 0.770908 |
Eristic | In philosophy and rhetoric, eristic (from Eris, the ancient Greek goddess of chaos, strife, and discord) refers to an argument that aims to successfully dispute another's argument, rather than searching for truth. According to T.H. Irwin, "It is characteristic of the eristic to think of some arguments as a way of defeating the other side, by showing that an opponent must assent to the negation of what he initially took himself to believe." Eristic is defined by Rankin as arguing for the sake of conflict, as opposed to resolving conflict.
Use in education
Eristic was a type of "question-and-answer" teaching method popularized by the Sophists, such as Euthydemos and Dionysiodoros. Students learned eristic arguments to "refute their opponent, no matter whether he [said] yes or no in answer to their initial question".
Plato contrasted this type of argument with dialectic and other more reasonable and logical methods (e.g., at Republic 454a). In the dialogue Euthydemus, Plato satirizes eristic. It is more than persuasion, and it is more than discourse. It is a combination that wins an argument without regard to truth. Plato believed that the eristic style "did not constitute a method of argument" because to argue eristically is to consciously use fallacious arguments, which therefore weakens one's position.
Unlike Plato, Isocrates (often considered a Sophist) did not distinguish eristic from dialectic. He held that both lacked a "'useful application' ... that created responsible citizens", which unscrupulous teachers used for "enriching themselves at the expense of the youth."
Philosophical eristic
Schopenhauer considers that only logic pursues truth. For him, dialectic, sophistry, and eristic have no objective truth in view, but only the appearance of it; he believed that they do not seek truth itself but, rather, victory. He groups these last three methods together as "eristic dialectic (contentious argument)."
According to Schopenhauer, Eristic Dialectic is mainly concerned with tabulating and analyzing dishonest stratagems, so that they may at once be recognized and defeated, in order to continue with a productive dialectic debate. It is for this very reason that Eristic Dialectic must admittedly take victory, and not objective truth, for its selfish aim and purpose.
Argumentation theory
Argumentation theory is a field of study that asks critical questions about eristic arguments and the other types of dialogue.
See also
The Art of Being Right
Logical fallacy
Eris (mythology)
Notes
References
Schopenhauer, Arthur. Eristische Dialektik, 1830.
Encyclopædia Britannica defines eristic
External links
Arthur Schopenhauer's Eristische Dialektik:
German and English version of Eristische Dialektik
German version of Eristische Dialektik
Philosophical arguments
Rhetoric
Concepts in epistemology
Eris (mythology)
Philosophical methodology | 0.78783 | 0.978461 | 0.770861 |
Ecomodernism | Ecomodernism is an environmental philosophy which argues that technological development can protect nature and improve human wellbeing through eco-economic decoupling, i.e., by separating economic growth from environmental impacts.
Description
Ecomodernism embraces substituting natural ecological services with energy, technology, and synthetic solutions as long as they help reduce the impact on the environment.
Ecomodernists embrace a wide range of technologies. In food and agriculture, they favour high-tech farming techniques that produce more food using less land and water, thus freeing up areas for conservation (precision agriculture, vertical farming, regenerative agriculture, and genetically modified foods), as well as cellular agriculture (cultured meat), alternative proteins, and fish from aquaculture farms. In water and waste management, they support desalination and water purification technologies, advanced waste recycling, the circular economy, and water conservation. In land management, they endorse sustainable forestry and the ecological restoration of natural habitats and biodiversity, which includes a wide scope of projects such as erosion control, reforestation, removal of non-native species and weeds, revegetation of degraded lands, daylighting streams, the reintroduction of native species (preferably native species that have local adaptation), and habitat and range improvement for targeted species. In the built environment and transport, they embrace Building Information Modeling in green building, green building and green infrastructure, smart grids, resource efficiency, urbanization, smart cities, urban density and verticalization, the adoption of electric vehicles and hydrogen vehicles, and the use of drone light shows, projection mapping, and 3D holograms as sustainable technological alternatives to fireworks. In industry and energy, they back automation, carbon capture and storage and direct air capture, green nanotechnology (nanofilters for water purification, nanomaterials for air pollution control, nanocatalysts for more efficient chemical processes, nanostructured materials for improved solar cells, nanomaterials for enhancing battery performance, nanoparticles for soil and groundwater remediation, and nanosensors for detecting pollutants), energy storage, alternative materials such as bioplastics and bio-based materials, high-tech materials such as graphene and carbon fibers, and a clean energy transition, i.e. replacing low power-density energy sources (e.g. firewood in low-income countries, which leads to deforestation) with high power-density sources, as long as their net impact on the environment is lower (nuclear power plants and advanced renewable energy sources). Finally, they embrace artificial intelligence for resource optimization (predictive maintenance in industrial settings to reduce waste, optimized routing for transportation to reduce fuel consumption, AI-driven climate modeling for better environmental predictions, and supply chain optimization to reduce transportation emissions), climate engineering, synthetic fuels and biofuels, 3D printing and 3D food printing, digitalization, miniaturization, servitization of products, and dematerialization. Key among the goals of an ecomodern environmental ethic is the use of technology to intensify human activity and make more room for wild nature.
Debates that form the foundation of ecomodernism were born from disappointment with traditional environmental organizations that rejected energy sources such as nuclear power, thus leading to increased reliance on fossil gas and an increase of emissions instead of a reduction (e.g. the Energiewende). Coming from evidence-based, scientific and pragmatic positions, ecomodernism engages in the debate on how best to protect natural environments, how to accelerate decarbonization to mitigate climate change, and how to accelerate the economic and social development of the world's poor. In these debates, ecomodernism distinguishes itself from other schools of thought, including ecological economics, degrowth, population reduction, laissez-faire economics, the "soft energy" path, and central planning. Ecomodernism draws on American pragmatism, political ecology, evolutionary economics, and modernism. Diversity of ideas and dissent are claimed values in order to avoid the intolerance born of extremism and dogmatism.
Ecomodernist organisations have been established in many countries, including Germany, Finland, and Sweden. While the word 'ecomodernism' has only been used to describe modernist environmentalism since 2013, the term has a longer history in academic design writing, and ecomodernist ideas were developed within a number of earlier texts, including Martin Lewis's Green Delusions, Stewart Brand's Whole Earth Discipline and Emma Marris's Rambunctious Garden. In their 2015 manifesto, 18 self-professed ecomodernists—including scholars from the Breakthrough Institute, Harvard University, Jadavpur University, and the Long Now Foundation—sought to clarify the movement's vision: "we affirm one long-standing environmental ideal, that humanity must shrink its impacts on the environment to make more room for nature, while we reject another, that human societies must harmonize with nature to avoid economic and ecological collapse."
An Ecomodernist Manifesto
In April 2015, a group of 18 self-described ecomodernists collectively published An Ecomodernist Manifesto.
Reception and criticism
Some environmental journalists have praised An Ecomodernist Manifesto. At The New York Times, Eduardo Porter wrote approvingly of ecomodernism's alternative approach to sustainable development. In an article titled "Manifesto Calls for an End to 'People Are Bad' Environmentalism", Slate's Eric Holthaus wrote "It's inclusive, it's exciting, and it gives environmentalists something to fight for for a change." The science journal Nature published an editorial on the manifesto.
Ecomodernism has been criticized for inadequately recognizing what Holly Jean Buck, Assistant Professor of Environment and Sustainability, describes as the exploitative, violent, and unequal dimensions of technological modernisation. Sociologist Eileen Crist, Associate Professor Emerita, observed that ecomodernism is founded on a Western philosophy of humanism with no regard for "nonhuman freedoms". Human geographer Rosemary-Claire Collard and co-authors assert that ecomodernism is incompatible with neoliberal capitalism, despite the philosophy's claims to the contrary. By contrast, in his book Ecomodernism: Technology, Politics and the Climate Crisis, Jonathan Symons argues that ecomodernism belongs in the social democratic tradition, promoting a third way between laissez-faire and anti-capitalism and calling for transformative state investments in technological transformation and human development. Likewise, in "A sympathetic diagnosis of the Ecomodernist Manifesto", Paul Robbins and Sarah A. Moore describe the similarities and points of departure between ecomodernism and political ecology.
Another major strand of criticism towards ecomodernism comes from proponents of degrowth or the steady-state economy. Eighteen ecological economists published a long rejoinder titled "A Degrowth Response to an Ecomodernist Manifesto", writing "the ecomodernists provide neither a very inspiring blueprint for future development strategies nor much in the way of solutions to our environmental and energy woes."
At the Breakthrough Institute's annual Dialogue in June 2015, several environmental scholars offered a critique of ecomodernism. Bruno Latour argued that the modernity celebrated in An Ecomodernist Manifesto is a myth. Jenny Price argued that the manifesto offered a simplistic view of "humanity" and "nature", which she said are "made invisible" by talking about them in such broad terms.
See also
Bright green environmentalism
Earthship
Ecological civilization
Ecological modernization
Environmental technology
Reflexive modernization
Solarpunk
Technogaianism
Utopian architecture
Nuclear power proposed as renewable energy
References
External links
Bright green environmentalism
Environmentalism
Environmental social science concepts
Environmental philosophy | 0.786478 | 0.97995 | 0.770708 |
Absurdism | Absurdism is the philosophical theory that the universe is irrational and meaningless. It states that trying to find meaning leads people into conflict with a seemingly meaningless world. This conflict can be between rational man and an irrational universe, between intention and outcome, or between subjective assessment and objective worth, but the precise definition of the term is disputed. Absurdism claims that, due to one or more of these conflicts, existence as a whole is absurd. It differs in this regard from the less global thesis that some particular situations, persons, or phases in life are absurd.
Various components of the absurd are discussed in the academic literature and different theorists frequently concentrate their definition and research on different components. On the practical level, the conflict underlying the absurd is characterized by the individual's struggle to find meaning in a meaningless world. The theoretical component, on the other hand, emphasizes more the epistemic inability of reason to penetrate and understand reality. Traditionally, the conflict is characterized as a collision between an internal component, belonging to human nature, and an external component, belonging to the nature of the world. However, some later theorists have suggested that both components may be internal: the capacity to see through the arbitrariness of any ultimate purpose, on the one hand, and the incapacity to stop caring about such purposes, on the other hand. Certain accounts also involve a metacognitive component by holding that an awareness of the conflict is necessary for the absurd to arise.
Some arguments in favor of absurdism focus on the human insignificance in the universe, on the role of death, or on the implausibility or irrationality of positing an ultimate purpose. Objections to absurdism often contend that life is in fact meaningful or point out certain problematic consequences or inconsistencies of absurdism. Defenders of absurdism often complain that it does not receive the attention of professional philosophers it merits in virtue of the topic's importance and its potential psychological impact on the affected individuals in the form of existential crises. Various possible responses to deal with absurdism and its impact have been suggested. The three responses discussed in the traditional absurdist literature are suicide, religious belief in a higher purpose, and rebellion against the absurd. Of these, rebellion is usually presented as the recommended response since, unlike the other two responses, it does not escape the absurd and instead recognizes it for what it is. Later theorists have suggested additional responses, like using irony to take life less seriously or remaining ignorant of the responsible conflict. Some absurdists argue that whether and how one responds is insignificant. This is based on the idea that if nothing really matters then the human response toward this fact does not matter either.
The term "absurdism" is most closely associated with the philosophy of Albert Camus. However, important precursors and discussions of the absurd are also found in the works of Søren Kierkegaard. Absurdism is intimately related to various other concepts and theories. Its basic outlook is inspired by existentialist philosophy. However, existentialism includes additional theoretical commitments and often takes a more optimistic attitude toward the possibility of finding or creating meaning in one's life. Absurdism and nihilism share the belief that life is meaningless, but absurdists do not treat this as an isolated fact and are instead interested in the conflict between the human desire for meaning and the world's lack thereof. Being confronted with this conflict may trigger an existential crisis, in which unpleasant experiences like anxiety or depression may push the affected to find a response for dealing with the conflict. Recognizing the absence of objective meaning, however, does not preclude the conscious thinker from finding subjective meaning in arbitrary places.
Definition
Absurdism is the philosophical thesis that life, or the world in general, is absurd. There is wide agreement that the term "absurd" implies a lack of meaning or purpose but there is also significant dispute concerning its exact definition and various versions have been suggested. The choice of one's definition has important implications for whether the thesis of absurdism is correct and for the arguments cited for and against it: it may be true on one definition and false on another.
In a general sense, the absurd is that which lacks a sense, often because it involves some form of contradiction. The absurd is paradoxical in the sense that it cannot be grasped by reason. But in the context of absurdism, the term is usually used in a more specific sense. According to most definitions, it involves a conflict, discrepancy, or collision between two things. Opinions differ on what these two things are. For example, it is traditionally identified as the confrontation of rational man with an irrational world or as the attempt to grasp something based on reasons even though it is beyond the limits of rationality. Similar definitions see the discrepancy between intention and outcome, between aspiration and reality, or between subjective assessment and objective worth as the source of absurdity. Other definitions locate both conflicting sides within man: the ability to apprehend the arbitrariness of final ends and the inability to let go of commitments to them. In regard to the conflict, absurdism differs from nihilism since it is not just the thesis that nothing matters. Instead, it includes the component that things seem to matter to us nonetheless and that this impression cannot be shaken off. This difference is expressed in the relational aspect of the absurd in that it constitutes a conflict between two sides.
Various components of the absurd have been suggested and different researchers often focus their definition and inquiry on one of these components. Some accounts emphasize the practical components concerned with the individual seeking meaning while others stress the theoretical components about being unable to know the world or to rationally grasp it. A different disagreement concerns whether the conflict exists only internal to the individual or is between the individual's expectations and the external world. Some theorists also include the metacognitive component that the absurd entails that the individual is aware of this conflict.
An important aspect of absurdism is that the absurd is not limited to particular situations but encompasses life as a whole. There is a general agreement that people are often confronted with absurd situations in everyday life. They often arise when there is a serious mismatch between one's intentions and reality. For example, a person struggling to break down a heavy front door is absurd if the house they are trying to break into lacks a back wall and could easily be entered by that route. But the philosophical thesis of absurdism is much more wide-reaching since it is not restricted to individual situations, persons, or phases in life. Instead, it asserts that life, or the world as a whole, is absurd. The claim that the absurd has such a global extension is controversial, in contrast to the weaker claim that some situations are absurd.
The perspective of absurdism usually comes into view when the agent takes a step back from their individual everyday engagements with the world to assess their importance from a bigger context. Such an assessment can result in the insight that the day-to-day engagements matter a lot to us despite the fact that they lack real meaning when evaluated from a wider perspective. This assessment reveals the conflict between the significance seen from the internal perspective and the arbitrariness revealed through the external perspective. The absurd becomes a problem since there is a strong desire for meaning and purpose even though they seem to be absent. In this sense, the conflict responsible for the absurd often either constitutes or is accompanied by an existential crisis.
Components
Practical and theoretical
An important component of the absurd on the practical level concerns the seriousness people bring toward life. This seriousness is reflected in many different attitudes and areas, for example, concerning fame, pleasure, justice, knowledge, or survival, both in regard to ourselves as well as in regard to others. But there seems to be a discrepancy between how seriously we take our lives and the lives of others on the one hand, and how arbitrary they and the world at large seem to be on the other hand. This can be understood in terms of importance and caring: it is absurd that people continue to care about these matters even though they seem to lack importance on an objective level. The collision between these two sides can be defined as the absurd. This is perhaps best exemplified when the agent is seriously engaged in choosing between arbitrary options, none of which truly matters.
Some theorists characterize the ethical sides of absurdism and nihilism in the same way as the view that it does not matter how we act or that "everything is permitted". On this view, an important aspect of the absurd is that whatever higher end or purpose we choose to pursue, it can also be put into doubt since, in the last step, it always lacks a higher-order justification. But usually, a distinction between absurdism and nihilism is made since absurdism involves the additional component that there is a conflict between man's desire for meaning and the absence of meaning.
On a more theoretical view, absurdism is the belief that the world is, at its core, indifferent and impenetrable toward human attempts to uncover its deeper reason or that it cannot be known. According to this theoretical component, it involves the epistemological problem of the human limitations of knowing the world. This includes the thesis that the world is in critical ways ungraspable to humans, both in relation to what to believe and how to act. This is reflected in the chaos and irrationality of the universe, which acts according to its own laws in a manner indifferent to human concerns and aspirations. It is closely related to the idea that the world remains silent when we ask why things are the way they are. This silence arises from the impression that, on the most fundamental level, all things exist without a reason: they are simply there. An important aspect of these limitations to knowing the world is that they are essential to human cognition, i.e. they are not due to following false principles or accidental weaknesses but are inherent in the human cognitive faculties themselves.
Some theorists also link this problem to the circularity of human reason, which is very skilled at producing chains of justification linking one thing to another while trying and failing to do the same for the chain of justification as a whole when taking a reflective step backward. This implies that human reason is not just too limited to grasp life as a whole but that, if one seriously tried to do so anyway, its ungrounded circularity might collapse and lead to madness.
Internal and external
An important disagreement within the academic literature about the nature of absurdism and the absurd focuses specifically on whether the components responsible for the conflict are internal or external. According to the traditional position, the absurd has both internal and external components: it is due to the discrepancy between man's internal desire to lead a meaningful life and the external meaninglessness of the world. On this view, humans have among their desires some transcendent aspirations that seek a higher form of meaning in life. The absurd arises since these aspirations are ignored by the world, which is indifferent to our "need for validation of the importance of our concerns". This implies that the absurd "is not in man ... nor in the world, but in their presence together". This position has been rejected by some later theorists, who hold that the absurd is purely internal because it "derives not from a collision between our expectations and the world, but from a collision within ourselves".
The distinction is important since, on the latter view, the absurd is built into human nature and would prevail no matter what the world was like. So it is not just that absurdism is true in the actual world. Instead, any possible world, even one that was designed by a divine god and guided by them according to their higher purpose, would still be equally absurd to man. In this sense, absurdity is the product of the power of our consciousness to take a step back from whatever it is considering and reflect on the reason of its object. When this process is applied to the world as a whole including God, it is bound to fail its search for a reason or an explanation, no matter what the world is like. In this sense, absurdity arises from the conflict between features of ourselves: "our capacity to recognize the arbitrariness of our ultimate concerns and our simultaneous incapacity to relinquish our commitment to them". This view has the side-effect that the absurd depends on the fact that the affected person recognizes it. For example, people who fail to apprehend the arbitrariness or the conflict would not be affected.
Metacognitive
According to some researchers, a central aspect of the absurd is that the agent is aware of the existence of the corresponding conflict. This means that the person is conscious both of the seriousness they invest and of how it seems misplaced in an arbitrary world. It also implies that other entities that lack this form of consciousness, like non-organic matter or lower life forms, are not absurd and are not faced with this particular problem. Some theorists also emphasize that the conflict remains despite the individual's awareness of it, i.e. that the individual continues to care about their everyday concerns despite their impression that, on the large scale, these concerns are meaningless. Defenders of the metacognitive component have argued that it manages to explain why absurdity is primarily ascribed to human aspirations but not to lower animals: because they lack this metacognitive awareness. However, other researchers reject the metacognitive requirement based on the fact that it would severely limit the scope of the absurd to only those possibly few individuals who clearly recognize the contradiction while sparing the rest. Thus, opponents have argued that not recognizing the conflict is just as absurd as consciously living through it.
Arguments
For
Various popular arguments are often cited in favor of absurdism. Some focus on the future by pointing out that nothing we do today will matter in a million years. A similar line of argument points to the fact that our lives are insignificant because of how small they are in relation to the universe as a whole, both concerning their spatial and their temporal dimensions. The thesis of absurdism is also sometimes based on the problem of death, i.e. that there is no final end for us to pursue since we are all going to die. In this sense, death is said to destroy all our hard-earned achievements like career, wealth, or knowledge. This argument is mitigated to some extent by the fact that we may have positive or negative effects on the lives of other people as well. But this does not fully solve the issue since the same problem, i.e. the lack of an ultimate end, applies to their lives as well. Thomas Nagel has objected to these lines of argument based on the claim that they are circular: they assume rather than establish that life is absurd. For example, the claim that our actions today will not matter in a million years does not directly imply that they do not matter today. And similarly, the fact that a process does not reach a meaningful ultimate goal does not entail that the process as a whole is worthless since some parts of the process may contain their justification without depending on a justification external to them.
Another argument proceeds indirectly by pointing out how various great thinkers have obvious irrational elements in their systems of thought. These purported mistakes of reason are then taken as signs of absurdism that were meant to hide or avoid it. From this perspective, the tendency to posit the existence of a benevolent God may be seen as a form of defense mechanism or wishful thinking to avoid an unsettling and inconvenient truth. This is closely related to the idea that humans have an inborn desire for meaning and purpose, which is dwarfed by a meaningless and indifferent universe. For example, René Descartes aims to build a philosophical system based on the absolute certainty of the "I think, therefore I am" just to introduce without a proper justification the existence of a benevolent and non-deceiving God in a later step in order to ensure that we can know about the external world. A similar problematic step is taken by John Locke, who accepts the existence of a God beyond sensory experience, despite his strict empiricism, which demands that all knowledge be based on sensory experience.
Other theorists argue in favor of absurdism based on the claim that meaning is relational. In this sense, for something to be meaningful, it has to stand in relation to something else that is meaningful. For example, a word is meaningful because of its relation to a language or someone's life could be meaningful because this person dedicates their efforts to a higher meaningful project, like serving God or fighting poverty. An important consequence of this characterization of meaning is that it threatens to lead to an infinite regress: at each step, something is meaningful because something else is meaningful, which in its turn has meaning only because it is related to yet another meaningful thing, and so on. This infinite chain and the corresponding absurdity could be avoided if some things had intrinsic or ultimate meaning, i.e. if their meaning did not depend on the meaning of something else. For example, if things on the large scale, like God or fighting poverty, had meaning, then our everyday engagements could be meaningful by standing in the right relation to them. However, if these wider contexts themselves lack meaning then they are unable to act as sources of meaning for other things. This would lead to the absurd when understood as the conflict between the impression that our everyday engagements are meaningful even though they lack meaning because they do not stand in a relation to something else that is meaningful.
Another argument for absurdism is based on the attempt of assessing standards of what matters and why it matters. It has been argued that the only way to answer such a question is in reference to these standards themselves. This means that, in the end, it depends only on us, that "what seems to us important or serious or valuable would not seem so if we were differently constituted". The circularity and groundlessness of these standards themselves are then used to argue for absurdism.
Against
The most common criticism of absurdism is to argue that life in fact has meaning. Supernaturalist arguments to this effect are based on the claim that God exists and acts as the source of meaning. Naturalist arguments, on the other hand, contend that various sources of meaning can be found in the natural world without recourse to a supernatural realm. Some of them hold that meaning is subjective. On this view, whether a given thing is meaningful varies from person to person based on their subjective attitude toward this thing. Others find meaning in external values, for example, in morality, knowledge, or beauty. All these different positions have in common that they affirm the existence of meaning, in contrast to absurdism.
Another criticism of absurdism focuses on its negative attitude toward moral values. In the absurdist literature, the moral dimension is sometimes outright denied, for example, by holding that value judgments are to be discarded or that the rejection of God implies the rejection of moral values. On this view, absurdism brings with it a highly controversial form of moral nihilism. This means that there is a lack, not just of a higher purpose in life, but also of moral values. These two sides can be linked by the idea that without a higher purpose, nothing is worth pursuing that could give one's life meaning. This worthlessness seems to apply to morally relevant actions equally as to other issues. In this sense, "[b]elief in the meaning of life always implies a scale of values" while "[b]elief in the absurd ... teaches the contrary". Various objections to such a position have been presented, for example, that it violates common sense or that it leads to numerous radical consequences, like that no one is ever guilty of any blameworthy behavior or that there are no ethical rules.
But this negative attitude toward moral values is not always consistently maintained by absurdists and some of the suggested responses on how to deal with the absurd seem to explicitly defend the existence of moral values. Due to this ambiguity, other critics of absurdism have objected to it based on its inconsistency. The moral values defended by absurdists often overlap with the ethical outlook of existentialism and include traits like sincerity, authenticity, and courage as virtues. In this sense, absurdists often argue that it matters how the agent faces the absurdity of their situation and that the response should exemplify these virtues. This aspect is particularly prominent in the idea that the agent should rebel against the absurd and live their life authentically as a form of passionate revolt.
Some see the latter position as inconsistent with the idea that there is no meaning in life: if nothing matters then it should also not matter how we respond to this fact. So absurdists seem to be committed both to the claim that moral values exist and that they do not exist. Defenders of absurdism have tried to resist this line of argument by contending that, in contrast to other responses, it remains true to the basic insight of absurdism and the "logic of the absurd" by acknowledging the existence of the absurd instead of denying it. But this defense is not always accepted. One of its shortcomings seems to be that it commits the is-ought fallacy: absurdism presents itself as a descriptive claim about the existence and nature of the absurd but then goes on to posit various normative claims. Another defense of absurdism consists in weakening the claims about how one should respond to the absurd and which virtues such a response should exemplify. On this view, absurdism may be understood as a form of self-help that merely provides prudential advice. Such prudential advice may be helpful to certain people without pretending to have the status of universally valid moral values or categorical normative judgments. So the value of the prudential advice may merely be relative to the interests of some people but not valuable in a more general sense. This way, absurdists have tried to resolve the apparent inconsistency in their position.
Examples
According to absurdism, life in general is absurd: the absurd is not just limited to a few specific cases. Nonetheless, some cases are more paradigmatic examples than others. The Myth of Sisyphus is often treated as a key example of the absurd. In it, Zeus punishes King Sisyphus by compelling him to roll a massive boulder up a hill. Whenever the boulder reaches the top, it rolls down again, thereby forcing Sisyphus to repeat the same task all over again throughout eternity. This story may be seen as an absurdist parable for the hopelessness and futility of human life in general: just like Sisyphus, humans in general are condemned to toil day in and day out in the attempt to fulfill pointless tasks, which will be replaced by new pointless tasks once they are completed. It has been argued that a central aspect of Sisyphus' situation is not just the futility of his labor but also his awareness of the futility.
Another example of the absurdist aspect of the human condition is given in Franz Kafka's The Trial. In it, the protagonist Josef K. is arrested and prosecuted by an inaccessible authority even though he is convinced that he has done nothing wrong. Throughout the story, he desperately tries to discover what crimes he is accused of and how to defend himself. But in the end, he lets go of his futile attempts and submits to his execution without ever finding out what he was accused of. The absurd nature of the world is exemplified by the mysterious and impenetrable functioning of the judicial system, which seems indifferent to Josef K. and resists all of his attempts to make sense of it.
Importance
Philosophers of absurdism often complain that the topic of the absurd does not receive the attention of professional philosophers it merits, especially when compared to other perennial philosophical areas of inquiry. It has been argued, for example, that this can be seen in the tendency of various philosophers throughout the ages to include the epistemically dubitable existence of God in their philosophical systems as a source of ultimate explanation of the mysteries of existence. In that regard, this tendency may be seen as a form of defense mechanism or wishful thinking constituting a side-effect of the unacknowledged and ignored importance of the absurd. While some discussions of absurdism happen explicitly in the philosophical literature, it is often presented in a less explicit manner in the form of novels or plays. These presentations usually happen by telling stories that exemplify some of the key aspects of absurdism even though they may not explicitly discuss the topic.
It has been argued that acknowledging the existence of the absurd has important consequences for epistemology, especially in relation to philosophy but also when applied more widely to other fields. The reason for this is that acknowledging the absurd includes becoming aware of human cognitive limitations and may lead to a form of epistemic humility.
The impression that life is absurd may in some cases have serious psychological consequences like triggering an existential crisis. In this regard, an awareness both of absurdism itself and the possible responses to it can be central to avoiding or resolving such consequences.
Possible responses
Most researchers argue that the basic conflict posed by the absurd cannot be truly resolved. This means that any attempt to do so is bound to fail even though their protagonists may not be aware of their failure. On this view, there are still several possible responses, some better than others, but none able to solve the fundamental conflict. Traditional absurdism, as exemplified by Albert Camus, holds that there are three possible responses to absurdism: suicide, religious belief, or revolting against the absurd. Later researchers have suggested more ways of responding to absurdism.
A very blunt and simple response, though quite radical, is to commit suicide. According to Camus, for example, the problem of suicide is the only "really serious philosophical problem". It consists in seeking an answer to the question "Should I kill myself?". This response is motivated by the insight that, no matter how hard the agent tries, they may never reach their goal of leading a meaningful life, which can then justify the rejection of continuing to live at all. Most researchers acknowledge that this is one form of response to the absurd but reject it due to its radical and irreversible nature and argue instead for a different approach.
One such alternative response to the apparent absurdity of life is to assume that there is some higher ultimate purpose in which the individual may participate, like service to society, progress of history, or God's glory. While the individual may only play a small part in the realization of this overarching purpose, it may still act as a source of meaning. This way, the individual may find meaning and thereby escape the absurd. One serious issue with this approach is that the problem of absurdity applies to this alleged higher purpose as well. So just like the aims of a single individual life can be put into doubt, this applies equally to a larger purpose shared by many. And if this purpose is itself absurd, it fails to act as a source of meaning for the individual participating in it. Camus identifies this response as a form of suicide as well, pertaining not to the physical but to the philosophical level. It is a philosophical suicide in the sense that the individual just assumes that the chosen higher purpose is meaningful and thereby fails to reflect on its absurdity.
Traditional absurdists usually reject both physical and philosophical suicide as the recommended response to the absurd, usually with the argument that both these responses constitute some form of escape that fails to face the absurd for what it is. Despite the gravity and inevitability of the absurd, they recommend that we should face it directly, i.e. not escape from it by retreating into the illusion of false hope or by ending one's life. In this sense, accepting the reality of the absurd means rejecting any hopes for a happy afterlife free of those contradictions. Instead, the individual should acknowledge the absurd and engage in a rebellion against it. Such a revolt usually exemplifies certain virtues closely related to existentialism, like the affirmation of one's freedom in the face of adversity as well as accepting responsibility and defining one's own essence. An important aspect of this lifestyle is that life is lived passionately and intensely by inviting and seeking new experiences. Such a lifestyle might be exemplified by an actor, a conqueror, or a seduction artist who is constantly on the lookout for new roles, conquests, or attractive people despite their awareness of the absurdity of these enterprises. Another aspect lies in creativity, i.e. that the agent sees themselves as and acts as the creator of their own works and paths in life. This constitutes a form of rebellion in the sense that the agent remains aware of the absurdity of the world and their part in it but keeps on opposing it instead of resigning and admitting defeat. But this response does not solve the problem of the absurd at its core: even a life dedicated to the rebellion against the absurd is itself still absurd. Defenders of the rebellious response to absurdism have pointed out that, despite its possible shortcomings, it has one important advantage over many of its alternatives: it manages to accept the absurd for what it is without denying it by rejecting that it exists or by stopping one's own existence. Some even hold that it is the only philosophically coherent response to the absurd.
While these three responses are the most prominent ones in the traditional absurdist literature, various other responses have also been suggested. Instead of rebellion, for example, absurdism may also lead to a form of irony. This irony is not sufficient to escape the absurdity of life altogether, but it may mitigate it to some extent by distancing oneself to some degree from the seriousness of life. According to Thomas Nagel, there may be, at least theoretically, two ways of actually resolving the problem of the absurd. This is based on the idea that the absurd arises from the consciousness of a conflict between two aspects of human life: that humans care about various things and that the world seems arbitrary and does not merit this concern. The absurd would not arise if either of the conflicting elements ceased to exist, i.e. if the individual stopped caring about things, as some Eastern religions seem to suggest, or if one could find something that possesses a non-arbitrary meaning that merits the concern. For theorists who give importance to the consciousness of this conflict for the absurd, a further option presents itself: to remain ignorant of it to the extent that this is possible.
Other theorists hold that a proper response to the absurd may neither be possible nor necessary, that it just remains one of the basic aspects of life no matter how it is confronted. This lack of response may be justified through the thesis of absurdism itself: if nothing really matters on the grand scale, then this applies equally to human responses toward this fact. From this perspective, the passionate rebellion against an apparently trivial or unimportant state of affairs seems less like a heroic quest and more like a fool's errand. Jeffrey Gordon has objected to this criticism based on the claim that there is a difference between absurdity and lack of importance. So even if life as a whole is absurd, some facts about life may still be more important than others and the fact that life as a whole is absurd would be a good candidate for the more important facts.
History
Absurdism has its origins in the work of the 19th-century Danish philosopher Søren Kierkegaard, who chose to confront the crisis that humans face with the Absurd by developing his own existentialist philosophy. Absurdism as a belief system was born of the European existentialist movement that ensued, specifically when Camus rejected certain aspects of that philosophical line of thought and published his essay The Myth of Sisyphus. The aftermath of World War II provided the social environment that stimulated absurdist views and allowed for their popular development, especially in the devastated country of France. Foucault viewed Shakespearean theater as a precursor of absurdism.
Immanuel Kant
An idea very close to the concept of the absurd is due to Immanuel Kant, who distinguishes between phenomena and noumena. This distinction refers to the gap between how things appear to us and what they are like in themselves. For example, according to Kant, space and time are dimensions belonging to the realm of phenomena, since this is how sensory impressions are organized by the mind, but may not be found on the level of noumena. The concept of the absurd corresponds to the thesis that there is such a gap and that inherent human limitations may prevent the mind from ever truly grasping reality, i.e. that reality in this sense remains absurd to the mind.
Søren Kierkegaard
A century before Camus, the 19th-century Danish philosopher Søren Kierkegaard wrote extensively about the absurdity of the world, reflecting in his journals both on what the absurd is and on how such an absurdity can be held or believed.
Kierkegaard provides an example in Fear and Trembling (1843), which was published under the pseudonym Johannes de Silentio. In the story of Abraham in the Book of Genesis, Abraham is told by God to kill his son Isaac. Just as Abraham is about to kill Isaac, an angel stops him from doing so. Kierkegaard believes that by virtue of the absurd, Abraham, defying all reason and ethical duties ("you cannot act"), got back his son and reaffirmed his faith ("where I have to act").
Another instance of absurdist themes in Kierkegaard's work appears in The Sickness Unto Death, which Kierkegaard signed with the pseudonym Anti-Climacus. Exploring the forms of despair, Kierkegaard examines the type of despair known as defiance. He describes how such a man would endure this defiance and identifies the three major traits of the Absurd Man, later discussed by Albert Camus: a rejection of escaping existence (suicide), a rejection of help from a higher power, and acceptance of his absurd (and despairing) condition.
According to Kierkegaard in his autobiography The Point of View of My Work as an Author, most of his pseudonymous writings are not necessarily reflective of his own opinions. Nevertheless, his work anticipated many absurdist themes and provided its theoretical background.
Albert Camus
The philosophy of Albert Camus, or more precisely the "Camusian absurd" (French: l'absurde camusien), refers to the absurdist work and philosophical thought of the French writer Albert Camus. This philosophy is influenced by the author's political, libertarian, social, and ecological ideas, and is inspired by earlier philosophical currents such as Greek philosophy, nihilism, Nietzschean thought, and existentialism. It revolves around three major cycles: "the absurd" (l'absurde), "the revolt" (la révolte), and "love" (l'amour). Each cycle is linked to a Greek myth (Sisyphus, Prometheus, Nemesis) and explores specific themes and objects, the common thread remaining the solitude and despair of the human being, constantly driven by the tireless search for the meaning of the world and of life.
The cycle of the absurd, or negation, primarily addresses suicide and the human condition. It is expressed through four of Camus's works: the novel The Stranger and the essay The Myth of Sisyphus (1942), then the plays Caligula and The Misunderstanding (1944). By refusing the refuge of belief, the human being becomes aware that his existence revolves around repetitive and meaningless acts. The certainty of death only reinforces, according to the writer, the feeling of the uselessness of all existence. The absurd is therefore the feeling man experiences when confronted with the absence of meaning in the Universe, the painful realization of his separation from the world. The question then arises of the legitimacy of suicide.
The cycle of revolt, called the positive, is a direct response to the absurd and is likewise expressed through four of his works: the novel The Plague (1947), the plays The State of Siege (1948) and The Just Assassins (1949), then the essay The Rebel (1951). A positive concept of affirmation of the individual, in which only action and commitment count in the face of the tragedy of the world, revolt is for the writer the way of experiencing the absurd: knowing our fatal destiny and nevertheless facing it. "Man refuses the world as it is, without agreeing to escape it." It is intelligence grappling with the "unreasonable silence of the world". Renouncing eternal life frees us from the constraints imposed by an improbable future; man gains freedom of action, lucidity, and dignity.
The philosophy of Camus therefore culminates in a singular humanism. Advancing a message of lucidity, resilience, and emancipation in the face of the absurdity of life, it encourages people to create their own meanings through personal choices and commitments, and to embrace their freedom to the fullest. Camus affirms that, even in the absurd, there is room for passion and rebellion: although the Universe may be indifferent to our search for meaning, this search is in itself meaningful. In The Myth of Sisyphus, despite his absurd destiny, Sisyphus finds a form of liberation in his incessant work: "one must imagine Sisyphus happy". With the cycle of love and the "midday thought" (French: la pensée de midi), the philosophy of the absurd is completed by a principle of measure and pleasure, close to Epicureanism.
Though the notion of the 'absurd' pervades all Albert Camus's writing, The Myth of Sisyphus is his chief work on the subject. In it, Camus considers absurdity as a confrontation, an opposition, a conflict or a "divorce" between two ideals. Specifically, he defines the human condition as absurd, as the confrontation between man's desire for significance, meaning and clarity on the one hand—and the silent, cold universe on the other. He continues that there are specific human experiences evoking notions of absurdity. Such a realization or encounter with the absurd leaves the individual with a choice: suicide, a leap of faith, or recognition. He concludes that recognition is the only defensible option.
For Camus, suicide is a "confession" that life is not worth living; it is a choice that implicitly declares that life is "too much." Suicide offers the most basic "way out" of absurdity: the immediate termination of the self and its place in the universe.
The absurd encounter can also arouse a "leap of faith," a term derived from one of Kierkegaard's early pseudonyms, Johannes de Silentio (although the term was not used by Kierkegaard himself), where one believes that there is more than the rational life (aesthetic or ethical). To take a "leap of faith," one must act with the "virtue of the absurd" (as Johannes de Silentio put it), where a suspension of the ethical may need to exist. This faith has no expectations, but is a flexible power initiated by a recognition of the absurd. Camus states that because the leap of faith escapes rationality and defers to abstraction over personal experience, the leap of faith is not absurd. Camus considers the leap of faith as "philosophical suicide," rejecting both this and physical suicide.
Lastly, a person can choose to embrace the absurd condition. According to Camus, one's freedom—and the opportunity to give life meaning—lies in the recognition of absurdity. If the absurd experience is truly the realization that the universe is fundamentally devoid of absolutes, then we as individuals are truly free. "To live without appeal," as he puts it, is a philosophical move to define absolutes and universals subjectively, rather than objectively. The freedom of man is thus established in one's natural ability and opportunity to create their own meaning and purpose; to decide (or think) for oneself. The individual becomes the most precious unit of existence, representing a set of unique ideals that can be characterized as an entire universe in its own right. In acknowledging the absurdity of seeking any inherent meaning, but continuing this search regardless, one can be happy, gradually developing meaning from the search alone. "Happiness and the absurd are two sons of the same earth. They are inseparable."
Camus states in The Myth of Sisyphus: "Thus I draw from the absurd three consequences, which are my revolt, my freedom, and my passion. By the mere activity of consciousness I transform into a rule of life what was an invitation to death, and I refuse suicide." "Revolt" here refers to the refusal of suicide and search for meaning despite the revelation of the Absurd; "Freedom" refers to the lack of imprisonment by religious devotion or others' moral codes; "Passion" refers to the most wholehearted experiencing of life, since hope has been rejected, and so he concludes that every moment must be lived fully.
Relation to other concepts
Existentialism and nihilism
Absurdism originated from (as well as alongside) the 20th-century strains of existentialism and nihilism; it shares some prominent starting points with both, though also entails conclusions that are uniquely distinct from these other schools of thought. All three arose from the human experience of anguish and confusion stemming from existence: the apparent meaninglessness of a world in which humans, nevertheless, are compelled to find or create meaning. The three schools of thought diverge from there. Existentialists have generally advocated the individual's construction of their own meaning in life as well as the free will of the individual. Nihilists, on the contrary, contend that "it is futile to seek or to affirm meaning where none can be found." Absurdists, following Camus' formulation, hesitantly allow the possibility for some meaning or value in life, but are neither as certain as existentialists are about the value of one's own constructed meaning nor as nihilists are about the total inability to create meaning. Absurdists following Camus also devalue or outright reject free will, encouraging merely that the individual live defiantly and authentically in spite of the psychological tension of the Absurd.
Camus himself passionately worked to counter nihilism, as he explained in his essay "The Rebel", while he also categorically rejected the label of "existentialist" in his essay "Enigma" and in the compilation The Lyrical and Critical Essays of Albert Camus, though he was, and still is, often broadly characterized by others as an existentialist. Both existentialism and absurdism entail consideration of the practical applications of becoming conscious of the truth of existential nihilism: i.e., how a driven seeker of meaning should act when suddenly confronted with the seeming concealment, or downright absence, of meaning in the universe.
While absurdism can be seen as a kind of response to existentialism, it can be debated exactly how substantively the two positions differ from each other. The existentialist, after all, does not deny the reality of death. But the absurdist seems to reaffirm the way in which death ultimately nullifies our meaning-making activities, a conclusion the existentialists seem to resist through various notions of posterity or, in Sartre's case, participation in a grand humanist project.
Existential crisis
The basic problem of absurdism is usually not encountered through a dispassionate philosophical inquiry but as the manifestation of an existential crisis. Existential crises are inner conflicts in which the individual wrestles with the impression that life lacks meaning. They are accompanied by various negative experiences, such as stress, anxiety, despair, and depression, which can disturb the individual's normal functioning in everyday life. In this sense, the conflict underlying the absurdist perspective poses a psychological challenge to the affected. This challenge is due to the impression that the agent's vigorous daily engagement stands in incongruity with its apparent insignificance encountered through philosophical reflection. Realizing this incongruity is usually not a pleasant occurrence and may lead to estrangement, alienation, and hopelessness. The intimate relation to psychological crises is also manifested in the problem of finding the right response to this unwelcome conflict, for example, by denying it, by taking life less seriously, or by revolting against the absurd. But accepting the position of absurdism may also have certain positive psychological effects. In this sense, it can help the individual achieve a certain psychological distance from unexamined dogmas and thus help them evaluate their situation from a more encompassing and objective perspective. However, it brings with it the danger of leveling all significant differences and thereby making it difficult for the individual to decide what to do or how to live their life.
Epistemological skepticism
It has been argued that absurdism in the practical domain resembles epistemological skepticism in the theoretical domain. In the case of epistemology, we usually take for granted our knowledge of the world around us even though, when methodological doubt is applied, it turns out that this knowledge is not as unshakable as initially assumed. For example, the agent may decide to trust their perception that the sun is shining but its reliability depends on the assumption that the agent is not dreaming, which they would not know even if they were dreaming. In a similar sense in the practical domain, the agent may decide to take aspirin in order to avoid a headache even though they may be unable to give a reason for why they should be concerned with their own wellbeing at all. In both cases, the agent goes ahead with a form of unsupported natural confidence and takes life largely for granted despite the fact that their power to justify is only limited to a rather small range and fails when applied to the larger context, on which the small range depends.
Education
It has been argued that absurdism is opposed to various fundamental principles and assumptions guiding education, like the importance of truth and of fostering rationality in the students.
See also
Absurdist fiction
Credo quia absurdum
Discordianism
Existential nihilism
Existentialism
Irrationality
Is–ought problem
Kafkaesque
Fact–value distinction
Lottery of birth
Meaning of life
Nihilism
Non sequitur (literary device)
Pataphysics
Peter Wessel Zapffe
Philosophical pessimism
The Stranger (novel)
Theatre of the Absurd
Absurdistan
Church of the SubGenius
Terror management theory
Use–mention distinction
References
Further reading
OBERIU, edited by Eugene Ostashevsky. Northwestern University Press, 2005.
Thomas Nagel: Mortal Questions, 1991.
External links
Absurdist Monthly Review magazine
Existentialist concepts
Nihilism
Metaphysical theories
Philosophy of life | 0.771124 | 0.999439 | 0.770692 |
Cartesian doubt | Cartesian doubt is a form of methodological skepticism associated with the writings and methodology of René Descartes (March 31, 1596February 11, 1650). Cartesian doubt is also known as Cartesian skepticism, methodic doubt, methodological skepticism, universal doubt, systematic doubt, or hyperbolic doubt.
Cartesian doubt is a systematic process of being skeptical about (or doubting) the truth of one's beliefs, which has become a characteristic method in philosophy. Additionally, Descartes' method has been seen by many as the root of the modern scientific method. This method of doubt was largely popularized in Western philosophy by René Descartes, who sought to doubt the truth of all beliefs in order to determine which he could be certain were true. It is the basis for Descartes' statement, "Cogito ergo sum" (I think, therefore I am). A fuller version of the phrase, "dubito ergo cogito, cogito ergo sum", translates to "I doubt, therefore I think; I think, therefore I exist." Rendering sum as "I exist" (as various Latin-to-English dictionaries do) conveys the meaning of the phrase more clearly than "I am".
Methodological skepticism is distinguished from philosophical skepticism in that methodological skepticism is an approach that subjects all knowledge claims to scrutiny with the goal of sorting out true from false claims, whereas philosophical skepticism is an approach that questions the possibility of certain knowledge.
Characteristics
Cartesian doubt is methodological. It uses doubt as a route to certain knowledge by identifying what can't be doubted. The fallibility of sense data in particular is a subject of Cartesian doubt.
There are several interpretations as to the objective of Descartes' skepticism. Prominent among these is a foundationalist account, which claims that Descartes' skepticism aims to eliminate all beliefs that it is possible to doubt, thus leaving only basic beliefs (also known as foundational beliefs). From these indubitable basic beliefs, Descartes then attempts to derive further knowledge. It is an archetypal and significant example of the Continental rationalist school of philosophy.
Mario Bunge argues that methodological skepticism presupposes that scientific theories and methods satisfy certain philosophical requirements: idealism, materialism, realism, rationalism, empiricism, and systemism, the last being the view that the data and hypotheses of science constitute a system.
Technique
Descartes' method of hyperbolic doubt included:
Accepting only information you know is true
Breaking down these truths into smaller units
Solving the simple problems first
Making complete lists of further problems
Hyperbolic doubt denotes an extreme or exaggerated form of doubt. Knowledge in the Cartesian sense means to know something beyond not merely all reasonable doubt, but all possible doubt. In his Meditations on First Philosophy (1641), Descartes resolved to systematically doubt that any of his beliefs were true, in order to build, from the ground up, a belief system consisting of only certainly true beliefs; his end goal, or at least a major one, was to find an undoubtable basis for the sciences.
Descartes' method
René Descartes, the originator of Cartesian doubt, put all beliefs, ideas, thoughts, and matter in doubt. He showed that his grounds, or reasoning, for any knowledge could just as well be false. Sensory experience, the primary mode of knowledge, is often erroneous and therefore must be doubted. For instance, what one is seeing may very well be a hallucination. There is nothing that proves it cannot be. In short, if there is any way a belief can be disproved, then its grounds are insufficient. From this, Descartes proposed two arguments, the dream and the demon.
The dream argument
Descartes, knowing that the content of our dreams, while possibly unbelievable, is often lifelike, hypothesized that humans can only believe that they are awake. There are no sufficient grounds to distinguish a dream experience from a waking experience. For instance, Subject A sits at the computer, typing this article. Just as much evidence exists to indicate that the act of composing this article is reality as there is evidence to demonstrate the opposite. Descartes conceded that we live in a world that can create such ideas as dreams. However, by the end of the Meditations, he concludes that we can distinguish dream from reality at least in retrospect: "But when I distinctly see where things come from and where and when they come to me, and when I can connect my perceptions of them with the whole of the rest of my life without a break, then I am quite certain that when I encounter these things I am not asleep but awake" (Descartes: Selected Philosophical Writings).
The Evil Demon
Descartes reasoned that our very own experience may very well be controlled by an evil demon of sorts. This demon is as clever and deceitful as he is powerful. He could have created a superficial world that we may think we live in. As a result of this doubt, sometimes termed the Malicious Demon Hypothesis, Descartes found that he was unable to trust even the simplest of his perceptions.
In Meditation I, Descartes stated that if one were mad, even briefly, the insanity might have driven one to believe that what seemed true was merely the mind deceiving itself. He also stated that there could be 'some malicious, powerful, cunning demon' that had deceived us, preventing us from judging correctly.
Descartes argued that although the senses can easily deceive, and all of them might therefore be lying, his idea of an infinitely powerful being must be true, since that idea could only have been put in his mind by an infinitely powerful being, one which would have no reason for deceit.
I think, therefore I am
Although methodic doubt is skeptical in character, one need not hold that knowledge is impossible in order to apply the method of doubt. Indeed, Descartes' attempt to apply the method of doubt to his own existence spawned the proof of his famous saying, "Cogito, ergo sum" (I think, therefore I am). That is, Descartes tried to doubt his own existence, but found that even his doubting showed that he existed, since he could not doubt if he did not exist.
See also
Academic skepticism
Bracketing (phenomenology)
Clinical equipoise
Egocentric predicament
Incontrovertible evidence
Suspension of judgment
Solipsism
Theory of justification
References
Further reading
Cottingham, Stoothoff, and Murdoch, trans. (1984). The Philosophical Writings of Descartes. Cambridge: Cambridge University Press.
Janet Broughton, Descartes's Method of Doubt, Princeton University Press, 2002.
Edwin M. Curley, Descartes against the Skeptics, Harvard University Press, 1978.
François-Xavier de Peretti. "Stop Doubting with Descartes". Topoi, Springer Nature (2022). https://doi.org/10.1007/s11245-022-09822-0
François-Xavier de Peretti, "Descartes sceptique malgré lui ?", International Journal for the Study of Skepticism, 11 (3), 2021, Brill, Leiden, pp. 177–192. Online publication date: 15 October 2020. doi: https://doi.org/10.1163/22105700-bja10016
External links
Descartes, René. (1641). Meditations on First Philosophy. In Cottingham, et al. (eds.), 1984.
Dictionary of Philosophy of Mind
Skepticism
René Descartes
Philosophy of science
Philosophical methodology
Theory of mind
Epistemological theories
Epistemology of science
Doubt
Sensualism
In epistemology, sensualism is a doctrine whereby sensations and perception are the basic and most important form of true cognition. It may oppose abstract ideas.
This ideogenetic question was put forward long ago in Greek philosophy (Stoicism, Epicureanism) and further developed to the full by the British Sensualists (John Locke, David Hume) and the British Associationists (Thomas Brown, David Hartley, Joseph Priestley). In the 19th century it was very much taken up by the Positivists (Auguste Comte, Herbert Spencer, Hippolyte Taine, Émile Littré).
See also
Empiricism
Positivism
Solipsism
Spiritualism
Footnotes
Epistemological theories
Empiricism
Vaisheshika
Vaisheshika (IAST: Vaiśeṣika) is one of the six schools of Hindu philosophy from ancient India. In its early stages, the Vaiśeṣika was an independent philosophy with its own metaphysics, epistemology, logic, ethics, and soteriology. Over time, the Vaiśeṣika system became similar in its philosophical procedures, ethical conclusions and soteriology to the Nyāya school of Hinduism, but retained its difference in epistemology and metaphysics.
The epistemology of the Vaiśeṣika school of Hinduism, like Buddhism, accepted only two reliable means to knowledge: direct observation and inference. The Vaiśeṣika school and Buddhism both consider their respective scriptures as indisputable and valid means to knowledge, the difference being that the scriptures held to be a valid and reliable source by Vaiśeṣikas were the Vedas.
The Vaiśeṣika school is known for its insights in naturalism, a form of atomism in natural philosophy. It postulated that all objects in the physical universe are reducible to paramāṇu (atoms), and one's experiences are derived from the interplay of substance (a function of atoms, their number and their spatial arrangements), quality, activity, commonness, particularity and inherence. Everything was composed of atoms, qualities emerged from aggregates of atoms, but the aggregation and nature of these atoms was predetermined by cosmic forces. Ājīvika metaphysics included a theory of atoms which was later adapted in the Vaiśeṣika school.
According to the Vaiśeṣika school, knowledge and liberation were achievable by a complete understanding of the world of experience.
Vaiśeṣika darshana was founded by Kaṇāda Kashyapa around the 6th to 2nd century BC.
Overview
The name Vaiśeṣika comes from viśeṣa, the category that represents the individuality of innumerable existing objects.
Although the Vaiśeṣika system developed independently from the Nyāya philosophy of Hinduism, the two became similar and are often studied together. In its classical form, however, the Vaiśeṣika school differed from the Nyāya in one crucial respect: where Nyāya accepted four sources of valid knowledge, the Vaiśeṣika accepted only two.
The epistemology of the Vaiśeṣika school of Hinduism accepted only two reliable means to knowledge: perception and inference.
Vaisheshika espouses a form of atomism, holding that reality is composed of five substances (examples are earth, water, air, fire, and space). Each of these five is of two types, explains Ganeri: paramāṇu and composite. A paramāṇu is that which is indestructible, indivisible, and has a special kind of dimension, called "small" (aṇu). A composite is that which is divisible into paramāṇu. Whatever human beings perceive is composite, and even the smallest perceptible thing, namely, a fleck of dust, has parts, which are therefore invisible. The Vaiśeṣikas visualized the smallest composite thing as a "triad" (tryaṇuka) with three parts, each part with a "dyad" (dyaṇuka). Vaiśeṣikas believed that a dyad has two parts, each of which is an atom. Size, form, truths and everything that human beings experience as a whole is a function of paramāṇus, their number and their spatial arrangements.
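The compositional scheme just described, with two atoms to a dyad and three dyads to a triad, can be made concrete in a short illustrative sketch. The following Python snippet is not drawn from any historical source: the class names are the transliterations used above, but the modelling itself is a modern assumption made purely for illustration.

```python
# Illustrative model of the composition scheme described above: indivisible
# paramanu (atoms) pair into dyads (dyanuka), and three dyads form the
# smallest perceptible composite, a triad (tryanuka). The classes and the
# example data are illustrative only, not historical doctrine.

from dataclasses import dataclass


@dataclass(frozen=True)
class Paramanu:
    """An indestructible, indivisible atom with no perceptible size."""
    kind: str  # e.g. "earth", "water", "fire", "air"


@dataclass(frozen=True)
class Dyanuka:
    """A dyad: the combination of exactly two atoms."""
    parts: tuple  # (Paramanu, Paramanu)


@dataclass(frozen=True)
class Tryanuka:
    """A triad of three dyads: the smallest perceptible composite."""
    parts: tuple  # (Dyanuka, Dyanuka, Dyanuka)

    def atom_count(self) -> int:
        # Three dyads of two atoms each: 3 * 2 = 6 atoms in all.
        return sum(len(d.parts) for d in self.parts)


atoms = [Paramanu("earth") for _ in range(6)]
dyads = tuple(Dyanuka((atoms[i], atoms[i + 1])) for i in range(0, 6, 2))
fleck = Tryanuka(dyads)
print(fleck.atom_count())  # -> 6
```

On this reading, even the smallest perceptible thing, such as a fleck of dust, already aggregates six imperceptible atoms.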
Parama means "most distant, remotest, extreme, last" and aṇu means "atom, very small particle", hence paramāṇu is essentially "the most distant or last small (i.e. smallest) particle".
Vaiśeṣika postulated that what one experiences is derived from dravya (substance: a function of atoms, their number and their spatial arrangements), guna (quality), karma (activity), samanya (commonness), vishesha (particularity) and samavaya (inherence, inseparable connectedness of everything).
The followers of this philosophy are mostly Shaivas. Acharya Haribhadra Suri, in his work 'Ṣaḍdarśanasamuccaya' describes the followers of Vaiśeṣika as worshippers of Pashupati or Shiva.
Epistemology
Six pramāṇas (epistemically reliable means to accurate knowledge and to truths) are noted within different Indian philosophical schools: Pratyakṣa (perception), Anumāna (inference), Śabda or āgama (word, testimony of past or present reliable experts), Upamāna (comparison and analogy), Arthāpatti (postulation, derivation from circumstances), and Anupalabdhi (non-perception, negative/cognitive proof). Of these, Vaiśeṣika epistemology considered only pratyakṣa (perception) and anumāna (inference) as reliable means of valid knowledge. Yoga accepts the first three of these six as pramāṇa; and the Nyaya school, related to Vaiśeṣika, accepts the first four out of these six.
Pratyakṣa (प्रत्यक्ष) means perception. It is of two types: external and internal. External perception is described as that arising from the interaction of five senses and worldly objects, while internal perception is described by this school as that of inner sense, the mind. The ancient and medieval texts of Hinduism identify four requirements for correct perception: Indriyarthasannikarsa (direct experience by one's sensory organ(s) with the object, whatever is being studied), Avyapadesya (non-verbal; correct perception is not through hearsay, according to ancient Indian scholars, where one's sensory organ relies on accepting or rejecting someone else's perception), Avyabhicara (does not wander; correct perception does not change, nor is it the result of deception because one's sensory organ or means of observation is drifting, defective, suspect) and Vyavasayatmaka (definite; correct perception excludes judgments of doubt, either because of one's failure to observe all the details, or because one is mixing inference with observation and observing what one wants to observe, or not observing what one does not want to observe). Some ancient scholars proposed "unusual perception" as pramāṇa and called it internal perception, a proposal contested by other Indian scholars. The internal perception concepts included pratibha (intuition), samanyalaksanapratyaksa (a form of induction from perceived specifics to a universal), and jnanalaksanapratyaksa (a form of perception of prior processes and previous states of a 'topic of study' by observing its current state). Further, the texts considered and refined rules of accepting uncertain knowledge from Pratyakṣa-pramāṇa, so as to contrast nirnaya (definite judgment, conclusion) from anadhyavasaya (indefinite judgment).
Anumāna (अनुमान) means inference. It is described as reaching a new conclusion and truth from one or more observations and previous truths by applying reason. Observing smoke and inferring fire is an example of Anumana. In all Hindu philosophies except one, this is a valid and useful means to knowledge. The method of inference is explained by Indian texts as consisting of three parts: pratijna (hypothesis), hetu (a reason), and drshtanta (examples). The hypothesis must further be broken down into two parts, state the ancient Indian scholars: sadhya (that idea which needs to be proven or disproven) and paksha (the object on which the sadhya is predicated). The inference is conditionally true if sapaksha (positive examples as evidence) are present, and if vipaksha (negative examples as counter-evidence) are absent. For rigor, the Indian philosophies also state further epistemic steps. For example, they demand Vyapti, the requirement that the hetu (reason) must necessarily and separately account for the inference in "all" cases, in both sapaksha and vipaksha. A conditionally proven hypothesis is called a nigamana (conclusion).
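The conditions on a valid inference described above invite a simple mechanical reading. The following Python snippet is an interpretive sketch, not a rendering of any historical text: it treats the vyapti requirement as a check that a candidate hetu (reason) is present in every recorded sapaksha case and absent from every recorded vipaksha case, with the example data invented for illustration.

```python
# An interpretive sketch of the vyapti requirement described above: a hetu
# (reason) supports the sadhya (thesis) only if it is present in every
# positive example (sapaksha) and absent from every negative example
# (vipaksha). The data and function names are illustrative assumptions.

def vyapti_holds(hetu, sapaksha, vipaksha):
    """Return True if the reason covers all positive cases and no negative ones."""
    present_in_all_positives = all(hetu in case for case in sapaksha)
    absent_from_all_negatives = all(hetu not in case for case in vipaksha)
    return present_in_all_positives and absent_from_all_negatives


# The classic example: inferring fire from smoke.
sapaksha = [{"smoke", "fire"}, {"smoke", "fire", "kitchen"}]  # fire present
vipaksha = [{"mist", "lake"}, {"dust", "road"}]               # fire absent

print(vyapti_holds("smoke", sapaksha, vipaksha))  # True: conditionally valid
print(vyapti_holds("dust", sapaksha, vipaksha))   # False: missing from some
                                                  # sapaksha, present in vipaksha
```

On this toy reading, smoke conditionally supports the inference to fire, while dust fails: it does not accompany fire in all positive cases and it appears where fire is absent.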
Syllogism
The syllogism of the Vaiśeṣika school was similar to that of the Nyāya school of Hinduism, but the names given by Praśastapāda to the five members of the syllogism are different.
Literature
The earliest systematic exposition of the Vaisheshika is found in the Vaiśeṣika Sūtra of Kaṇāda (or Kaṇabhuj). This treatise is divided into ten books. The two commentaries on the Vaiśeṣika Sūtra, Rāvaṇabhāṣya and Bhāradvājavṛtti, are no more extant. Praśastapāda's Padārthadharmasaṃgraha (c. 4th century) is the next important work of this school. Though commonly known as a bhāṣya of the Vaiśeṣika Sūtra, this treatise is basically an independent work on the subject. The next Vaisheshika treatise, Candra's Daśapadārthaśāstra (648), based on Praśastapāda's treatise, is available only in Chinese translation. The earliest commentary available on Praśastapāda's treatise is Vyomaśiva's Vyomavatī (8th century). The other three commentaries are Śridhara's Nyāyakandalī (991), Udayana's Kiraṇāvalī (10th century) and Śrivatsa's Līlāvatī (11th century). Śivāditya's Saptapadārthī, which also belongs to the same period, presents the Nyāya and Vaiśeṣika principles as a part of one whole. Śaṃkara Miśra's Upaskāra on the Vaiśeṣika Sūtra is also an important work.
Metaphysics
The Categories or Padārtha
According to the Vaisheshika school, all things that exist, that can be cognized and named are padārthas (literal meaning: the meaning of a word), the objects of experience. All objects of experience can be classified into six categories: dravya (substance), guṇa (quality), karma (activity), sāmānya (generality), viśeṣa (particularity) and samavāya (inherence). Later Vaiśeṣikas (Śrīdhara, Udayana and Śivāditya) added one more category, abhava (non-existence). The first three categories are defined as artha (which can be perceived) and they have real objective existence. The last three categories are defined as budhyapekṣam (product of intellectual discrimination) and they are logical categories. A minimal illustrative sketch of this classification is given after the list below.
Dravya (substance): There are nine substances. They are pṛthvī (earth), ap (water), tejas (fire), vāyu (air), ākāśa (ether), kāla (time), dik (space), ātman (self or soul) and manas (mind). The first five are called bhūtas, the substances having some specific qualities so that they can be perceived by one or another of the external senses.
Guṇa (quality): The Vaiśeṣika Sūtra mentions 17 guṇas (qualities), to which Praśastapāda added another 7. While a substance is capable of existing independently by itself, a guṇa (quality) cannot exist so. The original 17 guṇas (qualities) are rūpa (colour), rasa (taste), gandha (smell), sparśa (touch), saṃkhyā (number), parimāṇa (size/dimension/quantity), pṛthaktva (individuality), saṃyoga (conjunction/accompaniments), vibhāga (disjunction), paratva (priority), aparatva (posteriority), buddhi (knowledge), sukha (pleasure), duḥkha (pain), icchā (desire), dveṣa (aversion) and prayatna (effort). To these Praśastapāda added gurutva (heaviness), dravatva (fluidity), sneha (viscosity), dharma (merit), adharma (demerit), śabda (sound) and saṃskāra (faculty).
Karma (activity): The karmas (activities), like guṇas (qualities), have no separate existence; they belong to the substances. But while a quality is a permanent feature of a substance, an activity is a transient one. Ākāśa (ether), kāla (time), dik (space) and ātman (self), though substances, are devoid of karma (activity).
Sāmānya (generality): Since there is a plurality of substances, there will be relations among them. When a property is found common to many substances, it is called sāmānya.
Viśeṣa (particularity): By means of viśeṣa, we are able to perceive substances as different from one another. As the ultimate atoms are innumerable, so are the viśeṣas.
Samavāya (inherence): Kaṇāda defined samavāya as the relation between the cause and the effect. Praśastapāda defined it as the relationship existing between the substances that are inseparable, standing to one another in the relation of the container and the contained. The relation of samavāya is not perceivable but only inferable from the inseparable connection of the substances.
Abhava (non-existence)
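As flagged above, the following Python snippet gives a minimal illustrative rendering of this classification. It is not part of any Vaiśeṣika text: the enum and the tags simply restate, in code, which substances are bhūtas and which are devoid of karma according to the list above.

```python
# Illustrative rendering of the six (later seven) padarthas and the nine
# dravyas listed above. The structure is a modern assumption for clarity.

from enum import Enum


class Padartha(Enum):
    DRAVYA = "substance"
    GUNA = "quality"
    KARMA = "activity"
    SAMANYA = "generality"
    VISHESHA = "particularity"
    SAMAVAYA = "inherence"
    ABHAVA = "non-existence (later addition)"


# The nine substances; the first five (the bhutas) are perceivable by
# one or another of the external senses.
DRAVYAS = ["prithvi (earth)", "ap (water)", "tejas (fire)", "vayu (air)",
           "akasha (ether)", "kala (time)", "dik (space)",
           "atman (self)", "manas (mind)"]
BHUTAS = set(DRAVYAS[:5])

# Per the text, ether, time, space and self are substances devoid of karma.
DEVOID_OF_KARMA = {"akasha (ether)", "kala (time)", "dik (space)", "atman (self)"}

print([p.value for p in Padartha])
for d in DRAVYAS:
    tags = [name for name, members in (("bhuta", BHUTAS),
                                       ("no karma", DEVOID_OF_KARMA))
            if d in members]
    print(f"{d}: {', '.join(tags) or '-'}")
```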
Atomism
According to the school, a paramanu (atom) is an indestructible particle of matter. The atom is indivisible because it is a state at which no measurement can be attributed. They used invariance arguments to determine properties of the atoms. It also stated that anu can have two states—absolute rest and a state of motion.
They postulated four different kinds of atoms: two with mass, and two without. Each substance is supposed to consist of all four kinds of atoms. Atoms can be combined into dyaṇukas (dyads) and tryaṇukas (triads) before they aggregate into bodies of a kind that can be perceived. Each paramāṇu (atom) possesses its own distinct viśeṣa (individuality).
The measure of the partless atoms is known as parimaṇḍala parimāṇa. It is eternal and it cannot generate the measure of any other substance. Its measure is its own absolutely.
See also
Darshanas
Hindu philosophy
Hinduism
Nyaya (philosophy)
Padārtha
Tarka-Sangraha
Kaṇāda
Vaiśeṣika Sūtra
Atomism
Nyaya § Sixteen categories (padārthas)
Categories (Aristotle)
Fakhr al-Din al-Razi
Notes
References
Further reading
Bimal Matilal (1977), A History of Indian Literature - Nyāya-Vaiśeṣika, Otto Harrassowitz Verlag.
Gopi Kaviraj (1961), Gleanings from the history and bibliography of the Nyaya-Vaisesika literature, Indian Studies: Past & Present, Volume 2, Number 4.
Kak, Subhash: Matter and Mind: The Vaiśeṣika Sūtra of Kaṇāda
External links
Vaisheshika-sutra with three commentaries English translation by Nandalal Sinha, 1923 (includes glossary)
A summary of Vaisheshika physics
Shastra Nethralaya - Vaisheshika
GRETIL e-text of the Vaiśeṣika Sūtras
Ancient Indian philosophy
Āstika
Atomism
Epistemology
Hindu philosophy
Logic
Metaphysics of religion
Sub specie aeternitatis
Sub specie aeternitatis (Latin for "under the aspect of eternity") is, from Baruch Spinoza onwards, an honorific expression denoting what is considered to be universally and eternally true, without any reference to or dependence upon temporal facets of reality. The Latin phrase can be rendered in English as "from the perspective of the eternal". More loosely, it is commonly used to refer to an objective (or theoretically possible alternative) point of view.
Spinoza's "eternal" perspective is reflected in his Ethics (Part V, Prop. XXIII, Scholium), where he applies Euclid's method (with the use of geometry) to philosophical inquiry, starting with God and nature, before moving to human emotions and the human intellect to reach an understanding of moral philosophy. By proceeding sub specie aeternitatis, Spinoza sought to arrive at an ethical theory that is as precise as Euclid's Elements. In the history of philosophy, this way of proceeding may be contrasted with Aristotle's. Aristotle's methodological differentiations in his "philosophy of human affairs" and his natural philosophy are grounded in the distinction between what is "better known to us" and things "better known in themselves", or what is "first for us" and what is "first by nature" (discussed, for example, at Metaphysics Z.3, 1029b3–12), a distinction that is deliberately discarded by Spinoza, and also by other modern philosophers.
Usage
Thomas Nagel, in The Absurd:
Yet humans have the special capacity to step back and survey themselves, and the lives to which they are committed... Without developing the illusion that they are able to escape from their highly specific and idiosyncratic position, they can view it sub specie aeternitatis—and the view is at once sobering and comical.
and later in that article:
If sub specie aeternitatis there is no reason to believe that anything matters, then that does not matter either, and we can approach our absurd lives with irony instead of heroism or despair.
Stephen Halliwell refers to Aristotle's development away from a view of life sub specie aeternitatis:
But if Aristotle has no room in his mature thinking for the decadence that refuses to take anything in life seriously, this needs to be distinguished from the fact that in his (probably early) Protrepticus he was able to adopt the platonising judgement that, sub specie aeternitatis, everything that supposedly matters in human life 'is a laughing-stock (gelōs) and worthless'.
Ludwig Wittgenstein, in Notebooks 1914-1916:
The work of art is the object seen sub specie aeternitatis; and the good life is the world seen sub specie aeternitatis. This is the connection between art and ethics.
Later, in Wittgenstein’s Tractatus Logico-Philosophicus:
6.45 To view the world sub specie aeterni is to view it as a whole—a limited whole.
Feeling the world as a limited whole—it is this that is mystical.
Viktor E. Frankl, in Man's Search for Meaning:
It is a peculiarity of man that he can only live by looking to the future—sub specie aeternitatis.
Dietrich Bonhoeffer wrote:
From all this it now follows that the content of ethical problems can never be discussed in a Christian light; the possibility of erecting generally valid principles simply does not exist, because each moment, lived in God’s sight, can bring an unexpected decision. Thus only one thing can be repeated again and again, also in our time: in ethical decisions a man must consider his action sub specie aeternitatis and then, no matter how it proceeds, it will proceed rightly.
In his novel The Ordeal of Gilbert Pinfold, Evelyn Waugh describes Pinfold:
He wished no one ill, but looked at the world sub specie aeternitatis and he found it flat as a map; except when, rather often, personal annoyance intruded.
John Rawls wrote, in the final paragraph of A Theory of Justice:
Thus to see our place in society from the perspective of this position is to see it sub specie aeternitatis: it is to regard the human situation not only from all social but also from all temporal points of view.
Bernard Williams, in Utilitarianism: For and Against:
Philosophers ... repeatedly urge us to view the world sub specie aeternitatis, but for most human purposes that is not a good species to view it under.
Peter L. Berger, in The Sacred Canopy:
Just as institutions may be relativized and thus humanized when viewed sub specie aeternitatis, so may the roles representing these institutions.
Luciano Floridi, in The Philosophy of Information:
First, sub specie aeternitatis, science is still in its puberty, when some hiccups are not necessarily evidence of any serious sickness.
Christopher Dawson, in The Christian View of History:
For the Christian view of history is a vision of history sub specie aeternitatis, an interpretation of time in terms of eternity and of human events in the light of divine revelation. And thus Christian history is inevitably apocalyptic, and the apocalypse is the Christian substitute for the secular philosophies of history.
Michael Oakeshott, in Historical Experience:
Pretending to organize and elucidate the real world of experience sub specie aeternitatis, history succeeds only in organizing it sub specie praeteritorum.
Carl Jung, in Memories, Dreams, Reflections:
What we are to our inward vision, and what man appears to be sub specie aeternitatis, can only be expressed by way of myth.
Philip K. Dick, in Galactic Pot-Healer and Do Androids Dream of Electric Sheep?:
The stewardess began setting up the SSA machine in a rapid, efficient fashion, meanwhile explaining it. "SSA stands for sub specie aeternitatis; that is, something seen outside of time. Now, many individuals imagine that an SSA machine can see into the future, that it is precognitive. This is not true. The mechanism, basically a computer, is attached via electrodes to both your brains and it swiftly stores up immense quantities of data about each of you. It then synthesizes these data and, on a probability basis, extrapolates as to what would most likely become of you both if you were, for example, joined in marriage, or perhaps living together."
Ludwig von Mises, in Human Action: A Treatise on Economics:
It is customary to blame the economists for an alleged disregard of history. The economists, it is contended, consider the market economy as the ideal and eternal pattern of social cooperation. They concentrate their studies upon investigating the conditions of the market economy and neglect everything else. They do not bother about the fact that capitalism emerged only in the last two hundred years and that even today it is restricted to a comparatively small area of the earth's surface and to a minority of peoples. There were and are, say these critics, other civilizations with a different mentality and different modes of conducting economic affairs. Capitalism is, when seen sub specie aeternitatis, a passing phenomenon, an ephemeral stage of historical evolution, just the transition from precapitalistic ages to a postcapitalistic future. All these criticisms are spurious....
Tomáš Garrigue Masaryk, in Talks with T.G. Masaryk by Karel Čapek:
Many a modern man is afraid of death; he is too fond of comfort. His life is not a great drama; he wants only food and enjoyment. The unbeliever lacks sufficient trust and devotion. Modern suicide and the fear of death are related to one another, as fear and escape are related. But that would be a problem in itself. When I think of immortality, I do not think of death and of what happens afterwards, but rather of life and its content. Immortality stems from the richness and value of human life, of the human soul. Man himself, every single man, has value as a spiritual being. And the immortality of the soul also follows from the recognition of God, from faith in the world order and in justice. There would be no justice, there would be no perfect equality, without eternal souls. We experience immortality now, in this life; we have no experience of life after death, but we do have the experience, here and now, that a truly and fully human life can be lived only sub specie aeternitatis. That experience ultimately depends on us, on how we live, on what fills us, and on what we make of our life here. Only the soul that lives its life fully and honestly lives as an eternal soul. The existence of the soul is the true foundation of democracy: the eternal cannot be indifferent to the eternal; the immortal is equal to the immortal. It is from this that charity receives its special, one might say metaphysical, sense.
Rebecca Goldstein in Plato at the Googleplex: Why Philosophy Won't Go Away:
We are immortal only to the extent that we allow our own selves to be rationalized by the sublime ontological rationality, ordering our own processes of thinking, desiring, and acting in accordance with the perfect proportions realized in the cosmos. We are then, while in this life, living sub specie aeternitatis, as Spinoza was to put it, expanding our finitude to encapture as much of infinity as we are able.
In the article on Spinoza from the Stanford Encyclopedia of Philosophy:
Sense experience alone could never provide the information conveyed by an adequate idea. The senses present things only as they appear from a given perspective at a given moment in time. An adequate idea, on the other hand, by showing how a thing follows necessarily from one or another of God's attributes, presents it in its "eternal" aspects—sub specie aeternitatis, as Spinoza puts it—without any relation to time. "It is of the nature of Reason to regard things as necessary and not as contingent. And Reason perceives this necessity of things truly, i.e., as it is in itself. But this necessity of things is the very necessity of God's eternal nature. Therefore, it is of the nature of Reason to regard things under this species of eternity" (IIp44). The third kind of knowledge, intuition, takes what is known by Reason and grasps it in a single act of the mind.
As a play on the expression, J. L. Austin puns on it in order to discuss the very fallibility of human knowledge:
'Being sure it's real' is no more proof against miracles or outrages of nature than anything else is or, sub specie humanitatis, can be. If we have made sure it's a goldfinch, and a real goldfinch, and then in the future it does something outrageous (explodes, quotes Mrs. Woolf, or what not), we don't say we were wrong to say it was a goldfinch, we don't know what to say. Words literally fail us: 'What would you have said?' 'What are we to say now?' 'What would you say?' When I have made sure it's a real goldfinch (not stuffed, corroborated by the disinterested, &c.) then I am not 'predicting' in saying it's a real goldfinch, and in a very good sense I can't be proved wrong whatever happens. It seems a serious mistake to suppose that language (or most language, language about real things) is 'predictive' in such a way that the future can always prove it wrong. What the future can always do, is to make us revise our ideas about goldfinches or real goldfinches or anything else.
Julian Huxley suggested an alternative: "in the light of evolution".
References
Latin philosophical phrases
Spinozism
Thaumaturgy
Thaumaturgy, derived from the Greek words thauma (wonder) and ergon (work), refers to the practical application of magic to effect change in the physical world. Historically, thaumaturgy has been associated with the manipulation of natural forces, the creation of wonders, and the performance of magical feats through esoteric knowledge and ritual practice. Unlike theurgy, which focuses on invoking divine powers, thaumaturgy is more concerned with utilizing occult principles to achieve specific outcomes, often in a tangible and observable manner. It is sometimes translated into English as wonderworking.
This concept has evolved from its ancient roots in magical traditions to its incorporation into modern Western esotericism. Thaumaturgy has been practiced by individuals seeking to exert influence over the material world through both subtle and overt magical means. It has played a significant role in the development of magical systems, particularly those that emphasize the practical aspects of esoteric work.
In modern times, thaumaturgy continues to be a subject of interest within the broader field of occultism, where it is studied and practiced as part of a larger system of magical knowledge. Its principles are often applied in conjunction with other forms of esoteric practice, such as alchemy and Hermeticism, to achieve a deeper understanding and mastery of the forces that govern the natural and supernatural worlds.
A practitioner of thaumaturgy is a "thaumaturge", "thaumaturgist", "thaumaturgus", "miracle worker", or "wonderworker".
Etymology
The word thaumaturgy derives from Greek thaûma, meaning "miracle" or "marvel" (final t from genitive thaûmatos) and érgon, meaning "work". In the 16th century, the word thaumaturgy entered the English language meaning miraculous or magical powers. The word was first anglicized and used in the magical sense in John Dee's book The Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570). He mentions an "art mathematical" called "thaumaturgy... which giveth certain order to make strange works, of the sense to be perceived and of men greatly to be wondered at".
Historical development
Ancient roots
The origins of thaumaturgy can be traced back to ancient civilizations where magical practices were integral to both religious rituals and daily life. In ancient Egypt, priests were often regarded as thaumaturges, wielding their knowledge of rituals and incantations to influence natural and supernatural forces. These practices were aimed at protecting the Pharaoh, ensuring a successful harvest, or even controlling the weather. Similarly, in ancient Greece, certain figures were believed to possess the ability to perform miraculous feats, often attributed to their deep understanding of the mysteries of the gods and nature. This blending of religious and magical practices laid the groundwork for what would later be recognized as thaumaturgy in Western esotericism.
In Greek writings, the term thaumaturge also referred to several Christian saints. In this context, the word is usually translated into English as 'wonderworker'. Notable early Christian thaumaturges include Gregory Thaumaturgus (c. 213–270), Saint Menas of Egypt (285–c. 309), Saint Nicholas (270–343), and Philomena (c. 300 (?)).
Medieval and Renaissance Europe
During the medieval period, thaumaturgy evolved within the context of Christian mysticism and early scientific thought. The medieval understanding of thaumaturgy was closely linked to the idea of miracles, with saints and holy men often credited with thaumaturgic powers. The seventeenth-century Irish Franciscan editor John Colgan called the three early Irish saints, Patrick, Brigid, and Columba, thaumaturges in his Acta Triadis Thaumaturgae (Louvain, 1647). Later notable medieval Christian thaumaturges include Anthony of Padua (1195–1231) and the bishop of Fiesole, Andrew Corsini of the Carmelites (1302–1373), who was called a thaumaturge during his lifetime. This period also saw the development of grimoires—manuals for magical practices—where rituals and spells were documented, often blending Christian and pagan traditions.
In the Renaissance, the concept of thaumaturgy expanded as scholars like John Dee explored the intersections between magic, science, and religion. Dee's Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570) is one of the earliest English texts to discuss thaumaturgy, describing it as the art of creating "strange works" through a combination of natural and mathematical principles. Dee's work reflects the Renaissance pursuit of knowledge that blurred the lines between the magical and the mechanical, as thaumaturges were often seen as early scientists who harnessed the hidden powers of nature.
In Dee's time, "the Mathematicks" referred not merely to the abstract computations associated with the term today, but to physical mechanical devices which employed mathematical principles in their design. These devices, operated by means of compressed air, springs, strings, pulleys or levers, were seen by unsophisticated people (who did not understand their working principles) as magical devices which could only have been made with the aid of demons and devils.
By building such mechanical devices, Dee earned a reputation as a conjurer "dreaded" by neighborhood children. He complained of this assessment in his Mathematicall Praeface:
Notable Renaissance and Age of Enlightenment Christian thaumaturges of the period include Gerard Majella (1726–1755), Ambrose of Optina (1812–1891), and John of Kronstadt (1829–1908).
Incorporation into modern esotericism
The transition into modern esotericism saw thaumaturgy taking on a more structured role within various magical systems, particularly those developed in the 18th and 19th centuries. In Hermeticism and the Western occult tradition, thaumaturgy was often practiced alongside alchemy and theurgy, with a focus on manipulating the material world through ritual and symbolic action. The Hermetic Order of the Golden Dawn, a prominent magical order founded in the late 19th century, incorporated thaumaturgy into its curriculum, emphasizing the importance of both theory and practice in the mastery of magical arts.
Thaumaturgy's role in modern esotericism also intersects with the rise of ceremonial magic, where it is often employed to achieve specific, practical outcomes—ranging from healing to the invocation of spirits. Contemporary magicians continue to explore and adapt thaumaturgic practices, often drawing from a wide range of historical and cultural sources to create eclectic and personalized systems of magic.
Core principles and practices
Principles of sympathy and contagion
Thaumaturgy is often governed by two key magical principles: the Principle of Sympathy and the Principle of Contagion. These principles are foundational in understanding how thaumaturges influence the physical world through magical means. The Principle of Sympathy operates on the idea that "like affects like", meaning that objects or symbols that resemble each other can influence each other. For example, a miniature representation of a desired outcome, such as a model of a bridge, could be used in a ritual to ensure the successful construction of an actual bridge. The Principle of Contagion, on the other hand, is based on the belief that objects that were once in contact continue to influence each other even after they are separated. This principle is often employed in the use of personal items, such as hair or clothing, in rituals to affect the person to whom those items belong.
These principles are not unique to thaumaturgy but are integral to many forms of magic across cultures. However, in the context of thaumaturgy, they are particularly important because they provide a theoretical framework for understanding how magical actions can produce tangible results in the material world. This focus on practical outcomes distinguishes thaumaturgy from other forms of magic that may be more concerned with spiritual or symbolic meanings.
Tools and rituals
Thaumaturgical practices often involve the use of specific tools and rituals designed to channel and direct magical energy. Common tools include wands, staffs, talismans, and ritual knives, each of which serves a particular purpose in the practice of magic. For instance, a wand might be used to direct energy during a ritual, while a talisman could serve as a focal point for the thaumaturge's intent. The creation and consecration of these tools are themselves ritualized processes, often requiring specific materials and astrological timing to ensure their effectiveness.
Rituals in thaumaturgy are typically elaborate and may involve the recitation of incantations, the drawing of protective circles, and the invocation of spirits or deities. These rituals are designed to create a controlled environment in which the thaumaturge can manipulate natural forces according to their will. The complexity of these rituals varies depending on the desired outcome, with more significant or ambitious goals requiring more intricate and time-consuming procedures.
Energy manipulation
At the heart of thaumaturgy is the metaphor of energy manipulation. Thaumaturges believe that the world is filled with various forms of energy that can be harnessed and directed through magical practices. This energy is often conceptualized as a natural force that permeates the universe, and through the use of specific techniques, thaumaturges believe that they can influence this energy to bring about desired changes in the physical world.
Energy manipulation in thaumaturgy involves both drawing energy from the surrounding environment and directing it toward a specific goal. This process often requires a deep understanding of the natural world, as well as the ability to focus and control one's own mental and spiritual energies. In many traditions, this energy is also linked to the practitioner's life force, meaning that the act of performing thaumaturgy can be physically and spiritually taxing. As a result, practitioners often undergo rigorous training and preparation to build their capacity to manipulate energy effectively and safely.
In esoteric traditions
Hermetic Qabalah
In Hermetic Qabalah, thaumaturgy occupies a significant role as it involves the practical application of mystical principles to influence the physical world. This tradition is deeply rooted in the concept of correspondences, where different elements of the cosmos are seen as interconnected. In the Hermetic tradition, a thaumaturge seeks to manipulate these correspondences to bring about desired changes. The sephiroth on the Tree of Life serve as a map for these interactions, with specific rituals and symbols corresponding to different sephiroth and their associated powers. For example, a ritual focusing on Yesod (the sephirah of the Moon) might involve elements such as silver, the color white, and the invocation of lunar deities to influence matters of intuition, dreams, or the subconscious mind.
The manipulation of these correspondences through ritual is not just symbolic but is believed to produce real effects in the material world. Practitioners use complex rituals that might include the use of sacred geometry, invocations, and the creation of talismans. These practices are believed to align the practitioner with the forces they wish to control, creating a sympathetic connection that enables them to direct these forces effectively. Aleister Crowley's Magick (Book 4) provides an extensive discussion on the use of ritual tools such as the wand, cup, and sword, each of which corresponds to different elements and powers within the Qabalistic system, emphasizing the practical aspect of these tools in thaumaturgic practices.
Alchemy and thaumaturgy
Alchemy and thaumaturgy are often intertwined, particularly in the context of spiritual transformation and the pursuit of enlightenment. Alchemy, with its focus on the transmutation of base metals into gold and the quest for the philosopher's stone, can be seen as a form of thaumaturgy where the practitioner seeks to transform not just physical substances but also the self. This process, known as the Great Work, involves the purification and refinement of both matter and spirit. Thaumaturgy comes into play as the practical aspect of alchemy, where rituals, symbols, and substances are used to facilitate these transformations.
The alchemical process is heavily laden with symbolic meanings, with each stage representing a different phase of transformation. The stages of nigredo (blackening), albedo (whitening), citrinitas (yellowing), and rubedo (reddening) correspond not only to physical changes in the material being worked on but also to stages of spiritual purification and enlightenment. Thaumaturgy, in this context, is the application of these principles to achieve tangible results, whether in the form of creating alchemical elixirs, talismans, or achieving spiritual goals. Crowley also elaborates on these alchemical principles in Magick (Book 4), particularly in his discussions on the symbolic and practical uses of alchemical symbols and processes within magical rituals.
Other esoteric systems
Thaumaturgy also plays a role in various other esoteric systems, where it is often viewed as a means of bridging the gap between the mundane and the divine. In Theosophy, for example, thaumaturgy is seen as part of the esoteric knowledge that allows practitioners to manipulate spiritual and material forces. Theosophical teachings emphasize the unity of all life and the interconnection of the cosmos, with thaumaturgy being a practical tool for engaging with these truths. Rituals and meditative practices are used to align the practitioner's will with higher spiritual forces, enabling them to effect change in the physical world.
In Rosicrucianism, thaumaturgy is similarly regarded as a method of spiritual practice that leads to the mastery of natural and spiritual laws. Rosicrucians believe that through the study of nature and the application of esoteric principles, one can achieve a deep understanding of the cosmos and develop the ability to influence it. This includes the use of rituals, symbols, and sacred texts to bring about spiritual growth and material success.
In the introduction of his translation of the "Spiritual Powers (神通 Jinzū)" chapter of Dōgen's Shōbōgenzō, Carl Bielefeldt refers to the powers developed by adepts of Esoteric Buddhism as belonging to the "thaumaturgical tradition". These powers, known as siddhi or abhijñā, were ascribed to the Buddha and subsequent disciples. Legendary monks like Bodhidharma, Upagupta, Padmasambhava, and others were depicted in popular legends and hagiographical accounts as wielding various supernatural powers.
Misconceptions and modern interpretations
Distinction from theurgy
A common misconception about thaumaturgy is its conflation with theurgy. While both involve the practice of magic, they serve distinct purposes and operate on different principles. Theurgy is primarily concerned with invoking divine or spiritual beings to achieve union with the divine, often for purposes of spiritual ascent or enlightenment. Thaumaturgy, on the other hand, focuses on the manipulation of natural forces to produce tangible effects in the physical world. This distinction is crucial in understanding the differing objectives of these practices: theurgy is inherently religious and mystical, while thaumaturgy is more pragmatic and results-oriented.
Aleister Crowley, in his Magick (Book 4), emphasizes the importance of understanding these differences, noting that while theurgic practices seek to align the practitioner with divine will, thaumaturgy allows the practitioner to exert their will over the material world through the application of esoteric knowledge and ritual.
Modern misunderstandings
In modern times, thaumaturgy is often misunderstood, particularly in popular culture where it is sometimes depicted as synonymous with fantasy magic or "miracle-working" in a religious sense. These portrayals can dilute the rich historical and esoteric significance of thaumaturgy, reducing it to a mere trope of magical fiction. For instance, the term is frequently used in fantasy literature and role-playing games to describe a generic form of magic, without consideration for its historical roots or the complex practices associated with it in esoteric traditions.
This modern misunderstanding is partly due to the broadening of the term "thaumaturgy" in contemporary discourse, where it is often detached from its original context and used more loosely. As a result, the nuanced distinctions between different types of magic, such as thaumaturgy and theurgy, are often overlooked, leading to a homogenized view of magical practices.
In popular culture
The term thaumaturgy is used in various games as a synonym for magic, a particular sub-school (often mechanical) of magic, or as the "science" of magic.
Thaumaturgy is defined as the "science" or "physics" of magic by Isaac Bonewits in his 1971 book Real Magic, a definition he also used in creating an RPG reference called Authentic Thaumaturgy (1978, 1998, 2005).
See also
Sigil; for example, the sigils of the Behenian fixed stars
References
Works cited
External links
Alchemy
Ceremonial magic
Hermetic Qabalah
Hermeticism
Magic (supernatural)
Magical terminology
Rosicrucianism
Thelema
Theosophy
Vajrayana
Zen
Normativity
A prescriptive or normative statement is one that evaluates certain kinds of words, decisions, or actions as either correct or incorrect, or one that sets out guidelines for what a person "should" do.
Normativity is the phenomenon in human societies of designating some actions or outcomes as good, desirable, or permissible, and others as bad, undesirable, or impermissible. A norm in this sense means a standard for evaluating or making judgments about behavior or outcomes. "Normative" is sometimes also used, somewhat confusingly, to mean relating to a descriptive standard: doing what is normally done or what most others are expected to do in practice. In this sense a norm is not evaluative, a basis for judging behavior or outcomes; it is simply a fact or observation about behavior or outcomes, without judgment. Many researchers in science, law, and philosophy try to restrict the use of the term "normative" to the evaluative sense and refer to the description of behavior and outcomes as positive, descriptive, predictive, or empirical.
Normative has specialized meanings in different academic disciplines such as philosophy, social sciences, and law. In most contexts, normative means 'relating to an evaluation or value judgment.' Normative propositions tend to evaluate some object or some course of action. Normative content differs from descriptive content.
Though philosophers disagree about how normativity should be understood, it has become increasingly common to understand normative claims as claims about reasons, as explained by Derek Parfit.
Philosophy
In philosophy, normative theory aims to make moral judgments on events, focusing on preserving what is deemed morally good and preventing change for the worse. The theory has its origins in Greece. Normative statements of this type make claims about how institutions should or ought to be designed, how to value them, which things are good or bad, and which actions are right or wrong. Such claims are usually contrasted with positive (i.e. descriptive, explanatory, or constative) claims when describing types of theories, beliefs, or propositions. Positive statements are (purportedly) factual, empirical statements that attempt to describe reality.
For example, "children should eat vegetables", and "those who would sacrifice liberty for security deserve neither" are philosophically normative claims. On the other hand, "vegetables contain a relatively high proportion of vitamins", and "a common consequence of sacrificing liberty for security is a loss of both" are positive claims. Whether a statement is philosophically normative is logically independent of whether it is verified, verifiable, or popularly held.
There are several schools of thought regarding the status of philosophically normative statements and whether they can be rationally discussed or defended. Among these schools are the tradition of practical reason extending from Aristotle through Kant to Habermas, which asserts that they can, and the tradition of emotivism, which maintains that they are merely expressions of emotions and have no cognitive content.
There is large debate in philosophy surrounding whether one can get a normative statement of such a type from an empirical one (i.e. whether one can get an 'ought' from an 'is', or a 'value' from a 'fact'). Aristotle is one scholar who believed that one could in fact get an ought from an is. He believed that the universe was teleological and that everything in it has a purpose. To explain why something is a certain way, Aristotle believed one could simply say that it is trying to be what it ought to be. On the contrary, David Hume believed one cannot get an ought from an is because no matter how much one thinks something ought to be a certain way it will not change the way it is. Despite this, Hume used empirical experimental methods whilst looking at the philosophically normative. Similar to this was Kames, who also used the study of facts and the objective to discover a correct system of morals. The assumption that 'is' can lead to 'ought' is an important component of the philosophy of Roy Bhaskar.
Philosophically normative statements and norms, as well as their meanings, are an integral part of human life. They are fundamental for prioritizing goals and organizing and planning. Thought, belief, emotion, and action are the basis of much ethical and political discourse; indeed, normativity of such a type is arguably the key feature distinguishing ethical and political discourse from other discourses (such as natural science).
Much modern moral/ethical philosophy takes as its starting point the apparent variance between peoples and cultures regarding the ways they define what is considered to be appropriate/desirable/praiseworthy/valuable/good etc. (In other words, variance in how individuals, groups and societies define what is in accordance with their philosophically normative standards.) This has led philosophers such as A.J. Ayer and J.L. Mackie (for different reasons and in different ways) to cast doubt on the meaningfulness of normative statements of such a type. However, other philosophers, such as Christine Korsgaard, have argued for a source of philosophically normative value which is independent of individuals' subjective morality and which consequently attains (a lesser or greater degree of) objectivity.
Social sciences
In the social sciences, the term "normative" has broadly the same meaning as its usage in philosophy, but may also relate, in a sociological context, to the role of cultural 'norms'; the shared values or institutions that structural functionalists regard as constitutive of the social structure and social cohesion. These values and units of socialization thus act to encourage or enforce social activity and outcomes that ought to (with respect to the norms implicit in those structures) occur, while discouraging or preventing social activity that ought not occur. That is, they promote social activity that is socially valued (see philosophy above). While there are always anomalies in social activity (typically described as "crime" or anti-social behaviour, see also normality (behavior)) the normative effects of popularly endorsed beliefs (such as "family values" or "common sense") push most social activity towards a generally homogeneous set. From such reasoning, however, functionalism shares an affinity with ideological conservatism.
Normative economics deals with questions of what sort of economic policies should be pursued, in order to achieve desired (that is, valued) economic outcomes.
Politics
The use of normativity and normative theory in the study of politics has been questioned, particularly since the rise in popularity of logical positivism. It has been suggested by some that normative theory is not appropriate for the study of politics because of its value-based nature, and that a positive, value-neutral approach should be taken instead, applying theory to what is, not to what ought to be. Others have argued, however, that to abandon the use of normative theory in politics is misguided, if not pointless: not only is normative theory more than a projection of a theorist's views and values, but it also provides important contributions to political debate. Pietrzyk-Reeves discussed the idea that political science can never truly be value-free, and so declining to use normative theory is not especially helpful. Furthermore, the normative dimension of political study is perhaps what separates it from many branches of the social sciences.
International relations
In the academic discipline of International Relations, Smith, Baylis & Owens, in the introduction to their 2008 book, make the case that the normative position, or normative theory, aims to make the world a better place, and that this theoretical worldview does so by being aware of the implicit and explicit assumptions that constitute a non-normative position, and by aligning or positioning the normative towards the loci of other key socio-political theories such as political liberalism, Marxism, political constructivism, political realism, political idealism and political globalization.
Law
In law, as an academic discipline, the term "normative" is used to describe the way something ought to be done according to a value position. As such, normative arguments can be conflicting, insofar as different values can be inconsistent with one another. For example, from one normative value position the purpose of the criminal process may be to repress crime. From another value position, the purpose of the criminal justice system could be to protect individuals from the moral harm of wrongful conviction.
Standards documents
The CEN-CENELEC Internal Regulations describe "normative" as applying to a document or element "that provides rules, guidelines or characteristics for activities or their results" which are mandatory.
Normative elements are defined in International Organization for Standardization Directives Part 2 as "elements that describe the scope of the document, and which set out provisions". Provisions include "requirements", which are criteria that must be fulfilled and cannot be deviated from, and "recommendations" and "statements", which are not necessary to comply with.
See also
Conformity
Decision theory
Economics
Hypothesis
Is-ought problem
Linguistic prescription
Norm (philosophy)
Normative economics
Normative ethics
Normative science
Philosophy of law
Political science
Scientific method
Value
References
Further reading
Canguilhem, Georges, The Normal and the Pathological.
Andreas Dorschel, 'Is there any normative claim internal to stating facts?', in: Communication & Cognition XXI (1988), no. 1, pp. 5–16.
Concepts in ethics
Social sciences
Philosophy of law
Integrity
Integrity is the quality of being honest and showing a consistent and uncompromising adherence to strong moral and ethical principles and values.
In ethics, integrity is regarded as the honesty and truthfulness or accuracy of one's actions. Integrity can stand in opposition to hypocrisy. It regards internal consistency as a virtue, and suggests that people who hold apparently conflicting values should account for the discrepancy or alter those values.
The word integrity evolved from the Latin adjective integer, meaning whole or complete. In this context, integrity is the inner sense of "wholeness" deriving from qualities such as honesty and consistency of character.
In ethics
In ethics, a person is said to possess the virtue of integrity if the person's actions are based upon an internally consistent framework of principles. These principles should uniformly adhere to sound logical axioms or postulates. A person has ethical integrity to the extent that the person's actions, beliefs, methods, measures, and principles align with a well-integrated core group of values. A person must, therefore, be flexible and willing to adjust these values to maintain consistency when these values are challenged—such as when observed results are incongruous with expected outcomes. Because such flexibility is a form of accountability, it is regarded as a moral responsibility as well as a virtue.
A person's value system provides a framework within which the person acts in ways that are consistent and expected. Integrity can be seen as the state of having such a framework and acting congruently within it.
One essential aspect of a consistent framework is its avoidance of any unwarranted (arbitrary) exceptions for a particular person or group—especially the person or group that holds the framework. In law, this principle of universal application requires that even those in positions of official power can be subjected to the same laws as pertain to their fellow citizens. In personal ethics, this principle requires that one should not act according to any rule that one would not wish to see universally followed. For example, one should not steal unless one would want to live in a world in which everyone was a thief. The philosopher Immanuel Kant formally described the principle of universality of application for one's motives in his categorical imperative.
The concept of integrity implies a wholeness—a comprehensive corpus of beliefs often referred to as a worldview. This concept of wholeness emphasizes honesty and authenticity, requiring that one act at all times in accordance with one's worldview.
Ethical integrity is not synonymous with the good, as Zuckert and Zuckert show in their discussion of Ted Bundy.
In politics
Politicians are given power to make, execute, or control policy, which can have important consequences. They typically promise to exercise this power in a way that serves society, and failing to do so conflicts with the notion of integrity. Aristotle said that because rulers have power they will be tempted to use it for personal gain.
In the book The Servant of the People, Muel Kaptein says integrity should start with politicians knowing what their position entails, because the consistency required by integrity applies also to the consequences of one's position. Integrity also demands knowledge and compliance with both the letter and the spirit of the written and unwritten rules. Integrity is also acting consistently not only with what is generally accepted as moral, what others think, but primarily with what is ethical, what politicians should do based on reasonable arguments.
Important virtues of politicians are faithfulness, humility, and accountability. Furthermore, they should be authentic and serve as role models. Aristotle identified dignity (megalopsychia, variously translated as proper pride, greatness of soul, and magnanimity) as the crown of the virtues, distinguishing it from vanity, temperance, and humility.
In psychological/work-selection tests
"Integrity tests" or (more confrontationally) "honesty tests" aim to identify prospective employees who may hide perceived negative or derogatory aspects of their past, such as a criminal conviction or drug abuse. Identifying unsuitable candidates can save the employer from problems that might otherwise arise during their term of employment. Integrity tests make certain assumptions, specifically:
that persons who have "low integrity" report more dishonest behaviour
that persons who have "low integrity" try to find reasons to justify such behaviour
that persons who have "low integrity" think others more likely to commit crimes—like theft, for example. (Since people seldom sincerely declare to prospective employers their past deviance, the "integrity" testers adopted an indirect approach: letting the work-candidates talk about what they think of the deviance of other people, considered in general, as a written answer demanded by the questions of the "integrity test".)
that persons who have "low integrity" exhibit impulsive behaviour
that persons who have "low integrity" tend to think that society should severely punish deviant behaviour (specifically, "integrity tests" assume that people who have a history of deviance report within such tests that they support harsher measures applied to the deviance exhibited by other people.)
The claim that such tests can detect "fake" answers plays a crucial role in detecting people who have low integrity. Naive respondents really believe this pretense and behave accordingly, reporting some of their past deviance and their thoughts about the deviance of others, fearing that if they do not answer truthfully their untrue answers will reveal their "low integrity". These respondents believe that the more candid they are in their answers, the higher their "integrity score" will be.
In other disciplines
Disciplines and fields with an interest in integrity include philosophy of action, philosophy of medicine, mathematics, the mind, cognition, consciousness, materials science, structural engineering, and politics. Popular psychology identifies personal integrity, professional integrity, artistic integrity, and intellectual integrity.
For example, to behave with scientific integrity, a scientific investigation should not determine the outcome in advance of the actual results. As an example of a breach of this principle, Public Health England, a UK government agency, stated that it upheld a line of government policy in advance of the outcome of a study that it had commissioned.
The concept of integrity may also feature in business contexts that go beyond the issues of employee/employer honesty and ethical behavior, notably in marketing or branding contexts. Brand "integrity" gives a company's brand a consistent, unambiguous position in the mind of its audience. This is established, for example, via consistent messaging and a set of graphics standards to maintain visual integrity in marketing communications. Kaptein and Wempe developed a theory of corporate integrity that includes criteria for businesses dealing with moral dilemmas.
Another use of the term "integrity" appears in Michael Jensen's and Werner Erhard's paper, "Integrity: A Positive Model that Incorporates the Normative Phenomenon of Morality, Ethics, and Legality". The authors model integrity as the state of being whole and complete, unbroken, unimpaired, sound, and in perfect condition. They posit a model of integrity that provides access to increased performance for individuals, groups, organizations, and societies. Their model "reveals the causal link between integrity and increased performance, quality of life, and value-creation for all entities, and provides access to that causal link."
According to Muel Kaptein, integrity is not a one-dimensional concept. In his book, he presents a multifaceted perspective on integrity. Integrity relates, for example, to compliance with rules as well as with social expectations, to morality as well as to ethics, and to actions as well as to attitude.
Electronic signals are said to have integrity when there is no corruption of information between one domain and another, such as from a disk drive to a computer display. Such integrity is a fundamental principle of information assurance. Corrupted information is untrustworthy; uncorrupted information is of value.
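As a minimal sketch of how such integrity can be checked in practice (illustrative Python; the data and names are hypothetical, and real systems often use error-correcting codes or message authentication codes instead), a cryptographic digest computed before transmission can be compared against one recomputed afterward:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 hex digest that fingerprints the data."""
    return hashlib.sha256(data).hexdigest()

# Sender side: fingerprint the data before it crosses a domain boundary.
payload = b"temperature=21.7;unit=C"
digest_before = checksum(payload)

# Receiver side: recompute and compare; any corruption changes the digest.
received = payload  # imagine this arrived via a disk, bus, or network
if checksum(received) == digest_before:
    print("integrity verified: data is uncorrupted")
else:
    print("integrity violated: data was altered in transit")
```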
See also
Notes
External links
Concepts in ethics
Evaluation
Virtue
Science studies
Science studies is an interdisciplinary research area that seeks to situate scientific expertise in broad social, historical, and philosophical contexts. It uses various methods to analyze the production, representation and reception of scientific knowledge and its epistemic and semiotic role.
Similarly to cultural studies, science studies are defined by the subject of their research and encompass a large range of different theoretical and methodological perspectives and practices. The interdisciplinary approach may include and borrow methods from the humanities, natural and formal sciences, from scientometrics to ethnomethodology or cognitive science.
Science studies have a certain importance for evaluation and science policy. Overlapping with the field of science, technology and society, practitioners study the relationship between science and technology, and the interaction of expert and lay knowledge in the public realm.
Scope
The field started with a tendency toward navel-gazing: it was extremely self-conscious in its genesis and applications. From early concerns with scientific discourse, practitioners soon started to deal with the relation of scientific expertise to politics and lay people. Practical examples include bioethics, bovine spongiform encephalopathy (BSE), pollution, global warming, biomedical sciences, physical sciences, natural hazard predictions, the (alleged) impact of the Chernobyl disaster in the UK, generation and review of science policy and risk governance and its historical and geographic contexts. While staying a discipline with multiple metanarratives, the fundamental concern is about the role of the perceived expert in providing governments and local authorities with information from which they can make decisions.
The approach poses various important questions: what makes an expert, how experts and their authority are to be distinguished from the lay population, and how expertise interacts with the values and policy-making processes of liberal democratic societies.
Practitioners examine the forces within and through which scientists investigate specific phenomena such as
technological milieus, epistemic instruments and cultures and laboratory life (compare Karin Knorr-Cetina, Bruno Latour, Hans-Jörg Rheinberger)
science and technology (e.g. Wiebe Bijker, Trevor Pinch, Thomas P. Hughes)
science, technology and society (e.g. Peter Weingart, Ulrike Felt, Helga Nowotny and Reiner Grundmann)
language and rhetoric of science (e.g. Charles Bazerman, Alan G. Gross, Greg Myers)
aesthetics of science and visual culture in science (e.g. Peter Geimer), the role of aesthetic criteria in scientific practice (compare mathematical beauty) and the relation between emotion, cognition and rationality in the development of science.
semiotic studies of creative processes, as in the discovery, conceptualization, and realization of new ideas, or the interaction and management of different forms of knowledge in cooperative research.
large-scale research and research institutions, e.g. particle colliders (Sharon Traweek)
research ethics, science policy, and the role of the university.
History of the field
In 1935, in a celebrated paper, the Polish sociologist couple Maria Ossowska and Stanisław Ossowski proposed the founding of a "science of science" to study the scientific enterprise, its practitioners, and the factors influencing their work. Earlier, in 1923, the Polish sociologist Florian Znaniecki had made a similar proposal.
Fifty years before Znaniecki, in 1873, Aleksander Głowacki, better known in Poland by his pen name "Bolesław Prus", had delivered a public lecture – later published as a booklet – On Discoveries and Inventions.
It is striking that, while early 20th-century sociologist proponents of a discipline to study science and its practitioners wrote in general theoretical terms, Prus had already half a century earlier described, with many specific examples, the scope and methods of such a discipline.
Thomas Kuhn's Structure of Scientific Revolutions (1962) increased interest both in the history of science and in science's philosophical underpinnings. Kuhn posited that the history of science was less a linear succession of discoveries than a succession of paradigms within the philosophy of science. Paradigms are broader, socio-intellectual constructs that determine which types of truth claims are permissible.
Science studies seeks to identify key dichotomies – such as those between science and technology, nature and culture, theory and experiment, and science and fine art – leading to the differentiation of scientific fields and practices.
The sociology of scientific knowledge arose at the University of Edinburgh, where David Bloor and his colleagues developed what has been termed "the strong programme". It proposed that both "true" and "false" scientific theories should be treated the same way. Both are informed by social factors such as cultural context and self-interest.
Human knowledge, abiding as it does within human cognition, is ineluctably influenced by social factors.
It proved difficult, however, to address natural-science topics with sociological methods, as was abundantly evidenced by the US science wars. Use of a deconstructive approach (as applied to works on the arts or religion) to the natural sciences risked endangering not only the "hard facts" of the natural sciences, but the objectivity and positivist tradition of sociology itself. The view of scientific knowledge production as an (at least partially) social construct was not easily accepted. Latour and others identified a dichotomy crucial for modernity: the division between nature (things, objects), treated as transcendent and therefore discoverable, and society (the subject, the state), treated as immanent because it is artificial and constructed. The dichotomy allowed for the mass production of things (technical-natural hybrids) and for large-scale global issues that ultimately endangered the distinction itself. Latour's We Have Never Been Modern, for example, calls for reconnecting the social and natural worlds, returning to the pre-modern use of "thing" and addressing objects as hybrids made and scrutinized through the public interaction of people, things, and concepts.
Science studies scholars such as Trevor Pinch and Steve Woolgar began as early as the 1980s to bring "technology" into the field's scope, calling their field "science, technology and society". This "turn to technology" brought science studies into communication with academics in science, technology, and society programs.
More recently, a novel approach known as mapping controversies has been gaining momentum among science studies practitioners, and has been introduced as a course for students in engineering and architecture schools. In 2002 Harry Collins and Robert Evans called for a third wave of science studies (a pun on The Third Wave): studies of expertise and experience, responding to recent tendencies to dissolve the boundary between experts and the public.
Application to natural and man-made hazards
Sheepfarming after Chernobyl
A showcase of the rather complex problems of scientific information and its interaction with lay persons is Brian Wynne's study of sheep farming in Cumbria after the Chernobyl disaster. He elaborated on the responses of sheep farmers in Cumbria, who had been subjected to administrative restrictions because of radioactive contamination attributed to the nuclear accident at Chernobyl in 1986. The sheep farmers suffered economic losses, and their resistance to the imposed regulation was deemed irrational and inadequate. It turned out that the source of the radioactivity was actually the Sellafield nuclear reprocessing complex; thus, the experts who were responsible for the duration of the restrictions were completely mistaken. The example led to attempts to better involve local knowledge and lay experience and to assess their often highly geographically and historically defined background.
Science studies on volcanology
Donovan et al. (2012) used social studies of volcanology to investigate the generation of knowledge and expert advice on various active volcanoes. The study draws on a survey of volcanologists carried out during 2008 and 2009 and on interviews with scientists in the UK, Montserrat, Italy and Iceland during fieldwork seasons. Donovan et al. (2012) asked the experts about the perceived purpose of volcanology and what they considered the most important eruptions in historical time. The survey seeks to identify eruptions that influenced volcanology as a science and to assess the role of scientists in policymaking.
A main focus was the impact of the Montserrat eruption of 1997. The eruption, a classic example of black swan theory, directly killed only 19 people. However, it had major impacts on the local society and destroyed important infrastructure, such as the island's airport. About 7,000 people, or two-thirds of the population, left Montserrat; 4,000 went to the United Kingdom.
The Montserrat case put immense pressure on volcanologists, as their expertise suddenly became the primary driver of various public policy approaches. The science studies approach provided valuable insights in that situation: there were various miscommunications among scientists, and reconciling scientific uncertainty (typical of volcanic unrest) with the demand for a single unified voice of political advice was a challenge. The Montserrat volcanologists began to use statistical elicitation models to estimate the probabilities of particular events, a rather subjective method, but one that allows consensus and experience-based expertise to be synthesized step by step, incorporating local knowledge and experience as well.
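The sources describe structured elicitation only in general terms; as an illustrative sketch (not the specific model used at Montserrat, and with hypothetical numbers), a simple weighted linear opinion pool combines individual probability judgments into a single estimate:

```python
def linear_opinion_pool(estimates, weights):
    """Combine expert probability estimates as a weighted average.

    estimates: each expert's probability for the same event
    weights:   nonnegative weights, e.g. from calibration scoring
    """
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Hypothetical case: three experts judge P(dome collapse within 6 months).
expert_probs = [0.10, 0.25, 0.15]
calibration_weights = [2.0, 1.0, 1.5]  # better-calibrated experts count more

pooled = linear_opinion_pool(expert_probs, calibration_weights)
print(f"pooled probability: {pooled:.3f}")  # -> 0.150
```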
Volcanology as a science currently faces a shift in its epistemological foundations. The discipline has begun to involve more research into risk assessment and risk management, which requires new, integrated methodologies for knowledge collection that transcend scientific disciplinary boundaries and combine qualitative and quantitative outcomes in a structured whole.
Experts and democracy
Science has become a major force in Western democratic societies, which depend on innovation and technology (compare Risk society) to address their risks. Beliefs about science can be very different from those of the scientists themselves, for reasons of, for example, moral values, epistemology or political motivations. The designation of expertise as authoritative in interactions with lay people and decision makers of all kinds is nevertheless challenged in contemporary risk societies, as suggested by scholars who follow Ulrich Beck's theorisation. The role of expertise in contemporary democracies is an important theme for debate among science studies scholars. Some argue for a more widely distributed, pluralist understanding of expertise (Sheila Jasanoff and Brian Wynne, for example), while others argue for a more nuanced understanding of the idea of expertise and its social functions (Collins and Evans, for example).
See also
Logology (study of science)
Merton thesis
Public awareness of science
Science and technology studies
Science and technology studies in India
Social construction of technology
Sociology of scientific knowledge
Sokal affair
References
Bibliography
Science studies, general
Bauchspies, W., Jennifer Croissant and Sal Restivo: Science, Technology, and Society: A Sociological Perspective (Oxford: Blackwell, 2005).
Biagioli, Mario, ed. The Science Studies Reader (New York: Routledge, 1999).
Bloor, David; Barnes, Barry & Henry, John, Scientific Knowledge: A Sociological Analysis (Chicago: University of Chicago Press, 1996).
Gross, Alan. Starring the Text: The Place of Rhetoric in Science Studies. Carbondale: SIU Press, 2006.
Fuller, Steve, The Philosophy of Science and Technology Studies (New York: Routledge, 2006).
Hess, David J. Science Studies: An Advanced Introduction (New York: NYU Press, 1997).
Jasanoff, Sheila, ed. Handbook of science and technology studies (Thousand Oaks, Calif.: SAGE Publications, 1995).
Latour, Bruno, "The Last Critique," Harper's Magazine (April 2004): 15–20.
Latour, Bruno. Science in Action (Cambridge: Harvard University Press, 1987).
Latour, Bruno, "Do You Believe in Reality: News from the Trenches of the Science Wars," in Pandora's Hope (Cambridge: Harvard University Press, 1999)
Vinck, Dominique. The Sociology of Scientific Work. The Fundamental Relationship between Science and Society (Cheltenham: Edward Elgar, 2010).
Wyer, Mary; Donna Cookmeyer; Mary Barbercheck, eds. Women, Science and Technology: A Reader in Feminist Science Studies, Routledge 200
Haraway, Donna J. "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," in Simians, Cyborgs, and Women: the Reinvention of Nature (New York: Routledge, 1991), 183–201. Originally published in Feminist Studies, Vol. 14, No. 3 (Autumn, 1988), pp. 575–599. (available online)
Foucault, Michel, "Truth and Power," in Power/Knowledge (New York: Pantheon Books, 1997), 109–133.
Porter, Theodore M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton: Princeton University Press, 1995).
Restivo, Sal: "Science, Society, and Values: Toward a Sociology of Objectivity" (Lehigh PA: Lehigh University Press, 1994).
Media, culture, society and technology
Hancock, Jeff. Deception and design: the impact of communication technology on lying behavior
Lessig, Lawrence. Free Culture. Penguin USA, 2004.
MacKenzie, Donald. The Social Shaping of Technology Open University Press: 2nd ed. 1999.
Mitchell, William J. Rethinking Media Change, Thorburn and Jennings, eds. (Cambridge, Massachusetts: MIT Press, 2003).
Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Penguin USA, 1985.
Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge: Mass., Perseus Publishing. 2002.
External links
Sociology of Science, an introductory article by Joseph Ben-David & Teresa A. Sullivan, Annual Review of Sociology, 1975
The Incommensurability of Scientific and Poetic Knowledge
University of Washington Science Studies Network
Historiography of science
Philosophy of science
Pedagogy
Science and technology studies
Immanence
The doctrine or theory of immanence holds that the divine encompasses or is manifested in the material world. It is held by some philosophical and metaphysical theories of divine presence. Immanence is usually applied in monotheistic, pantheistic, pandeistic, or panentheistic faiths to suggest that the spiritual world permeates the mundane. It is often contrasted with theories of transcendence, in which the divine is seen to be outside the material world.
Major faiths commonly devote significant philosophical efforts to explaining the relationship between immanence and transcendence but do so in different ways, such as:
casting immanence as a characteristic of a transcendent God (common in Abrahamic religions),
subsuming immanent personal gods in a greater transcendent being (such as with Brahman in Hinduism), or
approaching the question of transcendence as something which can only be answered through an appraisal of immanence.
Western Esotericism
Another meaning of immanence is the quality of being contained within, or remaining within the boundaries of a person, of the world, or of the mind. This meaning is more common within Christian and other monotheist theology, in which the one God is considered to transcend his creation. Pythagoreanism says that the nous is an intelligent principle of the world acting with a specific intention. This is the divine reason regarded in Neoplatonism as the first emanation of the divine. From the nous emerges the world soul, which gives rise to the manifest realm. Neoplatonic gnosticism goes on to say the Godhead is the Father, Mother, and Son (Zeus). In the mind of Zeus, the ideas are distinctly articulated and become the Logos by which he creates the world. These ideas become active in the Mind (nous) of Zeus. With him is the Power and from him is the nous. This theology further explains that Zeus is called Demiurge (Dêmiourgos, Creator), Maker (Poiêtês), and Craftsman (Technitês). The nous of the demiurge proceeds outward into manifestation, becoming living ideas. They give rise to a lineage of mortal human souls. The components of the soul are 1) the higher soul, seat of the intuitive mind (divine nous); 2) the rational soul (logistikon) (seat of discursive reason / dianoia); 3) the nonrational soul (alogia), responsible for the senses, appetites, and motion. Zeus thinks the articulated ideas (logos). The idea of ideas (eidos eidôn) provides a model of the Paradigm of the Universe, which the Demiurge contemplates in his articulation of the ideas and his creation of the world according to the Logos.
Buddhism
Tantric Buddhism and Dzogchen posit a non-dual basis for both experience and reality that could be considered an exposition of a philosophy of immanence, one with a history on the Indian subcontinent from the early centuries CE to the present. A paradoxical non-dual awareness, or rigpa (Tibetan; vidya in Sanskrit), is said to be the 'self-perfected state' of all beings. Scholarly works differentiate these traditions from monism. The non-dual is said to be not immanent and not transcendent, not neither, nor both. One classical exposition is the Madhyamaka refutation of extremes that the philosopher-adept Nagarjuna propounded.
Exponents of this non-dual tradition emphasize the importance of a direct experience of non-duality through both meditative practice and philosophical investigation. In one version, one maintains awareness as thoughts arise and dissolve within the 'field' of mind; one does not accept or reject them, rather one lets the mind wander as it will until a subtle sense of immanence dawns. Vipassana, or insight, is the integration of one's 'presence of awareness' with that which arises in the mind. Non-duality or rigpa is said to be the recognition that both the quiet, calm, abiding state as found in samatha and the movement or arising of phenomena as found in vipassana are not separate.
Christianity
Catholicism, Protestantism, and Eastern Christianity
According to Christian theology, the transcendent God, who cannot be approached or seen in essence or being, becomes immanent primarily in the God-man Jesus the Christ, who is the incarnate Second Person of the Trinity. In Byzantine Rite theology the immanence of God is expressed as the hypostases or energies of God, who in his essence is incomprehensible and transcendent. In Catholic theology, Christ and the Holy Spirit immanently reveal themselves; God the Father only reveals himself immanently vicariously through the Son and Spirit, and the divine nature, the Godhead is wholly transcendent and unable to be comprehended.
This is expressed in St. Paul's letter to the Philippians.
The Holy Spirit is also expressed as an immanence of God.
The immanence of the triune God is celebrated in the Catholic Church, traditional Protestant Churches, and Eastern Churches during the liturgical feast of the Theophany of God, known in Western Christianity as the Epiphany.
Pope Pius X wrote at length about philosophical-theological controversies over immanence in his encyclical Pascendi dominici gregis.
Mormonism
According to Latter Day Saint theology, all of material creation is filled with immanence, known as the light of Christ. It is also responsible for the intuitive conscience born into man. The Light of Christ is the source of intellectual and spiritual enlightenment, and is the means by which God is in and through all things. LDS scriptures identify the divine Light with the mind of God, the source of all truth and conveyor of the characteristics of the divine nature through God's goodness. The experienced brilliance of God reflects the “fullness” of this spirit within God's being. Similarly, mankind can incorporate this spiritual light or divine mind and thus become one with God. This immanent spirit of light bridges the scientific and spiritual conceptualizations of the universe.
Judaism
Traditional Jewish religious thought can be divided into Nigleh ("Revealed") and Nistar ("Hidden") dimensions. Hebrew Scripture is, in the Kabbalistic tradition, explained using the four level exegesis method of Pardes. In this system, the first three approaches, Simple, Hinted and Homiletical interpretations, characterise the revealed aspects. The fourth approach, the Secret meaning, characterises a hidden aspect. Among the classic texts of Jewish tradition, some Jewish Bible commentators, the Midrash, the Talmud, and mainstream Jewish philosophy use revealed approaches. Other Bible commentators, the Kabbalah, and Hasidic philosophy, use hidden approaches. Both dimensions are seen by adherents as united and complementary. In this way, ideas in Jewish thought are given a variety of ascending meanings. Explanations of a concept in Nigleh are given inherent, inner, mystical contexts from Nistar.
Descriptions of divine immanence can be seen in Nigleh, from the Bible to Rabbinic Judaism. In Genesis, God makes a personal covenant with the forefathers Abraham, Isaac and Jacob. Daily Jewish prayers refer to this inherited closeness and personal relationship with the divine, for their descendants, as "the God of Abraham, Isaac and Jacob". To Moses, God reveals his Tetragrammaton name, that more fully captures divine descriptions of transcendence. Each of the Biblical names for God describe different divine manifestations. The most important prayer in Judaism, that forms part of the Scriptural narrative to Moses, says "Hear O Israel, the Lord is our God, the Lord is One." This declaration combines different divine names, and themes of immanence and transcendence. Perhaps the most personal example of a Jewish prayer that combines both themes is the invocation repeatedly voiced during the time in the Jewish calendar devoted to Teshuva (Return, often inaccurately translated as Repentance), Avinu Malkeinu ("Our Father, Our King"). Much of the later Hebrew Biblical narrative recounts the reciprocal relationship and national drama of the unfolding of themes of immanence and transcendence. Kabbalistic, or Hasidic Jewish thought and philosophy describe and articulate these interconnected aspects of the divine-human relationship.
Jewish mysticism gives explanations of greater depth and spirituality to the interconnected aspects of God's immanence and transcendence. The main expression of mysticism, the Kabbalah, began to be taught in 12th-Century Europe, and reached a new systemisation in 16th-Century Israel. The Kabbalah gives the full, subtle, traditional system of Jewish metaphysics. In the Medieval Kabbalah, new doctrines described the 10 Sephirot (divine emanations) through which the Infinite, unknowable divine essence reveals, emanates, and continuously creates existence. The Kabbalists identified the final, feminine Sefirah with the earlier, traditional Jewish concept of the Shekhinah (immanent divine presence). This gave great spirituality to earlier ideas in Jewish thought, such as the theological explanations of suffering (theodicy). In this example, the Kabbalists described the Shekhinah accompanying the children of Israel in their exile, being exiled alongside them, and yearning for Her redemption. Such a concept derives from the Kabbalistic theology that the physical World, and also the Upper spiritual Worlds, are continuously recreated from nothing by the Shefa (flow) of divine will, which emanates through the Sefirot. As a result, within all creations are divine sparks of vitality that sustain them. Medieval Kabbalah describes two forms of divine emanation, a "light that fills all worlds", representing this immanent divine creative power, and a "light that surrounds all worlds", representing transcendent expressions of Divinity.
The new doctrines of Isaac Luria in the 16th Century completed the Kabbalistic system of explanation. Lurianic Kabbalah describes the process of Tzimtzum (צמצום meaning "Contraction" or "Constriction") in the Kabbalistic theory of creation, where God "contracted" his infinite essence in order to allow for a "conceptual space" in which a finite, independent world could exist. This has received different later interpretations in Jewish mysticism, from the literal to the metaphorical. In this process, creation unfolds within the divine reality. Luria offered a daring cosmic theology that explained the reasons for the Tzimtzum, the primordial catastrophe of Shevirat Hakelim (the "Breaking of the Vessels" of the Sefirot in the first existence), and the messianic Tikkun ("Fixing") of this by every individual through their sanctification of physicality. The concept of Tzimtzum contains a built-in paradox, as it requires that God be simultaneously transcendent and immanent:
On the one hand, if the Infinite did not "restrict itself", then nothing could exist. There would be no limits, as the infinite essence of God, and also His primordial infinite light (Kabbalistic sources discuss God being able to reign alone, a revealed 'light' of the Sefirah of Kingship, "before" creation) would comprise all reality. Any existence would be nullified into the divine infinity. Therefore, we could not have the variety of limited, finite things that comprise the creations in the universe that we inhabit. (The number of such creations could still be potentially limitless, if the physical universe, or Multiverse had no end). Because each limited thing results from a restriction of God's completeness, God Himself must transcend (exist beyond) these various limited things. This idea can be interpreted in various ways. In its ultimate articulation, by the Hasidic leader Shneur Zalman of Liadi, in the intellectual Hasidic method of Chabad, the Tzimtzum is only metaphorical, an illusion from the perspective of man. Creation is panentheistic (taking place fully "within God"), and acosmic (Illusionary) from the divine perspective. God himself, and even his light, is unrestricted by Tzimtzum, from God's perspective. The Tzimtzum is merely the hiding of this unchanged reality from creation. Shneur Zalman distinguishes between the "Upper Level Unity" of God's existence from the divine perspective, with the "Lower Level Unity" of God's existence as creation perceives him. Because God can be above logic, both perspectives of this paradox are true, from their alternative views. The dimension of the Tzimtzum, which implies divine transcendence, corresponds to the Upper Level Unity. In this perspective, because God is the true, ultimate infinity, then creation (even if its physical and spiritual realms should extend without limit) is completely nullified into literal non-existence by the divine. There is no change in the complete unity of God as all Reality, before or after creation. This is the ultimate level of divine transcendence.
On the other hand, in Lurianic Kabbalah, the Tzimtzum has an immanent divine dimension. The Tzimtzum formed a "space" (in Lurianic terminology, the Halal, "Vacuum") in which to allow creation to take place. The first act of creation was the emanation of a new light (Kav, "Ray") into the vacated space, from the ultimate divine reality "outside", or unaffected, by the space. The purpose of the Tzimtzum was that the vacated space allowed this new light to be suited to the needs and capacities of the new creations, without their being subsumed in the primordial divine infinity. Kabbalistic theology offers metaphysical explanations of how divine and spiritual processes unfold. In earlier, mainstream Jewish philosophy, logical descriptions of creation ex nihilo (from nothing) describe the new existence of creation, compared to the preceding absence. Kabbalah, however, seeks to explain how the spiritual, metaphysical processes unfold. Therefore, in the Kabbalistic system, God is the ultimate reality, so that creation only exists because it is continuously sustained by the will of God. Creation is formed from the emanated "light" of the divine Will, as it unfolds through the later Sefirot. The light that originated with the Kav later underwent further contractions that diminished it, so that this immanent expression of Divinity could itself create the various levels of Spiritual, and ultimately, Physical existence. The terms of "light" and temporal descriptions of time are metaphorical, in a language accessible to grasp. In this immanent divine dimension, God continuously maintains the existence of, and is thus not absent from, the created universe. In Shneur Zalman's explanation, this corresponds to the conscious perception by Creation of "Lower Level Unity" of God. In this perspective, Creation is real, and not an illusion, but is utterly nullified to the immanent divine life force that continuously sustains and recreates it. It may not perceive its complete dependence on Divinity, as in our present World, that feels its own existence as independent reality. However, this derives from the great concealments of Godliness in our present World. "The Divine life-force which brings all creatures into existence must constantly be present within them ... were this life-force to forsake any created being for even one brief moment, it would revert to a state of utter nothingness, as before the creation ...". (Tanya, Shaar Hayichud, Chapter 2–3. Shneur Zalman of Liadi).
Continental philosophy
Giordano Bruno, Baruch Spinoza and possibly Hegel espoused philosophies of immanence versus philosophies of transcendence such as Thomism or the Aristotelian tradition. Kant's "transcendental" critique can be contrasted to Hegel's "immanent dialectics" (on which see J. T. Fraser, F. C. Haber, G. H. Müller (eds.), The Study of Time: Proceedings of the First Conference of the International Society for the Study of Time, Oberwolfach (Black Forest), West Germany, Springer Science & Business Media, 2012, p. 437).
Thomas Carlyle's idea of "Natural Supernaturalism" posited the immanence of the divine in nature, history and man. Clement Charles Julian Webb explained that "Carlyle had done more than any other nineteenth-century writer to undermine belief in the transcendence of God and the origin of the material world in an act of creation in time, and to put in its place an 'essentially immanentist' theology, drawn largely from the writings of the German Idealists." Carlyle's "Natural Supernaturalism" was highly influential on American Transcendentalism and British Idealism.
Giovanni Gentile's actual idealism, sometimes called "philosophy of immanence" and the metaphysics of the "I", "affirms the organic synthesis of dialectical opposites that are immanent within actual or present awareness". His so-called method of immanence "attempted to avoid: (1) the postulate of an independently existing world or a Kantian Ding-an-sich (thing-in-itself), and (2) the tendency of neo-Hegelian philosophy to lose the particular self in an Absolute that amounts to a kind of mystical reality without distinctions."
Political theorist Carl Schmitt used the term in his book Politische Theologie (1922), meaning a power within some thought, which makes it obvious for the people to accept it, without needing to claim being justified. The immanence of some political system or a part of it comes from the reigning contemporary definer of Weltanschauung, namely religion (or any similar system of beliefs, such as rationalistic or relativistic world-view). Many hold Schmitt to be interested in an immanent polity without anything transcendent involved in its vital operations beyond the very border that separates it from the enemy outside. As such he might have ironically secularized politics in a way that liberalism never could have. But this is a contentious issue.
The French 20th-century philosopher Gilles Deleuze used the term immanence to refer to his "empiricist philosophy", which was obliged to create action and results rather than establish transcendents. His final text was titled Pure Immanence: Essays on a Life and spoke of a plane of immanence.
Furthermore, the Russian Formalist film theorists perceived immanence as a specific method of discussing the limits of ability for a technological object. Specifically, this is the scope of potential uses of an object outside of the limits prescribed by culture or convention, and is instead simply the empirical spectrum of function for a technological artifact.
See also
Buddha-nature
Hasidic Judaism
Iman (concept)
Immanent evaluation
Immanentize the eschaton
Metaphysical naturalism
Plane of immanence
Substance
Theophany
Transcendence (philosophy)
References
External links
Catholic encyclopedia: Immanence
"Immanence and Deterritorialization: The Philosophy of Gilles Deleuze and Félix Guattari"
"the culture of Immanence", Ricardo Barreto and Paula Perissinotto
Pantheism
Religious philosophical concepts
Metaphysical properties
Mysticism
Attributes of God in Christian theology
Nature of Jesus Christ
Divinity
Meaning (philosophy)
In philosophy – more specifically, in its sub-fields semantics, semiotics, philosophy of language, metaphysics, and metasemantics – meaning "is a relationship between two sorts of things: signs and the kinds of things they intend, express, or signify".
The types of meaning vary according to the type of thing that is being represented. There are:
the things, which might have meaning;
things that are also signs of other things, and therefore are always meaningful (i.e., natural signs of the physical world and ideas within the mind);
things that are necessarily meaningful, such as words and nonverbal symbols.
The major contemporary positions on meaning come under the following partial definitions:
psychological theories, involving notions of thought, intention, or understanding;
logical theories, involving notions such as intension, cognitive content, or sense, along with extension, reference, or denotation;
message, content, information, or communication;
truth conditions;
usage, and the instructions for usage;
measurement, computation, or operation.
Truth and meaning
The question of what is a proper basis for deciding how words, symbols, ideas and beliefs may properly be considered to truthfully denote meaning, whether by a single person or by an entire society, has been considered by five major types of theory of meaning and truth. Each type is discussed below, together with its principal exponents.
Substantive theories of meaning
Correspondence theory
Correspondence theories emphasise that true beliefs and true statements of meaning correspond to the actual state of affairs and that associated meanings must be in agreement with these beliefs and statements. This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other. It is a traditional model tracing its origins to ancient Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined in principle entirely by how it relates to "things", by whether it accurately describes those "things". An example of correspondence theory is the statement by the thirteenth-century philosopher/theologian Thomas Aquinas: Veritas est adaequatio rei et intellectus ("Truth is the equation [or adequation] of things and intellect"), a statement which Aquinas attributed to the ninth-century neoplatonist Isaac Israeli. Aquinas also restated the theory as: "A judgment is said to be true when it conforms to the external reality".
Correspondence theory centres heavily around the assumption that truth and meaning are a matter of accurately copying what is known as "objective reality" and then representing it in thoughts, words and other symbols. Many modern theorists have stated that this ideal cannot be achieved without analysing additional factors. For example, language plays a role in that all languages have words to represent concepts that are virtually undefined in other languages. The German word Zeitgeist is one such example: one who speaks or understands the language may "know" what it means, but any translation of the word apparently fails to accurately capture its full meaning (this is a problem with many abstract words, especially those derived in agglutinative languages). Thus, some words add an additional parameter to the construction of an accurate truth predicate. Among the philosophers who grappled with this problem is Alfred Tarski, whose semantic theory is summarized further below in this article.
Coherence theory
For coherence theories in general, the assessment of meaning and truth requires a proper fit of elements within a whole system. Very often, though, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. So, for example, the completeness and comprehensiveness of the underlying set of concepts is a critical factor in judging the validity and usefulness of a coherent system. A pervasive tenet of coherence theories is the idea that truth is primarily a property of whole systems of propositions, and can be ascribed to individual propositions only according to their coherence with the whole. Among the assortment of perspectives commonly regarded as coherence theory, theorists differ on the question of whether coherence entails many possible true systems of thought or only a single absolute system.
Some variants of coherence theory are claimed to describe the essential and intrinsic properties of formal systems in logic and mathematics. However, formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side – for example, the various alternative geometries. On the whole, coherence theories have been rejected for lacking justification in their application to other areas of truth – especially with respect to assertions about the natural world, empirical data in general, and assertions about practical matters of psychology and society – particularly when used without support from the other major theories of truth.
Coherence theories distinguish the thought of rationalist philosophers, particularly of Spinoza, Leibniz, and G.W.F. Hegel, along with the British philosopher F.H. Bradley. Other alternatives may be found among several proponents of logical positivism, notably Otto Neurath and Carl Hempel.
Constructivist theory
Social constructivism holds that meaning and truth are constructed by social processes, are historically and culturally specific, and are in part shaped through power struggles within a community. Constructivism views all of our knowledge as "constructed", because it does not reflect any external "transcendent" realities (as a pure correspondence theory might hold). Rather, perceptions of truth are viewed as contingent on convention, human perception, and social experience. It is believed by constructivists that representations of physical and biological reality, including race, sexuality, and gender, are socially constructed.
Giambattista Vico was among the first to claim that history and culture, along with their meaning, are human products. Vico's epistemological orientation gathers the most diverse rays and unfolds in one axiom – verum ipsum factum, "truth itself is constructed". Hegel and Marx were among the other early proponents of the premise that truth is, or can be, socially constructed. Marx, like many critical theorists who followed, did not reject the existence of objective truth but rather distinguished between true knowledge and knowledge that has been distorted through power or ideology. For Marx, scientific and true knowledge is "in accordance with the dialectical understanding of history" and ideological knowledge is "an epiphenomenal expression of the relation of material forces in a given economic arrangement".
Consensus theory
Consensus theory holds that meaning and truth are whatever is agreed upon – or, in some versions, might come to be agreed upon – by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.
Among the current advocates of consensus theory as a useful accounting of the concept of "truth" is the philosopher Jürgen Habermas. Habermas maintains that truth is what would be agreed upon in an ideal speech situation. Among the recent strong critics of consensus theory has been the philosopher Nicholas Rescher.
Pragmatic theory
The three most influential forms of the pragmatic theory of truth and meaning were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that meaning and truth are verified and confirmed by the results of putting one's concepts into practice.
Peirce defines truth as follows: "Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth." This statement stresses Peirce's view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and "reference to the future", are essential to a proper conception of meaning and truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.
William James's version of pragmatic theory, while complex, is often summarized by his statement that "the 'true' is only the expedient in our way of thinking, just as the 'right' is only the expedient in our way of behaving". By this, James meant that truth is a quality, the value of which is confirmed by its effectiveness when applying concepts to practice (thus, "pragmatic").
John Dewey, less broadly than James but more broadly than Peirce, held that inquiry, whether scientific, technical, sociological, philosophical or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine and/or refute proposed meanings and truths.
A later variation of the pragmatic theory was William Ernest Hocking's "negative pragmatism": what works may or may not be true, but what fails cannot be true, because the truth and its meaning always work. James's and Dewey's ideas also ascribe meaning and truth to repeated testing, which is "self-corrective" over time.
Pragmatism and negative pragmatism are also closely aligned with the coherence theory of truth in that any testing should not be isolated but rather incorporate knowledge from all human endeavors and experience. The universe is a whole and integrated system, and testing should acknowledge and account for its diversity. As physicist Richard Feynman said: "if it disagrees with experiment, it is wrong".
Associated theories and commentaries
Some have asserted that meaning is nothing substantially more or less than the truth conditions they involve. For such theories, an emphasis is placed upon reference to actual things in the world to account for meaning, with the caveat that reference more or less explains the greater part (or all) of meaning itself.
Logic and language
The logical positivists argued that the meaning of a statement arose from how it is verified.
Gottlob Frege
In his paper "Über Sinn und Bedeutung" (now usually translated as "On Sense and Reference"), Gottlob Frege argued that proper names present at least two problems in explaining meaning.
Suppose the meaning of a name is the thing it refers to. Sam, then, means a person in the world who is named Sam. But if the object referred to by the name did not exist – as with Pegasus – then, according to that theory, the name would be meaningless.
Suppose two different names refer to the same object. Hesperus and Phosphorus were the names given to what were considered distinct celestial bodies. It was later shown that they were the same thing (the planet Venus). If the words meant the same thing, then substituting one for the other in a sentence would not result in a sentence that differs in meaning from the original. But in that case, "Hesperus is Phosphorus" would mean the same thing as "Hesperus is Hesperus". This is clearly absurd, since we learn something new and unobvious by the former statement, but not by the latter.
Frege can be interpreted as arguing that it was therefore a mistake to think that the meaning of a name is the thing it refers to. Instead, the meaning must be something else—the "sense" of the word. Two names for the same person, then, can have different senses (or meanings): one referent might be picked out by more than one sense. This sort of theory is called a mediated reference theory. Frege argued that, ultimately, the same bifurcation of meaning must apply to most or all linguistic categories, such as to quantificational expressions like "All boats float".
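As a toy illustration of this mediated reference theory (a sketch in Python, not Frege's own formalism), each name can be modeled as carrying both a sense and a referent, so that two names share a referent while differing in sense:

```python
VENUS = "the planet Venus"

# Senses: distinct "modes of presentation" that pick out the same object.
senses = {
    "Hesperus":   "the brightest body in the evening sky",
    "Phosphorus": "the brightest body in the morning sky",
}
referents = {
    "Hesperus":   VENUS,
    "Phosphorus": VENUS,
}

# Substituting co-referring names preserves truth value...
print(referents["Hesperus"] == referents["Hesperus"])    # True, and trivial
print(referents["Hesperus"] == referents["Phosphorus"])  # True, yet informative

# ...but the two identity statements differ in cognitive significance,
# because the senses attached to the two names differ:
print(senses["Hesperus"] == senses["Phosphorus"])  # False
```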
Bertrand Russell
Logical analysis was further advanced by Bertrand Russell and Alfred North Whitehead in their groundbreaking Principia Mathematica, which attempted to produce a formal language with which the truth of all mathematical statements could be demonstrated from first principles.
Russell differed from Frege greatly on many points, however. He rejected Frege's sense-reference distinction. He also disagreed that language was of fundamental significance to philosophy, and saw the project of developing formal logic as a way of eliminating all of the confusions caused by ordinary language, and hence at creating a perfectly transparent medium in which to conduct traditional philosophical argument. He hoped, ultimately, to extend the proofs of the Principia to all possible true statements, a scheme he called logical atomism. For a while it appeared that his pupil Wittgenstein had succeeded in this plan with his Tractatus Logico-Philosophicus.
Russell's work, and that of his colleague G. E. Moore, developed in response to what they perceived as the nonsense dominating British philosophy departments at the turn of the 20th century, which was a kind of British Idealism most of which was derived (albeit very distantly) from the work of Hegel. In response Moore developed an approach ("Common Sense Philosophy") which sought to examine philosophical difficulties by a close analysis of the language used in order to determine its meaning. In this way Moore sought to expunge philosophical absurdities such as "time is unreal". Moore's work would have significant, if oblique, influence (largely mediated by Wittgenstein) on Ordinary language philosophy.
Other truth theories of meaning
The Vienna Circle, a famous group of logical positivists from the early 20th century (closely allied with Russell and Frege), adopted the verificationist theory of meaning, a type of truth theory of meaning. The verificationist theory of meaning (in at least one of its forms) states that to say that an expression is meaningful is to say that there are some conditions of experience that could exist to show that the expression is true. As noted, Frege and Russell were two proponents of this way of thinking.
A semantic theory of truth was produced by Alfred Tarski for formal semantics. According to Tarski's account, meaning consists of a recursive set of rules that end up yielding an infinite set of sentences, "'p' is true if and only if p", covering the whole language. His innovation produced the notion of propositional functions discussed in the section on universals (which he called "sentential functions"), and a model-theoretic approach to semantics (as opposed to a proof-theoretic one). Finally, some links were forged to the correspondence theory of truth (Tarski, 1944).
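A minimal sketch of the recursive idea, restricted to a toy propositional fragment (Tarski's full apparatus handles quantifiers via satisfaction by sequences of objects): finitely many clauses fix truth conditions for infinitely many sentences.

```python
def true_in(sentence, valuation):
    """Evaluate a sentence of a toy propositional language.

    Sentences are atom names or nested tuples: ("not", s),
    ("and", s1, s2), ("or", s1, s2). The clauses below mirror
    Tarski-style recursive truth conditions.
    """
    if isinstance(sentence, str):  # atomic sentence
        return valuation[sentence]
    op = sentence[0]
    if op == "not":
        return not true_in(sentence[1], valuation)
    if op == "and":
        return true_in(sentence[1], valuation) and true_in(sentence[2], valuation)
    if op == "or":
        return true_in(sentence[1], valuation) or true_in(sentence[2], valuation)
    raise ValueError(f"unknown connective: {op}")

# Four clauses determine truth conditions for unboundedly many sentences:
v = {"snow_is_white": True, "grass_is_red": False}
print(true_in(("and", "snow_is_white", ("not", "grass_is_red")), v))  # True
```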
Perhaps the most influential current approach in the contemporary theory of meaning is that sketched by Donald Davidson in his introduction to the collection of essays Truth and Meaning in 1967. There he argued for the following two theses:
Any learnable language must be statable in a finite form, even if it is capable of a theoretically infinite number of expressions—as we may assume that natural human languages are, at least in principle. If it could not be stated in a finite way then it could not be learned through a finite, empirical method such as the way humans learn their languages. It follows that it must be possible to give a theoretical semantics for any natural language which could give the meanings of an infinite number of sentences on the basis of a finite system of axioms.
Giving the meaning of a sentence, he further argued, was equivalent to stating its truth conditions. He proposed that it must be possible to account for language as a set of distinct grammatical features together with a lexicon, and for each of them explain its workings in such a way as to generate trivial (obviously correct) statements of the truth conditions of all the (infinitely many) sentences built up from these.
The result is a theory of meaning that rather resembles, by no accident, Tarski's account.
Davidson's account, though brief, constitutes the first systematic presentation of truth-conditional semantics. He proposed simply translating natural languages into first-order predicate calculus in order to reduce meaning to a function of truth.
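To make the compositional point concrete, the following toy fragment (an illustrative sketch in the same spirit, not Davidson's actual program) shows how a finite lexicon plus a single combination rule yields truth conditions for every subject-predicate sentence of the fragment:

```python
# A toy model: the extensions of predicates and the referents of names.
extensions = {
    "runs":  {"Sam"},          # the set of things that run
    "swims": {"Sam", "Alex"},  # the set of things that swim
}
names = {"Sam": "Sam", "Alex": "Alex"}  # names denote individuals

def is_true(subject, predicate):
    """One rule: 'S P' is true iff the referent of S is in the extension of P."""
    return names[subject] in extensions[predicate]

for s in ["Sam", "Alex"]:
    for p in ["runs", "swims"]:
        print(f"'{s} {p}':", is_true(s, p))
```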
Saul Kripke
Saul Kripke examined the relation between sense and reference in dealing with possible and actual situations. He showed that one consequence of his interpretation of certain systems of modal logic was that the reference of a proper name is necessarily linked to its referent, but that the sense is not. So for instance "Hesperus" necessarily refers to Hesperus, even in those imaginary cases and worlds in which perhaps Hesperus is not the evening star. That is, Hesperus is necessarily Hesperus, but only contingently the morning star.
This results in the curious situation that part of the meaning of a name — that it refers to some particular thing — is a necessary fact about that name, but another part — that it is used in some particular way or situation — is not.
Kripke also drew the distinction between speaker's meaning and semantic meaning, elaborating on the work of ordinary language philosophers Paul Grice and Keith Donnellan. The speaker's meaning is what the speaker intends to refer to by saying something; the semantic meaning is what the words uttered by the speaker mean according to the language.
In some cases, people do not say what they mean; in other cases, they say something that is in error. In both these cases, the speaker's meaning and the semantic meaning seem to be different. Sometimes words do not actually express what the speaker wants them to express; so words will mean one thing, and what people intend to convey by them might mean another. The meaning of the expression, in such cases, is ambiguous.
Critiques of truth theories of meaning
W. V. O. Quine attacked both verificationism and the very notion of meaning in his famous essay, "Two Dogmas of Empiricism". In it, he suggested that meaning was nothing more than a vague and dispensable notion. Instead, he asserted, what was more interesting to study was the synonymy between signs. He also pointed out that verificationism was tied to the distinction between analytic and synthetic statements, and argued that such a divide could not be sustained. He also suggested that the unit of analysis for any potential investigation into the world (and, perhaps, meaning) would be the entire body of statements taken as a collective, not just individual statements on their own.
Other criticisms can be raised on the basis of the limitations that truth-conditional theorists themselves admit to. Tarski, for instance, recognized that truth-conditional theories of meaning only make sense of statements, but fail to explain the meanings of the lexical parts that make up statements. Rather, the meaning of the parts of statements is presupposed by an understanding of the truth-conditions of a whole statement, and explained in terms of what he called "satisfaction conditions".
Still another objection (noted by Frege and others) was that some kinds of statements do not seem to have any truth-conditions at all. For instance, "Hello!" has no truth-conditions, because it does not even attempt to tell the listener anything about the state of affairs in the world. In other words, sentences come in different grammatical moods, and not all of them are in the business of describing facts.
Deflationist accounts of truth, sometimes called 'irrealist' accounts, are the staunchest source of criticism of truth-conditional theories of meaning. According to them, "truth" is a word with no serious meaning or function in discourse. For instance, for the deflationist, the sentences "It's true that Tiny Tim is trouble" and "Tiny Tim is trouble" are equivalent. In consequence, for the deflationist, any appeal to truth as an account of meaning has little explanatory power.
The sort of truth theories presented here can also be attacked for their formalism both in practice and principle. The principle of formalism is challenged by the informalists, who suggest that language is largely a construction of the speaker, and so, not compatible with formalization. The practice of formalism is challenged by those who observe that formal languages (such as present-day quantificational logic) fail to capture the expressive power of natural languages (as is arguably demonstrated in the awkward character of the quantificational explanation of definite description statements, as laid out by Bertrand Russell).
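For reference, Russell's quantificational analysis of a definite description, the example of such awkwardness alluded to above, is standardly rendered as:

```latex
% "The F is G" on Russell's analysis: there is exactly one F, and it is G.
\exists x\,\big(F(x) \land \forall y\,(F(y) \rightarrow y = x) \land G(x)\big)
% e.g. "The present king of France is bald" asserts the existence and
% uniqueness of a present king of France, and that he is bald.
```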
Finally, over the past century, forms of logic have been developed that are not dependent exclusively on the notions of truth and falsity. Some of these types of logic have been called modal logics. They explain how certain logical connectives such as "if-then" work in terms of necessity and possibility. Indeed, modal logic was the basis of one of the most popular and rigorous formulations in modern semantics called the Montague grammar. The successes of such systems naturally give rise to the argument that these systems have captured the natural meaning of connectives like if-then far better than an ordinary, truth-functional logic ever could.
Usage and meaning
Throughout the 20th century, English philosophy focused closely on analysis of language. This style of analytic philosophy became very influential and led to the development of a wide range of philosophical tools.
Ludwig Wittgenstein
The philosopher Ludwig Wittgenstein was originally an ideal language philosopher, following the influence of Russell and Frege. In his Tractatus Logico-Philosophicus he had supported the idea of an ideal language built up from atomic statements using logical connectives (see picture theory of meaning and logical atomism). However, as he matured, he came to appreciate more and more the phenomenon of natural language. Philosophical Investigations, published after his death, signalled a sharp departure from his earlier work with its focus upon ordinary language use (see use theory of meaning and ordinary language philosophy). His approach is often summarised by the aphorism "the meaning of a word is its use in a language". However, following in Frege's footsteps, in the Tractatus, Wittgenstein declares: "... Only in the context of a proposition has a name meaning."
His work would come to inspire future generations and spur forward a whole new discipline, which explained meaning in a new way. Meaning in a natural language was seen as primarily a question of how the speaker uses words within the language to express intention.
This close examination of natural language proved to be a powerful philosophical technique. Practitioners who were influenced by Wittgenstein's approach have included an entire tradition of thinkers, featuring P. F. Strawson, Paul Grice, R. M. Hare, R. S. Peters, and Jürgen Habermas.
J. L. Austin
At around the same time Ludwig Wittgenstein was re-thinking his approach to language, reflections on the complexity of language led to a more expansive approach to meaning. Following the lead of George Edward Moore, J. L. Austin examined the use of words in great detail. He argued against fixating on the meaning of words. He showed that dictionary definitions are of limited philosophical use, since there is no simple "appendage" to a word that can be called its meaning. Instead, he showed how to focus on the way in which words are used in order to do things. He analysed the structure of utterances into three distinct parts: locutions, illocutions and perlocutions. His pupil John Searle developed the idea under the label "speech acts". Their work greatly influenced pragmatics.
Peter Strawson
Past philosophers had understood reference to be tied to words themselves. However, Peter Strawson disagreed in his seminal essay, "On Referring", where he argued that there is nothing true about statements on their own; rather, only the uses of statements could be considered to be true or false.
Indeed, one of the hallmarks of the ordinary use perspective is its insistence upon the distinctions between meaning and use. "Meanings", for ordinary language philosophers, are the instructions for usage of words — the common and conventional definitions of words. Usage, on the other hand, is the actual meanings that individual speakers have — the things that an individual speaker in a particular context wants to refer to. The word "dog" is an example of a meaning, but pointing at a nearby dog and shouting "This dog smells foul!" is an example of usage. From this distinction between usage and meaning arose the divide between the fields of pragmatics and semantics.
Yet another distinction is of some utility in discussing language: "mentioning". Mention is when an expression refers to itself as a linguistic item, usually surrounded by quotation marks. For instance, in the expression "'Opopanax' is hard to spell", what is referred to is the word itself ("opopanax") and not what it means (an obscure gum resin). Quotation is a standard example of what, following W. V. O. Quine, are called referentially "opaque" contexts: contexts in which co-referring expressions cannot be substituted freely.
In his essay, "Reference and Definite Descriptions", Keith Donnellan sought to improve upon Strawson's distinction. He pointed out that there are two uses of definite descriptions: attributive and referential. Attributive uses provide a description of whoever is being referred to, while referential uses point out the actual referent. Attributive uses are like mediated references, while referential uses are more directly referential.
Paul Grice
The philosopher Paul Grice, working within the ordinary language tradition, understood "meaning" — in his 1957 article — to have two kinds: natural and non-natural. Natural meaning had to do with cause and effect, for example with the expression "these spots mean measles". Non-natural meaning, on the other hand, had to do with the intentions of the speaker in communicating something to the listener.
In his essay, Logic and Conversation, Grice went on to explain and defend an explanation of how conversations work. His guiding maxim was called the cooperative principle, which claimed that the speaker and the listener will have mutual expectations of the kind of information that will be shared. The principle is broken down into four maxims: Quality (which demands truthfulness and honesty), Quantity (demand for just enough information as is required), Relation (relevance of things brought up), and Manner (lucidity). This principle, if and when followed, lets the speaker and listener figure out the meaning of certain implications by way of inference.
The works of Grice led to an avalanche of research and interest in the field, both supportive and critical. One spinoff was called Relevance theory, developed by Dan Sperber and Deirdre Wilson during the mid-1980s, whose goal was to make the notion of relevance more clear. Similarly, in his work, "Universal pragmatics", Jürgen Habermas began a program that sought to improve upon the work of the ordinary language tradition. In it, he laid out the goal of a valid conversation as a pursuit of mutual understanding.
Noam Chomsky
Although he has focused on the structure and functioning of human syntax, Noam Chomsky has in many works discussed philosophical problems as well, including the problem of meaning and reference in human language. Chomsky has formulated a strong criticism of both the externalist notion of reference (reference consists in a direct or causal relation between words and objects) and the internalist one (reference is a mind-mediated relation holding between words and reality). According to Chomsky, both these notions (and many others widely used in philosophy, such as that of truth) are basically inadequate for naturalistic (that is, scientific) inquiry into the human mind: they are common-sense notions, not scientific notions, and cannot as such enter into scientific discussion. Chomsky argues that the notion of reference can be used only when we deal with scientific languages, whose symbols refer to specific things or entities; when we consider human language expressions, we immediately see that their reference is vague, in the sense that they can be used to denote many things. For example, the word "book" can be used to denote an abstract object (e.g., "he is reading the book") or a concrete one (e.g., "the book is on the chair"); the name "London" can denote at once a set of buildings, the air of a place, and the character of a population (think of the sentence "London is so gray, polluted and sad"). These and other cases lead Chomsky to argue that the only plausible (although not scientific) notion of reference is that of an act of reference, a complex phenomenon of language use (performance) which includes many factors, linguistic and otherwise: beliefs, desires, assumptions about the world, premises, and so on. As Chomsky himself has pointed out, this conception of meaning is very close to that adopted by John Austin, Peter Strawson and the late Wittgenstein.
Colin Murray Turbayne
The Australian philosopher Colin Murray Turbayne explored the role played by conceptual metaphors in the conveyance of "meaning" in his book The Myth of Metaphor (1962). Turbayne argued that metaphorical constructs are essential to any language which lays claim to embody both richness and depth of understanding. He further argued that to misinterpret metaphorical language by taking it literally is to commit a "category mistake" that distorts the conveyance of meaning. Additionally, the failure to recognize dead metaphors results in needless obfuscation in philosophical discourse, as illustrated in the adoption of Newtonian and Cartesian mechanistic explanations of the natural world and the incorporation of "substance" and "substratum" within René Descartes's dualism. In his Metaphors for the Mind, he further argued that metaphors first developed by Plato and Aristotle to explain "mind" and "language" have been incorporated into the philosophical writings of both George Berkeley and Immanuel Kant, and that the "procreation" metaphor in Plato's Timaeus has re-emerged within modern theories of both "thought" and "language". He also argued that, through the use of deductive reasoning, the underlying meaning of such dead metaphors has been distorted to take on the appearance of "objective truth", once again resulting in needless philosophical confusion over their meaning.
Inferential role semantics
Michael Dummett argued against the kind of truth-conditional semantics presented by Davidson. Instead, he argued that basing semantics on assertion conditions avoids a number of difficulties with truth-conditional semantics, such as the transcendental nature of certain kinds of truth condition. He leverages work done in proof-theoretic semantics to provide a kind of inferential role semantics, where:
The meaning of sentences and grammatical constructs is given by their assertion conditions; and
Such a semantics is only guaranteed to be coherent if the inferences associated with the parts of language are in logical harmony.
A semantics based upon assertion conditions is called a verificationist semantics: cf. the verificationism of the Vienna Circle.
This work is closely related, though not identical, to one-factor theories of conceptual role semantics.
Critiques of use theories of meaning
In the second half of the 20th century, the cognitive scientist Jerry Fodor argued that use theories of meaning (of the Wittgensteinian kind) seem to assume that language is solely a public phenomenon, and that there is no such thing as a "private language". Fodor holds, against this, that it is necessary to create or describe the language of thought, which would seemingly require the existence of a "private language".
In the 1960s, David Kellogg Lewis described meaning as use, a feature of a social convention, and conventions as regularities of a specific sort. Lewis' work was an application of game theory to philosophical topics. Conventions, he argued, are a species of coordination equilibria.
Idea theory of meaning
The idea theory of meaning (also ideational theory of meaning), most commonly associated with the British empiricist John Locke, claims that meanings are mental representations provoked by signs.
The term "ideas" is used to refer to either mental representations, or to mental activity in general. Those who seek an explanation for meaning in the former sort of account endorse a stronger sort of idea theory of mind than the latter.
Each idea is understood to be necessarily about something external and/or internal, real or imaginary. For example, in contrast to the abstract meaning of the universal "dog", the referent "this dog" may mean a particular real life chihuahua. In both cases, the word is about something, but in the former it is about the class of dogs as generally understood, while in the latter it is about a very real and particular dog in the real world.
John Locke considered all ideas to be both imaginable objects of sensation and the very unimaginable objects of reflection. He said in his Essay Concerning Human Understanding that words are used both as signs for ideas and also to signify a lack of certain ideas. David Hume held that thoughts were kinds of imaginable entities (see his Enquiry Concerning Human Understanding, section 2). He argued that any words that could not call upon any past experience were without meaning.
In contrast to Locke and Hume, George Berkeley and Ludwig Wittgenstein held that ideas alone are unable to account for the different variations within a general meaning. For example, any hypothetical image of the meaning of "dog" has to include such varied images as a chihuahua, a pug, and a black Labrador; and this seems impossible to imagine, since all of those particular breeds look very different from one another. Another way to see this point is to question why it is that, if we have an image of a specific type of dog (say of a chihuahua), it should be entitled to represent the entire concept.
Another criticism is that some meaningful words, known as non-lexical items, do not have any meaningfully associated image. For example, the word "the" has a meaning, but one would be hard-pressed to find a mental representation that fits it. Still another objection lies in the observation that certain linguistic items name something in the real world, and are meaningful, yet which we have no mental representations to deal with. For instance, it is not known what Newton's father looked like, yet the phrase "Newton's father" still has meaning.
Another problem is that of composition: it is difficult to explain how words and phrases combine into sentences if only ideas are involved in meaning.
Eleanor Rosch and George Lakoff have advanced a theory of "prototypes" which suggests that many lexical categories, at least on the face of things, have "radial structures". That is to say, there are some ideal member(s) in the category that seem to represent the category better than other members. For example, the category of "birds" may feature the robin as the prototype, or the ideal kind of bird. With experience, subjects might come to evaluate membership in the category of "bird" by comparing candidate members to the prototype and evaluating for similarities. So, for example, a penguin or an ostrich would sit at the fringe of the meaning of "bird", because a penguin is unlike a robin.
Intimately related to these researches is the notion of a psychologically basic level, which is both the first level named and understood by children, and "the highest level at which a single mental image can reflect the entire category" (Lakoff 1987:46). The "basic level" of cognition is understood by Lakoff as crucially drawing upon "image-schemas" along with various other cognitive processes.
Philosophers Ned Block, Gilbert Harman and Hartry Field, and cognitive scientists G. Miller and P. Johnson-Laird say that the meaning of a term can be found by investigating its role in relation to other concepts and mental states. They endorse a "conceptual role semantics". Those proponents of this view who understand meanings to be exhausted by the content of mental states can be said to endorse "one-factor" accounts of conceptual role semantics and thus to fit within the tradition of idea theories.
See also
Definitions of philosophy
Meaning (existential)
Semiotics
Semeiotic
Further reading
Akmajian, Adrian, et al. (1995), Linguistics: An Introduction to Language and Communication (fourth edition), Cambridge: MIT Press.
Allan, Keith (1986), Linguistic Meaning, Volume One, New York: Routledge & Kegan Paul.
Arena, Leonardo Vittorio (2012), Nonsense as the Meaning (ebook).
Austin, J. L. (1962), How to Do Things With Words, Cambridge: Harvard University Press.
Berger, Peter and Thomas Luckmann (1967), The Social Construction of Reality : A Treatise in the Sociology of Knowledge (first edition: 240 pages), Anchor Books.
Davidson, Donald (2001), Inquiries into Truth and Interpretation (second edition), Oxford: Oxford University Press.
Dummett, Michael (1981), Frege: Philosophy of Language (second edition), Cambridge: Harvard University Press.
Frege, Gottlob (ed. Michael Beaney, 1997), The Frege Reader, Oxford: Blackwell.
Gauker, Christopher (2003), Words without Meaning, MIT Press.
Goffman, Erving (1959), Presentation of Self in Everyday Life, Anchor Books.
Grice, Paul (1989), Studies in the Way of Words, Cambridge: Harvard University Press.
Searle, John and Daniel Vanderveken (1985), Foundations of Illocutionary Logic, Cambridge: Cambridge University Press.
Searle, John (1969), Speech Acts, Cambridge: Cambridge University Press.
Searle, John (1979), Expression and Meaning, Cambridge: Cambridge University Press.
Stonier, Tom (1997), Information and Meaning: An Evolutionary Perspective, London: Springer.
Philosophy of mathematics | Philosophy of mathematics is the branch of philosophy that deals with the nature of mathematics and its relationship with other human activities.
Major themes that are dealt with in philosophy of mathematics include:
Reality: The question is whether mathematics is a pure product of the human mind or whether it has some reality of its own.
Logic and rigor
Relationship with physical reality
Relationship with science
Relationship with applications
Mathematical truth
Nature as human activity (science, art, game, or all together)
Major themes
Reality
The connection between mathematics and material reality has led to philosophical debates since at least the time of Pythagoras. The ancient philosopher Plato argued that abstractions that reflect material reality have themselves a reality that exists outside space and time. As a result, the philosophical view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism. Independently of their possible philosophical opinions, modern mathematicians may generally be considered Platonists, since they think of and talk about their objects of study as real objects (see Mathematical object).
Armand Borel summarized this view of mathematical reality, and provided quotations from G. H. Hardy, Charles Hermite, Henri Poincaré and Albert Einstein in support of it.
Logic and rigor
Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of syllogisms or inference rules, without any appeal to empirical evidence or intuition.
The rules of rigorous reasoning have been established by the ancient Greek philosophers under the name of logic. Logic is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere.
For many centuries, logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. Around the end of the 19th century, several paradoxes called into question the logical foundation of mathematics, and consequently the validity of the whole of mathematics. This has been called the foundational crisis of mathematics. Some of these paradoxes consist of results that seem to contradict common intuition, such as the possibility of constructing valid non-Euclidean geometries in which the parallel postulate is false, the Weierstrass function that is continuous but nowhere differentiable, and the study by Georg Cantor of infinite sets, which led to the consideration of several sizes of infinity (infinite cardinals). Even more strikingly, Russell's paradox shows that the phrase "the set of all sets" is self-contradictory.
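Two of the results just mentioned can be stated compactly; these are the standard formulations:

```latex
% Russell's paradox: the set of all sets that are not members of themselves
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
% A Weierstrass function, continuous everywhere and differentiable nowhere
% (for 0 < a < 1, b a positive odd integer, and ab > 1 + 3\pi/2):
W(x) = \sum_{n=0}^{\infty} a^{n} \cos\!\big(b^{n} \pi x\big)
```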
Several methods have been proposed to solve the problem by changing the logical framework, such as constructive mathematics and intuitionistic logic. Roughly speaking, the first consists of requiring that every existence theorem provide an explicit example, and the second excludes from mathematical reasoning the law of excluded middle and double negation elimination.
The problem of the foundations of mathematics was eventually resolved with the rise of mathematical logic as a new area of mathematics. In this framework, a mathematical or logical theory consists of a formal language that defines the well-formed assertions, a set of basic assertions called axioms, and a set of inference rules that allow producing new assertions from one or several known assertions. A theorem of such a theory is either an axiom or an assertion that can be obtained from previously known theorems by the application of an inference rule. The Zermelo–Fraenkel set theory with the axiom of choice, generally called ZFC, is such a theory, in which all of mathematics has been restated; it is used implicitly in all mathematics texts that do not explicitly specify on which foundations they are based. Moreover, the other proposed foundations can be modeled and studied inside ZFC.
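The picture of a theory as a set of axioms closed under inference rules can be made concrete in a few lines of code. The following Python sketch is a toy illustration under simplifying assumptions (assertions are opaque strings, and the only rule is a fixed table of one-step implications), not a real proof assistant:

```python
# Toy formal theory: theorems are whatever the inference rule can reach
# from the axioms, here by repeatedly applying a table of implications
# "premise -> conclusion" (a crude stand-in for modus ponens).
def theorems(axioms, implications, max_rounds=100):
    known = set(axioms)
    for _ in range(max_rounds):
        derived = {implications[p] for p in known if p in implications}
        if derived <= known:   # nothing new is derivable: closure reached
            return known
        known |= derived
    return known

# Hypothetical miniature theory: one axiom and two inference steps.
print(theorems({"A"}, {"A": "B", "B": "C"}))   # {'A', 'B', 'C'}
```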
One consequence is that "rigor" is no longer a relevant concept in mathematics, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor comes into play is in the socialized aspects of a proof. In particular, proofs are rarely written in full detail, and some steps of a proof are generally considered trivial, easy, or straightforward, and therefore left to the reader. As most proof errors occur in these skipped steps, a new proof needs to be verified by other specialists of the subject, and can be considered reliable only after having been accepted by the community of specialists, which may take several years.
Also, the concept of "rigor" may remain useful for teaching beginners what a mathematical proof is.
Relationship with physical reality
Before the 19th century, the basic mathematical concepts, such as points, lines, natural numbers and real numbers (used for measurements), were abstractions from the physical world, and it was commonly considered that this was sufficient to define them.
As a consequence of this closeness to physical reality, mathematicians were very cautious when the problems they wanted to solve led them to introduce new concepts that are not directly related to the real world. These precautions are still reflected in modern terminology, where the numbers that are not quotients of natural numbers are called irrational numbers, originally meaning that reason cannot conceive of them. Similarly, real numbers are the numbers that can be used for measurement, while imaginary numbers cannot.
During the 19th century, there was active research into giving more precise definitions to the basic concepts resulting from abstraction from the real world; for example, Peano arithmetic for natural numbers, the formal definitions of limit, series (infinite sums that may have a finite value), and continuity by Cauchy and Weierstrass, and the definition of real numbers by Cauchy and Dedekind. These formal definitions made it possible to prove counterintuitive results, which are a part of the origin of the foundational crisis of mathematics. For example, the Weierstrass function is a function that is everywhere continuous and nowhere differentiable. Since the existence of such a monster seemed impossible, people had two choices: either accept such unrealistic facts, which implies that mathematics does not need to reflect physical reality; or change the logical rules to exclude such monsters. The first choice led to the philosophical school of formalism; in its strong form, this school may be understood as the position that mathematicians need not concern themselves with physical reality. The second choice led to intuitionism and constructivism.
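The Cauchy–Weierstrass definition of limit mentioned here is standardly written:

```latex
\lim_{x \to a} f(x) = L
\;\Longleftrightarrow\;
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x\,
\big( 0 < |x - a| < \delta \;\rightarrow\; |f(x) - L| < \varepsilon \big)
```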
After vigorous debate, the axiomatic approach eventually became a de facto norm in mathematics. This means that mathematical theories must be based on axioms (basic assumptions that are considered true) and a fixed set of inference rules; the theory consists of the results (theorems) that can be deduced (proved) from the axioms by using the inference rules, and the inference rules only. The entities (mathematical objects) involved in the axioms are considered to be defined by the axioms, and nothing else is assumed about their nature. For example, plane geometry can be axiomatized with two sorts of objects, the points and the lines, and a relation "belonging to" or "passing through" that relates points and lines. One of the axioms is "there is exactly one line that passes through two distinct points". The interpretation of the points and lines of the theory as usual points and lines does not matter at all for the validity of the theory. This means that one can verify the correctness of a proof without referring to any figure, and that a proved theorem remains true independently of any interpretation of the entities involved in the axioms. For example, in plane projective geometry, one may interpret points as lines and vice versa. This implies that for every theorem relating points and lines, one gets immediately a new theorem by exchanging the roles of the points and the lines (see duality). Nevertheless, the interpretation of the objects of a theory in terms of physical reality (when possible) or of previously studied abstractions remains fundamental for guiding the choice of the axioms, understanding the subject of the theory, and following the steps of a long proof.
This axiomatic approach has been applied to the whole of mathematics through ZFC, the Zermelo–Fraenkel set theory with the axiom of choice. The whole of mathematics has been rebuilt inside this theory. Unless the contrary is explicitly stated, all modern mathematical texts use it as a foundation of mathematics.
As a consequence, the relationship between mathematics and physical reality is no longer a mathematical question, but the nature of this relationship remains a philosophical question that has no uncontroversial answer.
Relationship with sciences
Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model.
There is still a philosophical debate about whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that, if a result or a theory is wrong, this can be proved by providing a counterexample. As in science, theories and results (theorems) are often obtained from experimentation. In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mental representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.
Unreasonable effectiveness
The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner. It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. Examples of unexpected applications of mathematical theories can be found in many areas of mathematics.
A notable example is the prime factorization of natural numbers, which was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem. A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). Almost 2,000 years later, Johannes Kepler discovered that the trajectories of the planets are ellipses.
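The RSA idea can be sketched in a few lines; the primes below are tiny textbook values chosen for illustration only, and real deployments use much larger keys together with padding schemes:

```python
# Toy RSA key generation, encryption and decryption (insecure sizes).
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

message = 42
cipher = pow(message, e, n)            # encrypt: m^e mod n
assert pow(cipher, d, n) == message    # decrypt: c^d mod n recovers m
```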
In the 19th century, the internal development of geometry (pure mathematics) led to the definition and study of non-Euclidean geometries, spaces of dimension higher than three, and manifolds. At the time, these concepts seemed totally disconnected from physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity, which fundamentally uses these concepts. In particular, the spacetime of special relativity is a non-Euclidean space of dimension four, and the spacetime of general relativity is a (curved) manifold of dimension four.
A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and the baryon Ω⁻. In both cases, the equations of the theories had unexplained solutions, which led to the conjecture of the existence of an unknown particle and to the search for it. In both cases, these particles were discovered a few years later by specific experiments.
History
The origin of mathematics is a subject of arguments and disagreements. Whether the birth of mathematics was by chance or induced by necessity during the development of similar subjects, such as physics, remains an area of contention.
Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. Western philosophies of mathematics go as far back as Pythagoras, who described the theory "everything is mathematics" (mathematicism), Plato, who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity (actual versus potential).
Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one".
These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Frege and of Russell, but was brought into question by developments in the late 19th and early 20th centuries.
Contemporary philosophy
A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic, set theory (both naive set theory and axiomatic set theory), and foundational issues.
It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program.
At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge.
Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object (see Assignment), were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated which provided a conceptual framework in which much mathematical discourse would be interpreted. In mathematics, as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering, propositions could be interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. This reflective critique in which the theory under review "becomes itself the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory.
At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane, known as category theory, and it became a new contender for the natural language of mathematical thinking. As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at the century's beginning. Hilary Putnam summed up one common view of the situation in the last third of the century by saying:
When philosophy discovers something wrong with science, sometimes science has to be changed—Russell's paradox comes to mind, as does Berkeley's attack on the actual infinitesimal—but more often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need.
Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained.
Contemporary schools of thought
Artistic
This view claims that mathematics is the aesthetic combination of assumptions, and therefore that mathematics is an art. A famous mathematician who held this view is the British G. H. Hardy. For Hardy, in his book A Mathematician's Apology, the definition of mathematics was more like the aesthetic combination of concepts.
Platonism
Mathematical Platonism is the form of realism that suggests that mathematical entities are abstract, have no spatiotemporal or causal properties, and are eternal and unchanging. This is often claimed to be the view most people have of numbers. The term Platonism is used because such a view is seen to parallel Plato's Theory of Forms and a "World of Ideas" (Greek: eidos (εἶδος)) described in Plato's allegory of the cave: the everyday world can only imperfectly approximate an unchanging, ultimate reality. Both Plato's cave and Platonism have meaningful, not just superficial connections, because Plato's ideas were preceded and probably influenced by the hugely popular Pythagoreans of ancient Greece, who believed that the world was, quite literally, generated by numbers.
A major question considered in mathematical Platonism is: Precisely where and how do the mathematical entities exist, and how do we know about them? Is there a world, completely separate from our physical one, that is occupied by the mathematical entities? How can we gain access to this separate world and discover truths about the entities? One proposed answer is the Ultimate Ensemble, a theory that postulates that all structures that exist mathematically also exist physically in their own universe.
Kurt Gödel's Platonism postulates a special kind of mathematical intuition that lets us perceive mathematical objects directly. (This view bears resemblances to many things Husserl said about mathematics, and supports Kant's idea that mathematics is synthetic a priori.) Davis and Hersh have suggested in their 1999 book The Mathematical Experience that most mathematicians act as though they are Platonists, even though, if pressed to defend the position carefully, they may retreat to formalism.
Full-blooded Platonism is a modern variation of Platonism, which is in reaction to the fact that different sets of mathematical entities can be proven to exist depending on the axioms and inference rules employed (for instance, the law of the excluded middle, and the axiom of choice). It holds that all mathematical entities exist. They may be provable, even if they cannot all be derived from a single consistent set of axioms.
Set-theoretic realism (also set-theoretic Platonism) a position defended by Penelope Maddy, is the view that set theory is about a single universe of sets. This position (which is also known as naturalized Platonism because it is a naturalized version of mathematical Platonism) has been criticized by Mark Balaguer on the basis of Paul Benacerraf's epistemological problem. A similar view, termed Platonized naturalism, was later defended by the Stanford–Edmonton School: according to this view, a more traditional kind of Platonism is consistent with naturalism; the more traditional kind of Platonism they defend is distinguished by general principles that assert the existence of abstract objects.
Mathematicism
Max Tegmark's mathematical universe hypothesis (or mathematicism) goes further than Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world".
Logicism
Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic. Logicists hold that mathematics can be known a priori, but suggest that our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic, not requiring any special faculty of mathematical intuition. In this view, logic is the proper foundation of mathematics, and all mathematical statements are necessary logical truths.
Rudolf Carnap (1931) presents the logicist thesis in two parts:
The concepts of mathematics can be derived from logical concepts through explicit definitions.
The theorems of mathematics can be derived from logical axioms through purely logical deduction.
Gottlob Frege was the founder of logicism. In his seminal Die Grundgesetze der Arithmetik (Basic Laws of Arithmetic) he built up arithmetic from a system of logic with a general principle of comprehension, which he called "Basic Law V" (for concepts F and G, the extension of F equals the extension of G if and only if, for all objects a, Fa if and only if Ga), a principle that he took to be acceptable as part of logic.
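In modern notation (Frege's own two-dimensional notation differs), Basic Law V is usually rendered with ext(F) for the extension of the concept F:

```latex
\mathrm{ext}(F) = \mathrm{ext}(G)
\;\iff\;
\forall a\,\big(F(a) \leftrightarrow G(a)\big)
```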
Frege's construction was flawed. Bertrand Russell discovered that Basic Law V is inconsistent (this is Russell's paradox). Frege abandoned his logicist program soon after this, but it was continued by Russell and Whitehead. They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of modern mathematics but in an altered, and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make several compromises in order to develop much of mathematics, such as the "axiom of reducibility". Even Russell said that this axiom did not really belong to logic.
Modern logicists (like Bob Hale, Crispin Wright, and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favor of abstraction principles such as Hume's principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into one-to-one correspondence). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical.
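Hume's principle, in the notation common in neo-logicist writing, with #F for "the number of Fs" and ≈ for the existence of a one-to-one correspondence:

```latex
\#F = \#G \;\iff\; F \approx G
```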
Formalism
Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, one can generate the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like—in fact, they are not "about" anything at all.
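The "game" picture can be illustrated with a miniature string-manipulation system. The rules below are invented for illustration, in the spirit of Hofstadter's MIU system, and do not axiomatize any real theory:

```python
# A miniature formal game: strings over {M, I, U}, one axiom, two rules.
def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")      # rule 1: a string xI yields xIU
    out.add(s + s[1:])        # rule 2: a string Mx yields Mxx
    return out

derivable = {"MI"}            # the single axiom
for _ in range(3):            # a few rounds of purely mechanical derivation
    derivable |= {t for s in derivable for t in successors(s)}
print(sorted(derivable))      # the "theorems" reachable so far
```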
Another version of formalism is known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one, if it follows deductively from the appropriate axioms. The same is held to be true for all other mathematical statements.
Formalism need not mean that mathematics is nothing more than a meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism.) But it does allow the working mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by the demands of science or other areas of mathematics.
A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent. Hilbert's goals of creating a system of mathematics that is both complete and consistent were seriously undermined by the second of Gödel's incompleteness theorems, which states that sufficiently expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent.
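Schematically, with Con(T) abbreviating the arithmetized statement that T is consistent, the second incompleteness theorem says:

```latex
% For any consistent, effectively axiomatized T containing enough arithmetic:
T \nvdash \mathrm{Con}(T)
```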
Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.
Other formalists, such as Rudolf Carnap, Alfred Tarski, and Haskell Curry, considered mathematics to be the investigation of formal axiom systems. Mathematical logicians study formal systems but are just as often realists as they are formalists.
Formalists are relatively tolerant and inviting to new approaches to logic, non-standard number systems, new set theories, etc. The more games we study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary.
The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view.
Recently, some formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated proof checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science, this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition—see QED project for a general overview.
Conventionalism
The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions about the physical world.
Intuitionism
In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" (L. E. J. Brouwer). From this springboard, intuitionists seek to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement, held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects.
A major force behind intuitionism was L. E. J. Brouwer, who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic, different from the classical Aristotelian logic; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction. The axiom of choice is also rejected in most intuitionistic set theories, though in some versions it is accepted.
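Two classically valid principles that intuitionistic logic does not validate, in standard notation:

```latex
\nvdash \;\varphi \lor \neg\varphi
\qquad
\nvdash \;\neg\neg\varphi \rightarrow \varphi
```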
In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable numbers, first introduced by Alan Turing. Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science.
Constructivism
Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. In this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. Instead, it is about entities that we can create directly through mental activity. In addition, some adherents of these schools reject non-constructive proofs, such as using proof by contradiction when showing the existence of an object or when trying to establish the truth of some proposition. Important work was done by Errett Bishop, who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 Foundations of Constructive Analysis.
Finitism
Finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book Philosophy of Set Theory, Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists.
The most famous proponent of finitism was Leopold Kronecker, who said: "God made the integers, all else is the work of man."
Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. Another variant of finitism is Euclidean arithmetic, a system developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets. Mayberry's system is Aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super-exponentiation is not a legitimate finitary function.
Structuralism
Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties. For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined by their places in a structure, the number line. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra.
Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence mathematical objects have would clearly be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard.
The ante rem structuralism ("before the thing") has a similar ontology to Platonism. Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem of explaining the interaction between such abstract structures and flesh-and-blood mathematicians (see Benacerraf's identification problem).
The in re structuralism ("in the thing") is the equivalent of Aristotelian realism. Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures.
The post rem structuralism ("after the thing") is anti-realist about structures in a way that parallels nominalism. Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view mathematical systems exist, and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence.
Embodied mind theories
Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus which finds itself in our physical universe. For example, the abstract concept of number springs from the experience of counting discrete objects, which requires human senses such as sight and touch to detect the objects, together with signalling from the brain. It is held that mathematics is not universal and does not exist in any real sense, other than in human brains. Humans construct, but do not discover, mathematics.
If mathematics is considered relevant to a natural world (as under realism or some degree of it, as opposed to pure solipsism), the cognitive processes of pattern-finding and distinguishing objects are also open to study by neuroscience.
The actual relevance of mathematics to reality, while accepted as a trustworthy approximation (it has also been suggested that the evolution of perceptions, the body, and the senses may have been necessary for survival), does not necessarily amount to full realism: human cognition remains subject to flaws such as illusions, assumptions (and consequently the human-made foundations and axioms of mathematics), generalisations, deception, and hallucinations. This may also raise questions about the compatibility of the modern scientific method with general mathematics: while relatively reliable, the method is still limited to what can be measured empirically, and empiricism may not be as reliable as previously assumed (see also 'counterintuitive' concepts such as quantum nonlocality and action at a distance).
Another issue is that a single numeral system is not necessarily applicable to all problem solving. Subjects such as complex or imaginary numbers require specific extensions to the more commonly used axioms of mathematics; otherwise they cannot be adequately understood.
Alternatively, computer programmers may use hexadecimal for its 'human-friendly' representation of binary-coded values, rather than decimal (convenient for counting because humans have ten fingers). The axioms or logical rules behind mathematics also vary through time (such as the invention and adoption of zero).
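As a minimal illustration (a Python sketch written for this point, not drawn from any source), the same quantity can be written in decimal, binary, or hexadecimal without changing the underlying value:

```python
value = 202  # one quantity, several notations

print(bin(value))  # '0b11001010' - binary, as machines store it
print(value)       # 202          - decimal, convenient for ten-fingered humans
print(hex(value))  # '0xca'       - hexadecimal, a compact view of binary

# Each hexadecimal digit encodes exactly four binary digits, which is why
# programmers find hexadecimal a 'human-friendly' rendering of binary data.
assert int("ca", 16) == int("11001010", 2) == 202
```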
As perceptions from the human brain are subject to illusions, assumptions, deceptions, (induced) hallucinations, and cognitive errors, it can be questioned whether they are accurate or strictly indicative of truth (see also philosophy of being), and what this implies about the nature of empiricism itself in relation to the universe, and whether empiricism is independent of the senses and the universe.
On this view, the human mind has no special claim on reality or on approaches to it built out of mathematics. If constructs such as Euler's identity are true, then they are true as a map of the human mind and cognition.
Embodied mind theorists thus explain the effectiveness of mathematics—mathematics was constructed by the brain in order to be effective in this universe.
The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From, by George Lakoff and Rafael E. Núñez. In addition, mathematician Keith Devlin has investigated similar concepts with his book The Math Instinct, as has neuroscientist Stanislas Dehaene with his book The Number Sense. For more on the philosophical ideas that inspired this perspective, see cognitive science of mathematics.
Aristotelian realism
Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract" world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots. Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy that when an egg carton is opened, a set of three eggs is perceived (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in the physical world.
The Euclidean arithmetic developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets also falls into the Aristotelian realist tradition. Mayberry, following Euclid, considers numbers to be simply "definite multitudes of units" realized in nature—such as "the members of the London Symphony Orchestra" or "the trees in Birnam wood". Whether or not there are definite multitudes of units for which Euclid's Common Notion 5 (the whole is greater than the part) fails and which would consequently be reckoned as infinite is for Mayberry essentially a question about Nature and does not entail any transcendental suppositions.
Psychologism
Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws).
John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many 19th-century German logicians such as Sigwart and Erdmann as well as a number of psychologists, past and present: for example, Gustave Le Bon. Psychologism was famously criticized by Frege in his The Foundations of Arithmetic, and in many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Edmund Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. The "Prolegomena" is considered a more concise, fair, and thorough refutation of psychologism than the criticisms made by Frege, and many today consider it a memorable refutation for its decisive blow to psychologism. Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty.
Empiricism
Mathematical empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research, just like facts in any of the other sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like this was John Stuart Mill. Mill's view was widely criticized, because, according to critics such as A.J. Ayer, it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of two pairs coming together and forming a quartet.
Karl Popper was another philosopher to point out empirical aspects of mathematics, observing that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." Popper also noted he would "admit a system as empirical or scientific only if it is capable of being tested by experience."
Contemporary mathematical empiricism, formulated by W. V. O. Quine and Hilary Putnam, is primarily supported by the indispensability argument: mathematics is indispensable to all empirical sciences, and if we want to believe in the reality of the phenomena described by the sciences, we ought also to believe in the reality of those entities required for this description. That is, since physics needs to talk about electrons to say why light bulbs behave as they do, then electrons must exist. Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of any status distinct from that of the other sciences.
Putnam strongly rejected the term "Platonist" as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that rejected mystical notions of truth and accepted much quasi-empiricism in mathematics. This grew from the increasingly popular assertion in the late 20th century that no one foundation of mathematics could ever be proven to exist. It is also sometimes called "postmodernism in mathematics" although that term is considered overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely on quasi-empirical methods primarily, being willing often to forgo rigorous and axiomatic proofs, and still be doing mathematics—at perhaps a somewhat greater risk of failure of their calculations. He gave a detailed argument for this in New Directions. Quasi-empiricism was also developed by Imre Lakatos.
The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of our scientific theory as a whole, i.e. consilience after E.O. Wilson. Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is extraordinarily central, and that it would be extremely difficult for us to revise it, though not impossible.
For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy's Realism in Mathematics. Another example of a realist theory is the embodied mind theory.
For experimental evidence suggesting that human infants can do elementary arithmetic, see Brian Butterworth.
Fictionalism
Mathematical fictionalism was brought to fame in 1980 when Hartry Field published Science Without Numbers, which rejected and in fact reversed Quine's indispensability argument. Where Quine suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics with no reference to numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to do the work formerly done by vector fields. Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space, so no special mathematical objects at all are needed.
Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction. He showed that mathematical physics is a conservative extension of his non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 + 2 = 4" is just as fictitious as "Sherlock Holmes lived at 221B Baker Street"—but both are true according to the relevant fictions.
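Field's conservativity claim can be put schematically (this rendering, with $N$ for the nominalistic physics and $S$ for the mathematical apparatus, is a common textbook formulation rather than Field's own notation): for every nominalistically statable sentence $\varphi$,

$$N + S \vdash \varphi \quad\Longrightarrow\quad N \vdash \varphi,$$

so adding mathematics may shorten or simplify derivations, but it never yields physical consequences beyond those of the nominalistic theory alone.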
Another fictionalist, Mary Leng, expresses the perspective succinctly by dismissing any seeming connection between mathematics and the physical world as "a happy coincidence". This rejection separates fictionalism from other forms of anti-realism, which see mathematics itself as artificial but still bounded or fitted to reality in some way.
By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics, and about fiction in general. Field's approach has been very influential, but is widely rejected. This is in part because of the requirement of strong fragments of second-order logic to carry out his reduction, and because the statement of conservativity seems to require quantification over abstract models or deductions.
Social constructivism
Social constructivism sees mathematics primarily as a social construct, as a product of culture, subject to correction and change. Like the other sciences, mathematics is viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such external forces may change the direction of some mathematical research, there are strong internal constraints—the mathematical traditions, methods, problems, meanings and values into which mathematicians are enculturated—that work to conserve the historically-defined discipline.
This runs counter to the traditional beliefs of working mathematicians, that mathematics is somehow pure or objective. But social constructivists argue that mathematics is in fact grounded by much uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt, and is corrected to the degree it is required or desired by the current mathematical community. This can be seen in the development of analysis from reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk mathematics not enough, due to an overemphasis on axiomatic proof and peer review as practices.
The social nature of mathematics is highlighted in its subcultures. Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for lack of social contact between mathematicians. Social constructivists argue each speciality forms its own epistemic community and often has great difficulty communicating, or motivating the investigation of unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social realists see a deficiency either of human capacity to abstractify, or of humans' cognitive bias, or of mathematicians' collective intelligence as preventing the comprehension of a real universe of mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless.
Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko, although it is not clear that either would endorse the title. More recently Paul Ernest has explicitly formulated a social constructivist philosophy of mathematics. Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number. Reuben Hersh has also promoted the social view of mathematics, calling it a "humanistic" approach, similar to but not quite the same as that associated with Alvin White; one of Hersh's co-authors, Philip J. Davis, has expressed sympathy for the social view as well.
Beyond the traditional schools
Unreasonable effectiveness
Rather than focus on narrow debates about the true nature of mathematical truth, or even on practices unique to mathematicians such as the proof, a growing movement from the 1960s to the 1990s began to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner's famous 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain.
Popper's two senses of number statements
Realist and constructivist theories are normally taken to be contraries. However, Karl Popper argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions: one of which can be explained on constructivist lines; the other on realist lines.
Philosophy of language
Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, the language of science. Although some mathematicians and philosophers would accept the statement "mathematics is a language" (most consider that the language of mathematics is a part of mathematics to which mathematics cannot be reduced), linguists believe that the implications of such a statement must be considered. For example, the tools of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way from other languages. If mathematics is a language, it is a different type of language from natural languages. Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems.
Mohan Ganesalingam has analysed mathematical language using tools from formal linguistics. Ganesalingam notes that some features of natural language are not necessary when analysing mathematical language (such as tense), but many of the same analytical tools can be used (such as context-free grammars). One important difference is that mathematical objects have clearly defined types, which can be explicitly defined in a text: "Effectively, we are allowed to introduce a word in one part of a sentence, and declare its part of speech in another; and this operation has no analogue in natural language."
Arguments
Indispensability argument for realism
This argument, associated with Willard Quine and Hilary Putnam, is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract mathematical entities, such as numbers and sets. The form of the argument is as follows.
One must have ontological commitments to all entities that are indispensable to the best scientific theories, and to those entities only (commonly referred to as "all and only").
Mathematical entities are indispensable to the best scientific theories. Therefore,
One must have ontological commitments to mathematical entities.
The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism. Since theories are not confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude the existence of sets and non-Euclidean geometry, but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position.
Epistemic argument against realism
The anti-realist "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. By general agreement, abstract entities cannot interact causally with concrete, physical entities ("the truth-values of our mathematical assertions depend on facts involving Platonic entities that reside in a realm outside of space-time"). Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects. Another way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs, etc., which is already fully accountable in terms of physical processes in their brains.
Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism.
The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir Roger Penrose.
Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism.
A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis. In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another.
Aesthetics
Many practicing mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy to the philosophers and get back to mathematics—where, presumably, the beauty lies.
In his work on the divine proportion, H.E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art—the reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature.
Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality of √2. The first is the traditional proof by contradiction, ascribed to Euclid; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem.
Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant" proof; Gregory Chaitin has argued against this idea.
Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to characterize what makes one proof more desirable than another when both are logically sound.
Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this view occurs in G. H. Hardy's book A Mathematician's Apology, in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and similar ends.
Journals
Philosophia Mathematica journal
The Philosophy of Mathematics Education Journal homepage
See also
Definitions of mathematics
Formal language
Foundations of mathematics
Golden ratio
Model theory
Non-standard analysis
Philosophy of language
Philosophy of logic
Philosophy of science
Philosophy of physics
Philosophy of probability
Rule of inference
Science studies
Scientific method
Related works
The Analyst
Euclid's Elements
"On Formally Undecidable Propositions of Principia Mathematica and Related Systems"
"On Computable Numbers, with an Application to the Entscheidungsproblem"
Introduction to Mathematical Philosophy
"New Foundations for Mathematical Logic"
Principia Mathematica
The Simplest Mathematics
Historical topics
History and philosophy of science
History of mathematics
History of philosophy
External links
Mathematical Structuralism, Internet Encyclopedia of Philosophy
Abstractionism, Internet Encyclopedia of Philosophy
The London Philosophy Study Guide offers many suggestions on what to read, depending on the student's familiarity with the subject:
Philosophy of Mathematics
Mathematical Logic
Set Theory & Further Logic
R.B. Jones' philosophy of mathematics page | 0.77324 | 0.99599 | 0.770139 |
Nomothetic and idiographic | Nomothetic and idiographic are terms used by Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each one corresponding to a different intellectual tendency, and each one corresponding to a different branch of academia. To say that Windelband supported that last dichotomy is a consequent misunderstanding of his own thought. For him, any branch of science and any discipline can be handled by both methods as they offer two integrating points of view.
Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena, in general.
Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena.
Use in the social sciences
The problem of whether to use nomothetic or idiographic approaches is most sharply felt in the social sciences, whose subjects are unique individuals (the idiographic perspective) who nonetheless have certain general properties or behave according to general rules (the nomothetic perspective).
Often, nomothetic approaches are quantitative, and idiographic approaches are qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro and its further developments (e.g. Discan scale and PSYCHLOPS) are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the Repertory grid when used with elicited constructs and perhaps elicited elements. Personal cognition (D.A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within a situation to scale the ongoing biosocial cognitive processes in units of discrimination from the norm (with M.T. Conner 1986, R.P.J. Freeman 1993 and O. Sharpe 2005). Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data.
In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899.
Theodore Millon stated that when spotting and diagnosing personality disorders, first clinicians start with the nomothetic perspective and look for various general scientific laws; then when they believe they have identified a disorder, they switch their view to the idiographic perspective to focus on the specific individual and his or her unique traits.
In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?). Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?).
In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context.
See also
Nomological
Further reading
Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford.
Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215.
Concepts in epistemology | 0.780119 | 0.987158 | 0.770101 |
Structuralism | Structuralism is an intellectual current and methodological approach, primarily in the social sciences, that interprets elements of human culture by way of their relationship to a broader system. It works to uncover the structural patterns that underlie all the things that humans do, think, perceive, and feel.
Alternatively, as summarized by philosopher Simon Blackburn, structuralism is "the belief that phenomena of human life are not intelligible except through their interrelations. These relations constitute a structure, and behind local variations in the surface phenomena there are constant laws of abstract structure." The structuralist mode of reasoning has since been applied in a range of fields, including anthropology, sociology, psychology, literary criticism, economics, and architecture. Along with Claude Lévi-Strauss, the most prominent thinkers associated with structuralism include linguist Roman Jakobson and psychoanalyst Jacques Lacan.
History and background
The term structuralism is ambiguous, referring to different schools of thought in different contexts. The movement in the humanities and social sciences called structuralism relates to sociology: Émile Durkheim based his sociological concept on 'structure' and 'function', and from his work emerged the sociological approach of structural functionalism.
Apart from Durkheim's use of the term structure, the semiological concept of Ferdinand de Saussure became fundamental for structuralism. Saussure conceived language and society as a system of relations. His linguistic approach was also a refutation of evolutionary linguistics.
Structuralism in Europe developed in the early 20th century, mainly in France and the Russian Empire, in the structural linguistics of Ferdinand de Saussure and the subsequent Prague, Moscow, and Copenhagen schools of linguistics. As an intellectual movement, structuralism became the heir to existentialism. After World War II, an array of scholars in the humanities borrowed Saussure's concepts for use in their respective fields. French anthropologist Claude Lévi-Strauss was arguably the first such scholar, sparking a widespread interest in structuralism.
Throughout the 1940s and 1950s, existentialism, such as that propounded by Jean-Paul Sartre, was the dominant European intellectual movement. Structuralism rose to prominence in France in the wake of existentialism, particularly in the 1960s. The initial popularity of structuralism in France led to its spread across the globe. By the early 1960s, structuralism as a movement was coming into its own and some believed that it offered a single unified approach to human life that would embrace all disciplines.
By the late 1960s, many of structuralism's basic tenets came under attack from a new wave of predominantly French intellectuals/philosophers such as historian Michel Foucault, Jacques Derrida, Marxist philosopher Louis Althusser, and literary critic Roland Barthes. Though elements of their work necessarily relate to structuralism and are informed by it, these theorists eventually came to be referred to as post-structuralists. Many proponents of structuralism, such as Lacan, continue to influence continental philosophy and many of the fundamental assumptions of some of structuralism's post-structuralist critics are a continuation of structuralist thinking.
Russian functional linguist Roman Jakobson was a pivotal figure in the adaptation of structural analysis to disciplines beyond linguistics, including philosophy, anthropology, and literary theory. Jakobson was a decisive influence on anthropologist Claude Lévi-Strauss, by whose work the term structuralism first appeared in reference to social sciences. Lévi-Strauss' work in turn gave rise to the structuralist movement in France, also called French structuralism, influencing the thinking of other writers, most of whom disavowed themselves as being a part of this movement. This included such writers as Louis Althusser and psychoanalyst Jacques Lacan, as well as the structural Marxism of Nicos Poulantzas. Roland Barthes and Jacques Derrida focused on how structuralism could be applied to literature.
Accordingly, the so-called "Gang of Four" of structuralism is considered to be Lévi-Strauss, Lacan, Barthes, and Michel Foucault.
Ferdinand de Saussure
The origins of structuralism are connected with the work of Ferdinand de Saussure on linguistics along with the linguistics of the Prague and Moscow schools. In brief, Saussure's structural linguistics propounded three related concepts.
Saussure argued for a distinction between langue (an idealized abstraction of language) and parole (language as actually used in daily life). He argued that a "sign" is composed of a "signified" (signifié, i.e. an abstract concept or idea) and a "signifier" (signifiant, i.e. the perceived sound/visual image).
Because different languages have different words to refer to the same objects or concepts, there is no intrinsic reason why a specific signifier is used to express a given concept or idea. It is thus "arbitrary."
Signs gain their meaning from their relationships and contrasts with other signs. As he wrote, "in language, there are only differences without positive terms."
Lévi-Strauss
Structuralism rejected the concept of human freedom and choice, focusing instead on the way that human experience and behaviour is determined by various structures. The most important initial work on this score was Lévi-Strauss's 1949 volume The Elementary Structures of Kinship. Lévi-Strauss had known Roman Jakobson during their time together at the New School in New York during WWII and was influenced by both Jakobson's structuralism, as well as the American anthropological tradition.
In Elementary Structures, he examined kinship systems from a structural point of view and demonstrated how apparently different social organizations were different permutations of a few basic kinship structures. In 1958, he published Structural Anthropology, a collection of essays outlining his program for structuralism.
Lacan and Piaget
Blending Freud and Saussure, French (post)structuralist Jacques Lacan applied structuralism to psychoanalysis. Similarly, Jean Piaget applied structuralism to the study of psychology, though in a different way. Piaget, who preferred to define himself as a constructivist, considered structuralism "a method and not a doctrine," because, for him, "there exists no structure without a construction, abstract or genetic."
'Third order'
Proponents of structuralism argue that a specific domain of culture may be understood by means of a structure that is modelled on language and is distinct both from the organizations of reality and those of ideas, or the imagination—the "third order." In Lacan's psychoanalytic theory, for example, the structural order of "the Symbolic" is distinguished both from "the Real" and "the Imaginary;" similarly, in Althusser's Marxist theory, the structural order of the capitalist mode of production is distinct both from the actual, real agents involved in its relations and from the ideological forms in which those relations are understood.
Althusser
Although French theorist Louis Althusser is often associated with structural social analysis, which helped give rise to "structural Marxism," such association was contested by Althusser himself in the Italian foreword to the second edition of Reading Capital. In this foreword Althusser states the following:
Despite the precautions we took to distinguish ourselves from the 'structuralist' ideology…, despite the decisive intervention of categories foreign to 'structuralism'…, the terminology we employed was too close in many respects to the 'structuralist' terminology not to give rise to an ambiguity. With a very few exceptions…our interpretation of Marx has generally been recognized and judged, in homage to the current fashion, as 'structuralist'.… We believe that despite the terminological ambiguity, the profound tendency of our texts was not attached to the 'structuralist' ideology.
Assiter
In a later development, feminist theorist Alison Assiter enumerated four ideas common to the various forms of structuralism:
a structure determines the position of each element of a whole;
every system has a structure;
structural laws deal with co-existence rather than change; and
structures are the "real things" that lie beneath the surface or the appearance of meaning.
In linguistics
In Ferdinand de Saussure's Course in General Linguistics, the analysis focuses not on the use of language (parole, 'speech'), but rather on the underlying system of language (langue). This approach examines how the elements of language relate to each other in the present, synchronically rather than diachronically. Saussure argued that linguistic signs were composed of two parts:
a signifiant ('signifier'): the "sound pattern" of a word, either in mental projection (as when one silently recites lines from signage or a poem to one's self) or in actual, physical realization as part of any kind of text or speech act.
a signifié ('signified'): the concept or meaning of the word.
This differed from previous approaches that focused on the relationship between words and the things in the world that they designate.
Although not fully developed by Saussure, other key notions in structural linguistics can be found in the structural "paradigm": a class of linguistic units (lexemes, morphemes, or even constructions) that are possible in a certain position in a given syntagm, or linguistic environment (such as a given sentence). The different functional role of each of these members of the paradigm is called 'value' (French: valeur).
Prague School
In France, Antoine Meillet and Émile Benveniste continued Saussure's project, and members of the Prague school of linguistics such as Roman Jakobson and Nikolai Trubetzkoy conducted influential research. The clearest and most important example of Prague school structuralism lies in phonemics. Rather than simply compiling a list of which sounds occur in a language, the Prague school examined how they were related. They determined that the inventory of sounds in a language could be analysed as a series of contrasts.
Thus, in English, the sounds /p/ and /b/ represent distinct phonemes because there are cases (minimal pairs) where the contrast between the two is the only difference between two distinct words (e.g. 'pat' and 'bat'). Analyzing sounds in terms of contrastive features also opens up comparative scope: it makes clear, for instance, that the difficulty Japanese speakers have differentiating /r/ and /l/ in English and other languages arises because these sounds are not contrastive in Japanese. Phonology would become the paradigmatic basis for structuralism in a number of different fields.
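This style of analysis can be sketched in code (a Python toy written for illustration; the feature values are simplified assumptions, not a serious phonological model): phonemes are represented as bundles of binary features, and a contrast is whatever feature distinguishes a minimal pair such as 'pat'/'bat'.

```python
# A toy distinctive-feature table: True/False stand for the presence or
# absence of a feature, in the Prague school's spirit of analysing sounds
# by their contrasts rather than in isolation.
features = {
    "p": {"voiced": False, "labial": True},
    "b": {"voiced": True,  "labial": True},
}

def contrasts(inventory, a, b):
    """Return the features on which two phonemes differ."""
    return {f for f in inventory[a] if inventory[a][f] != inventory[b][f]}

# /p/ and /b/ differ only in voicing - the single contrast carried by the
# minimal pair 'pat' / 'bat'.
print(contrasts(features, "p", "b"))  # {'voiced'}
```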
Based on the Prague school concept, André Martinet in France, J. R. Firth in the UK and Louis Hjelmslev in Denmark developed their own versions of structural and functional linguistics.
In anthropology
According to structural theory in anthropology and social anthropology, meaning is produced and reproduced within a culture through various practices, phenomena, and activities that serve as systems of signification.
A structuralist approach may study activities as diverse as food-preparation and serving rituals, religious rites, games, literary and non-literary texts, and other forms of entertainment to discover the deep structures by which meaning is produced and reproduced within the culture. For example, in the 1950s Lévi-Strauss analysed cultural phenomena including mythology, kinship (the alliance theory and the incest taboo), and food preparation. In addition to these studies, he produced more linguistically-focused writings in which he applied Saussure's distinction between langue and parole in his search for the fundamental structures of the human mind, arguing that the structures that form the "deep grammar" of society originate in the mind and operate in people unconsciously. Lévi-Strauss took inspiration from mathematics.
Another concept used in structural anthropology came from the Prague school of linguistics, where Roman Jakobson and others analysed sounds based on the presence or absence of certain features (e.g., voiceless vs. voiced). Lévi-Strauss included this in his conceptualization of the universal structures of the mind, which he held to operate based on pairs of binary oppositions such as hot-cold, male-female, culture-nature, cooked-raw, or marriageable vs. tabooed women.
A third influence came from Marcel Mauss (1872–1950), who had written on gift-exchange systems. Based on Mauss, for instance, Lévi-Strauss argued for an alliance theory—that kinship systems are based on the exchange of women between groups—as opposed to the 'descent'-based theory described by Edward Evans-Pritchard and Meyer Fortes. After Lévi-Strauss succeeded Mauss in his chair at the École Pratique des Hautes Études, his writings became widely popular in the 1960s and 1970s and gave rise to the term "structuralism" itself.
In Britain, authors such as Rodney Needham and Edmund Leach were highly influenced by structuralism. Authors such as Maurice Godelier and Emmanuel Terray combined Marxism with structural anthropology in France. In the United States, authors such as Marshall Sahlins and James Boon built on structuralism to provide their own analysis of human society. Structural anthropology fell out of favour in the early 1980s for a number of reasons. D'Andrade suggests that this was because it made unverifiable assumptions about the universal structures of the human mind. Authors such as Eric Wolf argued that political economy and colonialism should be at the forefront of anthropology. More generally, criticisms of structuralism by Pierre Bourdieu led to a concern with how cultural and social structures were changed by human agency and practice, a trend which Sherry Ortner has referred to as 'practice theory'.
One example is Douglas E. Foley's Learning Capitalist Culture (2010), in which he applied a mixture of structural and Marxist theories to his ethnographic fieldwork among high school students in Texas. Foley analyzed how they reach a shared goal through the lens of social solidarity when he observed "Mexicanos" and "Anglo-Americans" come together on the same football team to defeat the school's rivals. However, he also continually applies a Marxist lens, stating that he "wanted to wow peers with a new cultural marxist theory of schooling."
Some anthropological theorists, however, while finding considerable fault with Lévi-Strauss's version of structuralism, did not turn away from a fundamental structural basis for human culture. The Biogenetic Structuralism group for instance argued that some kind of structural foundation for culture must exist because all humans inherit the same system of brain structures. They proposed a kind of neuroanthropology which would lay the foundations for a more complete scientific account of cultural similarity and variation by requiring an integration of cultural anthropology and neuroscience—a program that theorists such as Victor Turner also embraced.
In literary criticism and theory
In literary theory, structuralist criticism relates literary texts to a larger structure, which may be a particular genre, a range of intertextual connections, a model of a universal narrative structure, or a system of recurrent patterns or motifs.
The field of structuralist semiotics argues that there must be a structure in every text, which explains why it is easier for experienced readers than for non-experienced readers to interpret a text. Everything that is written seems to be governed by rules, or "grammar of literature", that one learns in educational institutions and that are to be unmasked.
A potential problem for a structuralist interpretation is that it can be highly reductive; as scholar Catherine Belsey puts it: "the structuralist danger of collapsing all difference." An example of such a reading might be if a student concludes the authors of West Side Story did not write anything "really" new, because their work has the same structure as Shakespeare's Romeo and Juliet. In both texts a girl and a boy fall in love (a "formula" with a symbolic operator between them would be "Boy + Girl") despite the fact that they belong to two groups that hate each other ("Boy's Group - Girl's Group" or "Opposing forces") and conflict is resolved by their deaths. Structuralist readings focus on how the structures of the single text resolve inherent narrative tensions. If a structuralist reading focuses on multiple texts, there must be some way in which those texts unify themselves into a coherent system. The versatility of structuralism is such that a literary critic could make the same claim about a story of two friendly families ("Boy's Family + Girl's Family") that arrange a marriage between their children despite the fact that the children hate each other ("Boy - Girl") and then the children commit suicide to escape the arranged marriage; the justification is that the second story's structure is an 'inversion' of the first story's structure: the relationship between the values of love and the two pairs of parties involved have been reversed.
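The 'formula' reading above can be made concrete in a toy model (a Python sketch invented for this illustration; the relation names and scoring are hypothetical): stories are reduced to signed relations between parties, and one structure counts as an 'inversion' of another when the signs are systematically flipped.

```python
# +1 marks an affinity ('Boy + Girl'), -1 an opposition ('Boy's Group - Girl's Group').
romeo_and_juliet = {("Boy", "Girl"): +1,
                    ("Boy's Group", "Girl's Group"): -1}

arranged_marriage = {("Boy", "Girl"): -1,
                     ("Boy's Family", "Girl's Family"): +1}

def is_inversion(a, b):
    """Crude check: the two structures carry the same relations with opposite signs."""
    return sorted(a.values()) == sorted(-v for v in b.values())

print(is_inversion(romeo_and_juliet, arranged_marriage))  # True
```

The point of the toy is only that, for a structuralist reading, what matters is this relational skeleton, not the characters or settings in which it is clothed.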
Structuralist literary criticism argues that the "literary banter of a text" can lie only in new structure, rather than in the specifics of character development and voice in which that structure is expressed. Literary structuralism often follows the lead of Vladimir Propp, Algirdas Julien Greimas, and Claude Lévi-Strauss in seeking out basic deep elements in stories, myths, and more recently, anecdotes, which are combined in various ways to produce the many versions of the ur-story or ur-myth.
There is considerable similarity between structural literary theory and Northrop Frye's archetypal criticism, which is also indebted to the anthropological study of myths. Some critics have also tried to apply the theory to individual works, but the effort to find unique structures in individual literary works runs counter to the structuralist program and has an affinity with New Criticism.
In economics
Yifu Lin criticizes early structural economic systems and theories, discussing their failures. He writes: "The structuralism believes that the failure to develop advanced capital-intensive industries spontaneously in a developing country is due to market failures caused by various structural rigidities..." and "According to neoliberalism, the main reason for the failure of developing countries to catch up with developed countries was too much state intervention in the market, causing misallocation of resources, rent seeking and so forth." For Lin, these failures instead center on the unlikelihood that such advanced industries can develop quickly within developing countries.
New Structural Economics (NSE)
New structural economics is an economic development strategy developed by World Bank Chief Economist Justin Yifu Lin. The strategy combines ideas from both neoclassical economics and structural economics.
NSE studies two parts: the base and the superstructure. A base is a combination of forces and relations of production, consisting of, but not limited to, industry and technology, while the superstructure consists of hard infrastructure and institutions. This yields an explanation of how the base impacts the superstructure, which in turn determines transaction costs.
Interpretations and general criticisms
Structuralism is less popular today than other approaches, such as post-structuralism and deconstruction. Structuralism has often been criticized for being ahistorical and for favouring deterministic structural forces over the ability of people to act. As the political turbulence of the 1960s and 1970s (particularly the student uprisings of May 1968) began affecting academia, issues of power and political struggle moved to the center of public attention.
In the 1980s, deconstruction—and its emphasis on the fundamental ambiguity of language rather than its logical structure—became popular. By the end of the century, structuralism was seen as a historically important school of thought, but the movements that it spawned, rather than structuralism itself, commanded attention.
Several social theorists and academics have strongly criticized structuralism or even dismissed it. French hermeneutic philosopher Paul Ricœur (1969) criticized Lévi-Strauss for overstepping the limits of validity of the structuralist approach, ending up in what Ricœur described as "a Kantianism without a transcendental subject."
Anthropologist Adam Kuper (1973) argued that: "'Structuralism' came to have something of the momentum of a millennial movement and some of its adherents felt that they formed a secret society of the seeing in a world of the blind. Conversion was not just a matter of accepting a new paradigm. It was, almost, a question of salvation."

Philip Noel Pettit (1975) called for an abandoning of "the positivist dream which Lévi-Strauss dreamed for semiology," arguing that semiology is not to be placed among the natural sciences. Cornelius Castoriadis (1975) criticized structuralism as failing to explain symbolic mediation in the social world; he viewed structuralism as a variation on the "logicist" theme, arguing that, contrary to what structuralists advocate, language—and symbolic systems in general—cannot be reduced to logical organizations on the basis of the binary logic of oppositions.
Critical theorist Jürgen Habermas (1985) accused structuralists like Foucault of being positivists; Foucault, while not an ordinary positivist per se, paradoxically uses the tools of science to criticize science, according to Habermas. (See Performative contradiction and Foucault–Habermas debate.) Sociologist Anthony Giddens (1993) is another notable critic; while Giddens draws on a range of structuralist themes in his theorizing, he dismisses the structuralist view that the reproduction of social systems is merely "a mechanical outcome."
See also
Antihumanism
Engaged theory
Genetic structuralism
Holism
Isomorphism
Post-structuralism
Russian formalism
Structuralist film theory
Structuration theory
Émile Durkheim
Structural functionalism
Structuralism (philosophy of science)
Structuralism (philosophy of mathematics)
Structuralism (psychology)
Structural change
Structuralist economics
Further reading
Angermuller, Johannes. 2015. Why There Is No Poststructuralism in France: The Making of an Intellectual Generation. London: Bloomsbury.
Roudinesco, Élisabeth. 2008. Philosophy in Turbulent Times: Canguilhem, Sartre, Foucault, Althusser, Deleuze, Derrida. New York: Columbia University Press.
Primary sources
Althusser, Louis. Reading Capital.
Barthes, Roland. S/Z.
Deleuze, Gilles. 1973. "À quoi reconnaît-on le structuralisme?" Pp. 299–335 in Histoire de la philosophie, Idées, Doctrines. Vol. 8: Le XXe siècle, edited by F. Châtelet. Paris: Hachette
de Saussure, Ferdinand. 1916. Course in General Linguistics.
Foucault, Michel. The Order of Things.
Jakobson, Roman. Essais de linguistique générale.
Lacan, Jacques. The Seminars of Jacques Lacan.
Lévi-Strauss, Claude. The Elementary Structures of Kinship.
—— 1958. Structural Anthropology [Anthropologie structurale]
—— 1964–1971. Mythologiques
Wilcken, Patrick, ed. Claude Levi-Strauss: The Father of Modern Anthropology.
Linguistic theories and hypotheses
Literary criticism
Philosophical anthropology
Psychoanalytic theory
Sociological theories
Theories of language | 0.771115 | 0.998681 | 0.770097 |
Inquiry-based learning | Inquiry-based learning (also spelled as enquiry-based learning in British English) is a form of active learning that starts by posing questions, problems or scenarios. It contrasts with traditional education, which generally relies on the teacher presenting facts and their knowledge about the subject. Inquiry-based learning is often assisted by a facilitator rather than a lecturer. Inquirers will identify and research issues and questions to develop knowledge or solutions. Inquiry-based learning includes problem-based learning, and is generally used in small-scale investigations and projects, as well as research. The inquiry-based instruction is principally very closely related to the development and practice of thinking and problem-solving skills.
History
Inquiry-based learning is primarily a pedagogical method, developed during the discovery learning movement of the 1960s as a response to traditional forms of instruction—where people were required to memorize information from instructional materials, such as direct instruction and rote learning. The philosophy of inquiry-based learning finds its antecedents in constructivist learning theories, such as the work of Piaget, Dewey, Vygotsky, and Freire among others, and can be considered a constructivist philosophy. Generating information and making meaning of it based on personal or societal experience is referred to as constructivism. Dewey's experiential learning pedagogy (that is, learning through experiences) comprises the learner actively participating in personal or authentic experiences to make meaning from it. Inquiry can be conducted through experiential learning because inquiry values the same concepts, which include engaging with the content/material in questioning, as well as investigating and collaborating to make meaning. Vygotsky approached constructivism as learning from an experience that is influenced by society and the facilitator. The meaning constructed from an experience can be concluded as an individual or within a group.
In the 1960s Joseph Schwab called for inquiry to be divided into three distinct levels. This was later formalized by Marshall Herron in 1971, who developed the Herron Scale to evaluate the amount of inquiry within a particular lab exercise. Since then, there have been a number of revisions proposed and inquiry can take various forms. There is a spectrum of inquiry-based teaching methods available.
Inquiry learning has been used as a teaching and learning tool for thousands of years, however, the use of inquiry within public education has a much briefer history. Ancient Greek and Roman educational philosophies focused much more on the art of agricultural and domestic skills for the middle class and oratory for the wealthy upper class. It was not until the Enlightenment, or the Age of Reason, during the late 17th and 18th century that the subject of Science was considered a respectable academic body of knowledge. Up until the 1900s the study of science within education had a primary focus on memorizing and organizing facts.
John Dewey, a well-known philosopher of education at the beginning of the 20th century, was the first to criticize science education for not being taught in a way that develops young scientific thinkers. Dewey proposed that science should be taught as a process and way of thinking, not as a subject with facts to be memorized. While Dewey was the first to draw attention to this issue, much of the reform within science education followed the lifelong work and efforts of Joseph Schwab.
Joseph Schwab was an educator who proposed that science did not need to be a process for identifying stable truths about the world that we live in, but rather science could be a flexible and multi-directional inquiry driven process of thinking and learning.
Schwab believed that science in the classroom should more closely reflect the work of practicing scientists. Schwab developed three levels of open inquiry that align with the breakdown of inquiry processes that we see today.
Students are provided with questions, methods and materials and are challenged to discover relationships between variables
Students are provided with a question, however, the method for research is up to the students to develop
Phenomena are proposed but students must develop their own questions and method for research to discover relationships among variables
The graduated levels of scientific inquiry outlined by Schwab demonstrate that students need to develop thinking skills and strategies prior to being exposed to higher levels of inquiry. Effectively, these skills need to be scaffolded by the teacher or instructor until students are able to develop questions, methods, and conclusions on their own.
Characteristics
Specific learning processes that people engage in during inquiry-learning include:
Creating questions of their own
Obtaining supporting evidence to answer the question(s)
Explaining the evidence collected
Connecting the explanation to the knowledge obtained from the investigative process
Creating an argument and justification for the explanation
Inquiry learning involves developing questions, making observations, doing research to find out what information is already recorded, developing methods for experiments, developing instruments for data collection, collecting, analyzing, and interpreting data, outlining possible explanations and creating predictions for future study.
Levels
There are many different explanations for inquiry teaching and learning and the various levels of inquiry that can exist within those contexts. The article titled The Many Levels of Inquiry by Heather Banchi and Randy Bell (2008) clearly outlines four levels of inquiry.
Level 1: Confirmation inquiry
The teacher has taught a particular science theme or topic. The teacher then develops questions and a procedure that guides students through an activity where the results are already known. This method is useful for reinforcing concepts taught, for introducing students to following procedures and collecting and recording data correctly, and for confirming and deepening understandings.
Level 2: Structured inquiry
The teacher provides the initial question and an outline of the procedure. Students are to formulate explanations of their findings through evaluating and analyzing the data that they collect.
Level 3: Guided inquiry
The teacher provides only the research question for the students. The students are responsible for designing and following their own procedures to test that question and then communicate their results and findings.
Level 4: Open/true inquiry
Students formulate their own research question(s), design and follow through with a developed procedure, and communicate their findings and results. This type of inquiry is often seen in science fair contexts where students drive their own investigative questions.
Banchi and Bell (2008) explain that teachers should begin their inquiry instruction at the lower levels and work their way to open inquiry in order to effectively develop students' inquiry skills. Open inquiry activities are only successful if students are motivated by intrinsic interests and if they are equipped with the skills to conduct their own research study.
Open/true inquiry learning
An important aspect of inquiry-based learning is the use of open learning, as evidence suggests that utilizing only lower-level inquiry is not enough to develop critical and scientific thinking to its full potential. Open learning has no prescribed target or result that people have to achieve. There is an emphasis on the individual manipulating information and creating meaning from a set of given materials or circumstances. In many conventional and structured learning environments, people are told what the outcome is expected to be, and then they are simply expected to 'confirm' or show evidence that this is the case.
Open learning has many benefits. It means students do not simply perform experiments in a routine-like fashion, but actually think about the results they collect and what they mean. With traditional non-open lessons there is a tendency for students to say that the experiment 'went wrong' when they collect results contrary to what they are told to expect. In open learning there are no wrong results; students have to evaluate the strengths and weaknesses of the results they collect themselves and decide their value.
Open learning has been developed by a number of science educators including the American John Dewey and the German Martin Wagenschein. Wagenschein's ideas particularly complement both open learning and inquiry-based learning in teaching work. He emphasized that students should not be taught bald facts, but should understand and explain what they are learning. His most famous example of this was when he asked physics students to tell him what the speed of a falling object was. Nearly all students would produce an equation, but no students could explain what this equation meant. Wagenschein used this example to show the importance of understanding over knowledge.
Although both guided and open/true inquiry were found to promote science literacy and interest, each has its own advantages. While open/true inquiry may contribute to students' initiative, flexibility and adaptability better than guided inquiry in the long run, some claim that it may lead to high cognitive load and that guided inquiry is more efficient in terms of time and content learning.
Inquisitive learning
Sociologist of education Phillip Brown defined inquisitive learning as learning that is intrinsically motivated (e.g. by curiosity and interest in knowledge for its own sake), as opposed to acquisitive learning that is extrinsically motivated (e.g. by acquiring high scores on examinations to earn credentials). However, occasionally the term inquisitive learning is simply used as a synonym for inquiry-based learning.
Inquiry-based learning in academic disciplines
Science education
History
A catalyst for reform within North American science education was the 1957 launch of Sputnik, the Soviet Union's satellite. This historic scientific breakthrough caused a great deal of concern about the science and technology education that American students were receiving. In 1958 the U.S. Congress developed and passed the National Defense Education Act in order to provide math and science teachers with adequate teaching materials.
Science standards
America's Next Generation Science Standards (NGSS) embrace student-centered inquiry-based pedagogy by implementing a three-part approach to science education: Disciplinary Core Ideas (DCIs), Science and Engineering Practices (SEPs), and Crosscutting Concepts (CCCs). The standards are designed so that students learn science by performing scientific practices in the classroom. Students use practices such as asking questions, planning and carrying out investigations, collaborating, collecting and analyzing data, and arguing from evidence to learn the core ideas and concepts in scientific content areas. These practices are comparable to the 21st-century skills that have been shown to be indicators of success in modern societies and workplaces, regardless of whether the field is science-based.
Pedagogical applications
Inquiry-based pedagogy in science education has been shown to increase students' scientific knowledge and literacy compared with more traditional pedagogical methods. However, even though students in inquiry-based classrooms show higher scientific knowledge, they have also been shown to experience increased frustration and decreased confidence in their scientific ability compared with peers taught using traditional methods. Research has also shown that while inquiry-based pedagogy can improve students' science achievement, social contexts must be taken into account, because achievement gaps among students may be as likely to widen as to decrease, owing to differences in student readiness for inquiry-based learning linked to social and economic status.
In cases where students' scientific knowledge in an inquiry-based classroom was not significantly different from that of peers taught with traditional methods, students' problem-solving ability was nonetheless found to be improved. Inquiry as a pedagogical framework and learning process fits within many educational models, including problem-based learning and the 5E Model of Education.
Problem-based learning
Inquiry as a pedagogical framework has been shown to be especially effective when used alongside problem-based learning (PBL) assignments. As a student-centered strategy, problem-based learning fits well within an inquiry-based classroom. Students learn science by performing science: asking questions, designing experiments, collecting data, making claims, and using data to support claims. By creating a culture and community of inquiry in a science classroom, students learn science by working collaboratively with their peers to investigate the world around them and ways to solve problems affecting their communities. Students confronted with real-world problems that affect their everyday lives are shown to have increased engagement and feel more encouraged to solve the problems posed to them.
5E Model of Science Education
The 5E Model of Science Education is a planning structure that helps science teachers develop student-centered inquiry-based lessons and units. In the 5E model, students learn science by exploring their questions using the same approach scientists use to explore theirs. By using this approach, science teachers help their students connect scientific content learned in the classroom with phenomena from their own lives and apply that learning to new areas, in science and beyond.
The 5E Model is broken into the following sections which may repeat and occur at various stages of the learning process.
Engage: This is generally considered to be the opening stage of the 5E Model and is used to inspire student curiosity and should help students connect new phenomena to prior learning. This stage of the 5E model also aims to identify student misconceptions that need to be addressed through the lessons designed by the teacher.
Explore: In this stage, students investigate the phenomena observed during the engage stage and answer any questions they have generated based on their observations. The level of inquiry (i.e. fully open vs. guided) may vary based on the level, age, and readiness of students.
Explain: In this stage, the teacher helps students piece together the information they gathered during the explore stage. Again, the level of direct teacher instruction and explanation may vary based on the level, age, and readiness of students.
Elaborate/Expand: This stage determines if students are truly able to apply the information they have learned to new areas and to the solution of real world problems.
Evaluate: In this stage students evaluate their own learning and the teacher evaluates student understanding and ability to apply knowledge to multiple areas.
Collaboration and communication
Effective collaboration and communication are an integral part of scientists' and engineers' everyday lives, and their importance is reflected in the representation of these skills in the science and engineering practices of the Next Generation Science Standards. Inquiry education supports these skills, especially when students take part in a community of inquiry. Students who are actively collaborating and communicating in an inquiry-based science class exhibit and develop many of these skills. Specifically, these students:
make observations and ask questions with their peers
work with peers to design solutions to problems
analyze claims of their peers
argue from evidence
support their peers' growth and search for knowledge
Social studies and history
The College, Career, and Civic Life (C3) Framework for Social Studies State Standards was a joint collaboration among states and social studies organizations, including the National Council for the Social Studies, designed to focus social studies education on the practice of inquiry, emphasizing "the disciplinary concepts and practices that support students as they develop the capacity to know, analyze, explain, and argue about interdisciplinary challenges in our social world." The C3 Framework recommends an "Inquiry Arc" incorporating four dimensions: 1. developing questions and planning inquiries; 2. applying disciplinary concepts and tools; 3. evaluating primary sources and using evidence; and 4. communicating conclusions and taking informed action. For example, a theme for this approach could be an exploration of etiquette today and in the past. Students might formulate their own questions or begin with an essential question such as "Why are men and women expected to follow different codes of etiquette?" Students explore change and continuity of manners over time and the perspectives of different cultures and groups of people. They analyze primary source documents such as books of etiquette from different time periods and form conclusions that answer the inquiry questions. Students finally communicate their conclusions in formal essays or creative projects. They may also take action by recommending solutions for improving school climate.
Robert Bain in How Students Learn described a similar approach called "problematizing history". First a learning curriculum is organized around central concepts. Next, a question and primary sources are provided, such as eyewitness historical accounts. The task for inquiry is to create an interpretation of history that will answer the central question. Students will form a hypothesis, collect and consider information and revisit their hypothesis as they evaluate their data.
Ontario's kindergarten program
After Charles Pascal's report in 2009, the Canadian province of Ontario's Ministry of Education decided to implement a full-day kindergarten program that focuses on inquiry and play-based learning, called The Early Learning Kindergarten Program. As of September 2014, all primary schools in Ontario started the program. The curriculum document outlines the philosophy, definitions, process and core learning concepts for the program. Bronfenbrenner's ecological model, Vygotsky's zone of proximal development, Piaget's child development theory and Dewey's experiential learning are at the heart of the program's design. As research shows, children learn best through play, whether independently or in a group. Three forms of play are noted in the curriculum document: pretend or "pretense" play, socio-dramatic play and constructive play. Through play and authentic experiences, children interact with their environment (people and/or objects) and question things, thus leading to inquiry learning. A chart on page 15 of the document clearly outlines the process of inquiry for young children, including initial engagement, exploration, investigation, and communication. The new program supports a holistic approach to learning. For further details, please see the curriculum document.
Since the program is extremely new, there is limited research on its success and areas for improvement. One government research report was released on the initial groups of children in the new kindergarten program. The Final Report: Evaluation of the Implementation of the Ontario Full-Day Early-Learning Kindergarten Program from Vanderlee, Youmans, Peters, and Eastabrook (2012) concludes, on the basis of primary research, that high-need children improved more than children who did not attend Ontario's new kindergarten program. As with inquiry-based learning in all divisions and subject areas, longitudinal research is needed to examine the full extent of this teaching/learning method.
Learning to read in the Netherlands
Since 2013, Dutch children have participated in a curriculum of learning to read through an inquiry-based pedagogical program. The program, from the Dutch developmental psychologist Ewald Vervaet, is named Ontdekkend Leren Lezen (OLL; 'Discovery Learning to Read') and has three parts. As of 2019, OLL is only available in Dutch.
OLL's main characteristic is that it is for children who have reached reading maturity. Reading maturity is assessed with the Reading Maturity Test, a descriptive test that consists of two subtests.
Misconceptions
There are several common misconceptions regarding inquiry-based science, the first being that inquiry science is simply instruction that teaches students to follow the scientific method. Many teachers had the opportunity to work within the constraints of the scientific method as students themselves and assume inquiry learning must be the same. Inquiry science is not just about solving problems in six simple steps but much more broadly focused on the intellectual problem-solving skills developed throughout a scientific process. Additionally, not every hands-on lesson can be considered inquiry.
Some educators believe that there is only one true method of inquiry, which would be described as level four: open inquiry. While open inquiry may be the most authentic form of inquiry, there are many skills and a level of conceptual understanding that students must develop before they can be successful at this high level of inquiry. While inquiry-based science is considered to be a teaching strategy that fosters higher-order thinking in students, it should be one of several methods used. A multifaceted approach to science keeps students engaged and learning.
Not every student is going to learn the same amount from an inquiry lesson; students must be invested in the topic of study to authentically reach the set learning goals. Teachers must be prepared to ask students questions to probe their thinking processes in order to assess accurately. Inquiry science requires a lot of time, effort, and expertise; however, the benefits outweigh the costs when true authentic learning can take place.
Neuroscience complexity
The literature states that inquiry requires multiple cognitive processes and variables, such as causality and co-occurrence, that are enriched with age and experience.
Kuhn et al. (2000) used explicit training workshops to teach children in grades six to eight in the United States how to inquire through a quantitative study. By completing an inquiry-based task at the end of the study, the participants demonstrated enhanced mental models by applying different inquiry strategies. In a similar study, Kuhn and Pease (2008) completed a longitudinal quantitative study following a set of American children from grades four to six to investigate the effectiveness of scaffolding strategies for inquiry. Results demonstrated that the children benefitted from the scaffolding, as they outperformed the grade seven control group on an inquiry task. An understanding of the neuroscience of inquiry learning, and of the scaffolding process related to it, should be reinforced in the training of Ontario's primary teachers.
Necessity for teacher training
There is a necessity for professional collaboration when executing a new inquiry program (Chu, 2009; Twigg, 2010). The teacher training and the process of using inquiry learning should be a joint mission to ensure that the maximal amount of resources is used and that teachers are producing the best learning scenarios. The scholarly literature supports this notion. The education professionals who participated in Twigg's (2010) experiment emphasized year-round professional development sessions, such as workshops, weekly meetings and observations, to ensure inquiry is being implemented in the class correctly. Another example is Chu's (2009) study, in which the participants appreciated the professional collaboration of educators, information technicians and librarians in providing more resources and expertise for preparing the structure and resources for the inquiry project. To establish professional collaboration and researched training methods, administrative support is required for funding.
Criticism
Kirschner, Sweller, and Clark's (2006) review of the literature found that although constructivists often cite each other's work, empirical evidence is not often cited. Nonetheless the constructivist movement gained great momentum in the 1990s, because many educators began to write about this philosophy of learning.
Hmelo-Silver, Duncan, & Chinn cite several studies supporting the success of the constructivist problem-based and inquiry learning methods. For example, they describe a project called GenScope, an inquiry-based science software application. Students using the GenScope software showed significant gains over the control groups, with the largest gains shown in students from basic courses.
In contrast, Hmelo-Silver et al. also cite a large study by Geier on the effectiveness of inquiry-based science for middle school students, as demonstrated by their performance on high-stakes standardized tests. The improvement was 14% for the first cohort of students and 13% for the second cohort. This study also found that inquiry-based teaching methods greatly reduced the achievement gap for African-American students.
In a 2006 article, the Thomas B. Fordham Institute's president, Chester E. Finn Jr., was quoted as saying "But like so many things in education, it gets carried to excess... [the approach is] fine to some degree." The organization ran a study in 2005 concluding that the emphasis states put on inquiry-based learning is too great.
Richard E. Mayer from the University of California, Santa Barbara, wrote in 2004 that there was sufficient research evidence to make any reasonable person skeptical about the benefits of discovery learning—practiced under the guise of cognitive constructivism or social constructivism—as a preferred instructional method. He reviewed research on discovery of problem-solving rules culminating in the 1960s, discovery of conservation strategies culminating in the 1970s, and discovery of LOGO programming strategies culminating in the 1980s. In each case, guided discovery was more effective than pure discovery in helping students learn and transfer.
It should be cautioned that inquiry-based learning takes a lot of planning before implementation. It is not something that can be put into place in the classroom quickly. Measures must be put in place for how students' knowledge and performance will be assessed and how standards will be incorporated. The teacher's responsibility during inquiry exercises is to support and facilitate student learning (Bell et al., 769–770). A common mistake teachers make is lacking the vision to see where students' weaknesses lie. According to Bain, teachers cannot assume that students will hold the same assumptions and thinking processes as a professional within that discipline (p. 201).
While some see inquiry-based teaching as increasingly mainstream, it can be perceived as conflicting with the standardized testing common in standards-based assessment systems, which emphasize the measurement of student knowledge and the meeting of pre-defined criteria, as in the shift towards "fact" in changes to the National Assessment of Educational Progress as a result of the American No Child Left Behind program.
Additional scholarly research literature
Chu (2009) used a mixed-method design to examine the outcome of an inquiry project completed by students in Hong Kong with the assistance of multiple educators. Chu's (2009) results show that the children were more motivated and academically successful than the control group.
Cindy Hmelo-Silver reviewed a number of reports on a variety of studies into problem-based learning.
Edelson, Gordin and Pea describe five significant challenges to implementing inquiry-based learning and present strategies for addressing them through the design of technology and curriculum. They present a design history covering four generations of software and curriculum to show how these challenges arise in classrooms and how the design strategies respond to them.
See also
Action learning
Design-based learning
Discovery learning
Networked learning
Phenomenon-based learning
POGIL
Problem-based learning
Progressive inquiry
Project-based learning
Scientific literacy
Three-part lesson
Notes
References and further reading
External links
Inquiry-based middle school lesson plan: "Born to Run: Artificial Selection Lab"
Teaching Inquiry-based Science
What is Inquiry?
Applied learning
Philosophy of education
Education reform
Standards-based education
Inquiry
Educational practices | 0.775151 | 0.99337 | 0.770012 |
Secular ethics | Secular ethics is a branch of moral philosophy in which ethics is based solely on human faculties such as logic, empathy, reason or moral intuition, and not derived from belief in supernatural revelation or guidance—a source of ethics in many religions. Secular ethics refers to any ethical system that does not draw on the supernatural, and includes humanism, secularism and freethinking. A classical example of literature on secular ethics is the Kural text, authored by the ancient Indian philosopher Valluvar.
Secular ethical systems comprise a wide variety of ideas, including the normativity of social contracts, some forms of the attribution of intrinsic moral value, intuition-based deontology, cultural moral relativism, and the idea that scientific reasoning can reveal objective moral truth (known as the science of morality).
Secular ethics frameworks are not always mutually exclusive with theological values. For example, the Golden Rule or a commitment to non-violence could be supported by both religious and secular frameworks. Secular ethics systems can vary within the societal and cultural norms of a specific time period, and may be held by persons of any religious persuasion as well as by atheists.
Tenets of secular ethics
Despite the breadth and diversity of their philosophical views, secular ethicists generally share one or more of the following principles:
Human beings, through their ability to empathize, are capable of determining ethical grounds.
The well-being of others is central to ethical decision-making.
Human beings, through logic and reason, are capable of deriving normative principles of behavior.
This may lead to behavior preferable to that propagated or condoned on the basis of religious texts. Alternatively, it may lead to the advocacy of a system of moral principles that a broad group of people, both religious and non-religious, can agree upon.
Human beings have the moral responsibility to ensure that societies and individuals act based on these ethical principles.
Societies should, if at all possible, advance from a less ethical and just form to a more ethical and just form.
Many of these tenets are applied in the science of morality, the use of the scientific method to answer moral questions. Various thinkers have framed morality as a question of empirical truth to be explored in a scientific context. This science is related to ethical naturalism, a type of ethical realism.
In How Good People Make Tough Choices: Resolving the Dilemmas of Ethical Living, Rushworth Kidder identifies four general characteristics of an ethical code:
1. It is brief
2. It is usually not explanatory
3. It can be expressed in a number of forms (e.g. positive or negative, single words or a list of sentences)
4. It centers on moral values
Humanist ethics
Humanists endorse a universal morality based on the commonality of human nature, holding that knowledge of right and wrong rests on our best understanding of our individual and joint interests rather than stemming from a transcendental or arbitrarily local source, and therefore rejecting faith completely as a basis for action. The goal of humanist ethics is a search for viable individual, social and political principles of conduct, judging them by their ability to enhance human well-being and individual responsibility, and thus ultimately to eliminate human suffering.
The International Humanist and Ethical Union (IHEU) is the world-wide umbrella organization for those adhering to the Humanist life stance.
Humanism is a democratic and ethical life stance, which affirms that human beings have the right and responsibility to give meaning and shape to their own lives. It stands for the building of a more humane society through an ethic based on human and other natural values in the spirit of reason and free inquiry through human capabilities. It is not theistic, and it does not accept supernatural views of reality.
Humanism is known to adopt principles of the Golden Rule.
Secular ethics and religion
There are those who state that religion is not necessary for moral behavior at all. The Dalai Lama has said that compassion and affection are human values independent of religion: "We need these human values. I call these secular ethics, secular beliefs. There’s no relationship with any particular religion. Even without religion, even as nonbelievers, we have the capacity to promote these things."
Those who are unhappy with the negative orientation of traditional religious ethics believe that prohibitions can only set the absolute limits of what a society is willing to tolerate from people at their worst, not guide them towards achieving their best. In other words, someone who follows all these prohibitions has just barely avoided being a criminal, not acted as a positive influence on the world. They conclude that rational ethics can lead to a fully expressed ethical life, while religious prohibitions are insufficient.
That does not mean secular ethics and religion are mutually exclusive. In fact, many principles, such as the Golden Rule, are present in both systems, and some religious people, as well as some Deists, prefer to adopt a rational approach to ethics.
Examples of secular ethical codes
Humanist Manifestos
The Humanist Manifestos are three manifestos, the first published in 1933, that outline the philosophical views and stances of humanists. Integral to the manifestos is a lack of supernatural guidance.
Alternatives to the Ten Commandments
There are numerous versions of Alternatives to the Ten Commandments.
Girl Scout law
The Girl Scout law is as follows:
I will do my best to be
honest and fair,
friendly and helpful,
considerate and caring,
courageous and strong, and
responsible for what I say and do,
and to
respect myself and others,
respect authority,
use resources wisely,
make the world a better place, and
be a sister to every Girl Scout.
United States Naval Academy honor concept
"Midshipmen are persons of integrity: They stand for that which is right.
They tell the truth and ensure that the full truth is known. They do not lie.
They embrace fairness in all actions. They ensure that work submitted as their own is their own, and that assistance received from any source is authorized and properly documented. They do not cheat.
They respect the property of others and ensure that others are able to benefit from the use of their own property. They do not steal."
Minnesota Principles
The Minnesota Principles were proposed "by the Minnesota Center for Corporate Responsibility in 1992 as a guide to international business activities":
Business activities must be characterized by fairness. We understand fairness to include equitable treatment and equality of opportunity for all participants in the marketplace.
Business activities must be characterized by honesty. We understand honesty to include candor, truthfulness and promise-keeping.
Business activities must be characterized by respect for human dignity. We understand this to mean that business activities should show a special concern for the less powerful and the disadvantaged.
Business activities must be characterized by respect for the environment. We understand this to mean that business activities should promote sustainable development and prevent environmental degradation and waste of resources.
Rotary Four-Way Test
The Four-Way Test is the "linchpin of Rotary International's ethical practice." It acts as a test of thoughts as well as actions. It asks, "Of the things we think, say, or do":
Is it the truth?
Is it fair to all concerned?
Will it build goodwill and better friendships?
Will it be beneficial to all concerned?
Military codes
As the United States Constitution prohibits the establishment of a government religion, US military codes of conduct typically contain no religious overtones.
West Point Honor Code
The West Point honor code states that "A cadet will not lie, cheat, steal, or tolerate those who do." The non-toleration clause is key in differentiating it from numerous other codes.
Nature and ethics
Whether or not the relationships between animals found in nature and between people in early human evolution can provide a basis for human morality is a persistently unresolved question. Thomas Henry Huxley wrote in Evolution and Ethics in 1893 that people make a grave error in trying to create moral ideas from the behavior of animals in nature. He remarked:
The biologist and writer Stephen Jay Gould has stated that "answers will not be read passively from nature" and "[t]he factual state of the world does not teach us how we, with our powers for good and evil, should alter or preserve it in the most ethical manner". Thus, he concluded that ideas of morality should come from a form of higher mental reason, with nature viewed as an independent phenomenon.
Evolutionary ethics is not the only way to involve nature with ethics. For example, there are ethical realist theories such as ethical naturalism. Related to ethical naturalism is the idea that ethics are best explored not just through the lens of philosophy but through science as well (a science of morality).
Key philosophers and philosophical texts
Epicurus
Epicurus (341–270 BCE), in his philosophy of Epicureanism, posits an ethics of pleasure based on the study of nature, and teaches that we should carry out our choices and rejections based on hedonic calculus. Epicureanism also teaches a curriculum of control of desires based on the hierarchy of desires: natural and necessary desires are top priority, natural yet unnecessary desires are secondary, and those that are neither natural nor necessary are dismissed. The main texts of its ethics are the 40 Principal Doctrines (Kyriai Doxai) and Epicurus' Letter to Menoeceus. As per Principal Doctrine 5, the ethical code of the School promotes living pleasantly, justly, prudently, and correctly.
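The hierarchy of desires described above amounts to a simple three-way classification, and as a toy illustration only (not a formalization found in Epicurus' own texts) it can be rendered as a short Python sketch; the function name and the example desires in the comments are conventional illustrations, not quotations:

def rank_desire(natural: bool, necessary: bool) -> str:
    # Epicurean hierarchy as described above: natural and necessary
    # desires come first, natural but unnecessary desires are secondary,
    # and desires that are neither natural nor necessary are dismissed.
    if natural and necessary:
        return "top priority"
    if natural:
        return "secondary"
    return "dismissed"

print(rank_desire(True, True))    # e.g. plain food and shelter: top priority
print(rank_desire(True, False))   # e.g. luxurious food: secondary
print(rank_desire(False, False))  # e.g. fame: dismissed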
Valluvar
Thiruvalluvar (before c. 5th century CE), a South Indian poet-philosopher and the author of the Kural, a non-denominational Classical Tamil work on secular ethics and morality, is believed to have lived between the 1st century BCE and the 5th century CE. While others of his time chiefly focused on the praise of God, culture and the ruler of the land, Valluvar focused on the moral behaviors of the common individual. Valluvar limits his theistic teachings to the introductory chapter of the Kural text, the "Praise of God." Throughout the text thereafter, he focuses on the everyday moral behaviors of an individual, thus making the text a secular one. Even in the introductory chapter, he refrains from mentioning the name of any particular god but only addresses God in generic terms such as "the Creator," "the truly Wise One," "the One of eight-fold excellence," and so forth. Translated into about 40 world languages, the Kural text remains one of the most widely translated non-religious works in the world. Praised as "the Universal Veda," it emphasizes the ethical edifices of non-violence, moral vegetarianism, casteless human brotherhood, absence of desires, the path of righteousness and truth, and so forth, besides covering a wide range of subjects such as the moral codes of rulers, friendship, agriculture, knowledge and wisdom, sobriety, love, and domestic life.
Holyoake
George Jacob Holyoake's 1896 publication English Secularism defines secularism thus:
"Secularism is a code of duty pertaining to this life, founded on considerations purely human, and intended mainly for those who find theology indefinite or inadequate, unreliable or unbelievable. Its essential principles are three: (1) The improvement of this life by material means. (2) That science is the available Providence of man. (3) That it is good to do good. Whether there be other good or not, the good of the present life is good, and it is good to seek that good."
Holyoake held that secularism should take no interest at all in religious questions (as they were irrelevant), and was thus to be distinguished from strong freethought and atheism. In this he disagreed with Charles Bradlaugh, and the disagreement split the secularist movement between those who argued that anti-religious movements and activism was not necessary or desirable and those who argued that it was.
Nietzsche
Nietzsche's many works speak of a master-slave morality and of the will to power, the stronger overcoming the weaker, as well as of Darwinistic adaptation and the will to live. Nietzsche expressed his moral philosophy throughout his collection of works; the most important of these for secular ethics are The Gay Science (in which the famous phrase "God is dead" was first used), Thus Spoke Zarathustra, Beyond Good and Evil and On the Genealogy of Morals.
According to Nietzsche, our understanding of cause and effect may be limited to a single will, an inclination identical to that of other living things. In this respect Nietzsche's theory is more straightforward than Kant's, since Nietzsche clarified our sense of causality by appealing simply to the will to power, whereas Kant postulated two sorts of wills (free and unfree). We should not "accept numerous forms of causality until the experiment of making do with a single one has been pushed to its farthest extent," according to Nietzsche, since the "conscience of method requires" it. On this formulation, the will to power appears practically beyond dispute, since causation is our ultimate certainty.
Kant
On ethics, Kant wrote works that both described the nature of universal principles and also sought to demonstrate the procedure of their application. Kant maintained that only a "good will" is morally praiseworthy, so that doing what appears to be ethical for the wrong reasons is not a morally good act. Kant's emphasis on one's intent or reasons for acting is usually contrasted with the utilitarian tenet that the goodness of an action is to be judged solely by its results. Utilitarianism works with hypothetical imperatives: if one wants __, one must do __. Contrast this with the Kantian ethic of the categorical imperative, where the moral act is done for its own sake and is framed: one must do __, or alternatively, one must not do __.
For instance, under Kantian ethics, if a person were to give money to charity because failure to do so would result in some sort of punishment from a god or Supreme Being, then the charitable donation would not be a morally good act. A dutiful action must be performed solely out of a sense of duty; any other motivation profanes the act and strips it of its moral quality.
Utilitarianism
Utilitarianism (from the Latin utilis, useful) is a theory of ethics that prescribes the quantitative maximization of good consequences for a population. It is a form of consequentialism. This good to be maximized is usually happiness, pleasure, or preference satisfaction. Though some utilitarian theories might seek to maximize other consequences, these consequences generally have something to do with the welfare of people (or of people and nonhuman animals). For this reason, utilitarianism is often associated with the term welfarist consequentialism.
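As a minimal sketch of this "quantitative maximization", assume a toy setting in which each available action assigns a numeric welfare score to every member of a population; a total utilitarian then picks the action with the greatest sum. The action names and numbers below are invented for illustration:

# Toy model of total utilitarianism: choose the action that maximizes
# the sum of welfare across the population.
actions = {
    "build_park": [3, 2, 2, 1],  # welfare each of four people receives
    "build_road": [5, 1, 0, 1],
    "do_nothing": [1, 1, 1, 1],
}

best = max(actions, key=lambda a: sum(actions[a]))
print(best, sum(actions[best]))  # build_park 8

Other utilitarian variants change only the aggregation rule; an average utilitarian, for instance, would divide each sum by the population size, which in this fixed-population example leaves the ranking unchanged.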
In utilitarianism, it is the "end result" which is fundamental (as opposed to Kantian ethics discussed above). Thus using the same scenario as above, it would be irrelevant whether the person giving money to charity was doing so out of personal or religious conviction, the mere fact that the charitable donation is being made is sufficient for it to be classified as morally good.
See also
Outline of ethics
Secular humanism
Secular morality
References
Universal (metaphysics) | In metaphysics, a universal is what particular things have in common, namely characteristics or qualities. In other words, universals are repeatable or recurrent entities that can be instantiated or exemplified by many particular things. For example, suppose there are two chairs in a room, each of which is green. These two chairs share the quality of "chairness", as well as "greenness" or the quality of being green; in other words, they share two "universals". There are three major kinds of qualities or characteristics: types or kinds (e.g. mammal), properties (e.g. short, strong), and relations (e.g. father of, next to). These are all different types of universals.
Paradigmatically, universals are abstract (e.g. humanity), whereas particulars are concrete (e.g. the personhood of Socrates). However, universals are not necessarily abstract and particulars are not necessarily concrete. For example, one might hold that numbers are particular yet abstract objects. Likewise, some philosophers, such as D. M. Armstrong, consider universals to be concrete.
Most do not consider classes to be universals, although some prominent philosophers do, such as John Bigelow.
Problem of universals
The problem of universals is an ancient problem in metaphysics on the existence of universals. The problem arises from attempts to account for the phenomenon of similarity or attribute agreement among things. For example, grass and Granny Smith apples are similar or agree in attribute, namely in having the attribute of greenness. The issue is how to account for this sort of agreement in attribute among things.
There are many philosophical positions regarding universals. Taking "beauty" as an example, four positions are:
Idealism: beauty is a property constructed in the mind, so it exists only in descriptions of things.
Platonic extreme realism: beauty is a property that exists in an ideal form independently of any mind or thing.
Aristotelian moderate realism or conceptualism: beauty is a property of things (fundamentum in re) that the mind abstracts from these beautiful things.
Nominalism: there are no universals, only individuals.
Taking a broader view, the main positions are generally considered classifiable as: extreme realism, nominalism (sometimes simply named "anti-realism" with regard to universals), moderate realism, and idealism. Extreme realists posit the existence of independent, abstract universals to account for attribute agreement. Nominalists deny that universals exist, claiming that they are not necessary to explain attribute agreement. Conceptualists posit that universals exist only in the mind, or when conceptualized, denying the independent existence of universals but accepting that they have a fundamentum in re.
Complications which arise include the implications of language use and the complexity of relating language to ontology.
Particular
A universal may have instances, known as its particulars. For example, the type dog (or doghood) is a universal, as are the property red (or redness) and the relation betweenness (or being between). Any particular dog, red thing, or object that is between other things is not a universal, however, but is an instance of a universal. That is, a universal type (doghood), property (redness), or relation (betweenness) inheres in a particular object (a specific dog, red thing, or object between other things).
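A loose programming analogy, offered purely as an illustration and not as a metaphysical claim, treats a universal as a type and its particulars as instances of that type:

class Dog:
    # In the analogy, the class plays the role of the universal "doghood".
    def __init__(self, name: str):
        self.name = name

fido = Dog("Fido")  # a particular that instantiates the universal
rex = Dog("Rex")    # another particular instantiating the same universal

print(isinstance(fido, Dog), isinstance(rex, Dog))  # True True

The analogy is imperfect (a Python class is itself a concrete object), but it captures the one-over-many structure: one repeatable type, many instances in which it inheres.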
Platonic realism
Platonic realism holds universals to be the referents of general terms, that is, the abstract, nonphysical, non-mental entities to which words such as "sameness", "circularity", and "beauty" refer. Particulars are the referents of proper names, such as "Phaedo", or of definite descriptions that identify single objects, such as the phrase "that person over there". Other metaphysical theories may use the terminology of universals to describe physical entities.
Plato's examples of what we might today call universals included mathematical and geometrical ideas such as the circle and the natural numbers. Plato's views on universals did, however, vary across several different discussions. In some cases, Plato spoke as if the perfect circle functioned as the form or blueprint for all copies and for the word definition of circle. In other discussions, Plato describes particulars as "participating" in the associated universal.
Contemporary realists agree with the thesis that universals are multiply-exemplifiable entities. Examples include D. M. Armstrong, Nicholas Wolterstorff, Reinhardt Grossmann, and Michael Loux.
Nominalism
Nominalists hold that universals are not real mind-independent entities but either merely concepts (sometimes called "conceptualism") or merely names. Nominalists typically argue that properties are abstract particulars (like tropes) rather than universals. JP Moreland distinguishes between "extreme" and "moderate" nominalism. Examples of nominalists include Buddhist logicians and apoha theorists, the medieval philosophers Roscelin of Compiègne and William of Ockham, and the contemporary philosophers W. V. O. Quine, Wilfrid Sellars, D. C. Williams, and Keith Campbell.
Ness-ity-hood principle
The ness-ity-hood principle is used mainly by English-speaking philosophers to generate convenient, concise names for universals or properties. According to the principle, a name for any universal may be formed by taking the name of the predicate and adding the suffix "ness", "ity", or "hood". For example, the universal that is distinctive of left-handers may be formed by taking the predicate "left-handed" and adding "ness", which yields the name "left-handedness". The principle is most helpful in cases where there is no established or standard name for the universal in ordinary English usage: what is the name of the universal distinctive of chairs? "Chair" in English is used not only as a subject (as in "The chair is broken"), but also as a predicate (as in "That is a chair"). So to generate a name for the universal distinctive of chairs, take the predicate "chair" and add "ness", which yields "chairness".
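Because the principle is a mechanical naming rule, it can be illustrated with a short Python sketch; the function name below is an invented convenience, not standard philosophical terminology:

def universal_name(predicate: str, suffix: str = "ness") -> str:
    # Form a name for a universal by appending "ness", "ity", or "hood"
    # to the predicate, per the principle described above.
    return predicate + suffix

print(universal_name("left-handed"))      # left-handedness
print(universal_name("chair"))            # chairness
print(universal_name("brother", "hood"))  # brotherhood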
See also
Hypostatic abstraction
Philosophy of mathematics
Sortal
Transcendental nominalism
The Secret of Hegel
Universality (philosophy)
Universalism
Notes
References
Feldman, Fred (2005). "The Open Question Argument: What It Isn't; and What It Is", Philosophical Issues 15, Normativity.
Loux, Michael J. (1998). Metaphysics: A Contemporary Introduction, N.Y.: Routledge.
Loux, Michael J. (2001). "The Problem of Universals" in Metaphysics: Contemporary Readings, Michael J. Loux (ed.), N.Y.: Routledge, pp. 3–13.
MacLeod, M. & Rubenstein, E. (2006). "Universals", The Internet Encyclopedia of Philosophy, J. Fieser & B. Dowden (eds.). (link)
Moreland, J. P. (2001). Universals, McGill-Queen's University Press/Acumen.
Price, H. H. (1953). "Universals and Resemblance", Ch. 1 of Thinking and Experience, Hutchinson's University Library.
Rodriguez-Pereyra, Gonzalo (2008). "Nominalism in Metaphysics", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). (link)
Further reading
Aristotle, Categories (link)
Aristotle, Metaphysics (link)
Armstrong, D. M. (1989). Universals: An Opinionated Introduction, Westview Press. (link)
Bolton, M., "Universals, Essences, and Abstract Entities", in: D. Garber, M. Ayers, eds., The Cambridge History of Seventeenth-Century Philosophy (Cambridge: Cambridge University Press, 1998), vol. I, pp. 178–211
Lewis, D. (1983), "New work for a theory of universals". Australasian Journal of Philosophy. Vol. 61, No. 4.
Libera, Alain de (2005), Der Universalienstreit. Von Platon bis zum Ende des Mittelalters, München, Wilhelm Fink Verlag, 2005
Plato, Phaedo (link)
Plato, Republic (esp. books V, VI, VII and X) (link)
Plato, Parmenides (link)
Plato, Sophist (link)
Quine, W. V. O. (1961). "On What There is," in From a Logical Point of View, 2nd/ed. N.Y: Harper and Row.
Russell, Bertrand (1912). "The World of Universals," in The Problems of Philosophy, Oxford University Press.
Russell, Bertrand (1912b). "On the Relation of Universals and Particulars" (link)
Swoyer, Chris (2000). "Properties", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). (link)
Williams, D. C. (1953). "On the Elements of Being", Review of Metaphysics, vol. 17. (link)
External links
Chrysippus – Stanford Encyclopedia of Philosophy
Chrysippus – Internet Encyclopedia of Philosophy
Metaphysical properties
Ontology
Abstract object theory
Substance theory | 0.777926 | 0.989766 | 0.769965 |
Scientific theory | A scientific theory is an explanation of an aspect of the natural world and universe that can be (or a fortiori, that has been) repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.
A scientific theory differs from a scientific fact or scientific law in that a theory seeks to explain "why" or "how", whereas a fact is a simple, basic observation and a law is an empirical description of a relationship between facts and/or other laws. For example, Newton's Law of Gravity is a mathematical equation that can be used to predict the attraction between bodies, but it is not a theory to explain how gravity works. Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."
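For reference, the equation in question is Newton's law of universal gravitation,

F = \frac{G m_1 m_2}{r^2},

where m_1 and m_2 are the two masses, r is the distance between their centers, and G is the gravitational constant. The formula predicts the magnitude of the attraction but by itself says nothing about why masses attract, which is the sense in which it is a law rather than a theory.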
The meaning of the term scientific theory (often contracted to theory for brevity) as used in the disciplines of science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in a scientific context it most often refers to an explanation that has already been tested and is widely accepted as valid.
The strength of a scientific theory is related to the diversity of phenomena it can explain and its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. Some theories are so well-established that they are unlikely ever to be fundamentally changed (for example, scientific theories such as evolution, heliocentric theory, cell theory, theory of plate tectonics, germ theory of disease, etc.). In certain cases, a scientific theory or scientific law that fails to fit all data can still be useful (due to its simplicity) as an approximation under specific conditions. An example is Newton's laws of motion, which are a highly accurate approximation to special relativity at velocities that are small relative to the speed of light.
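The sense in which Newtonian mechanics approximates special relativity can be made concrete with a short derivation. The relativistic kinetic energy of a body of mass m is

E_k = (\gamma - 1) m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},

and for speeds v much smaller than the speed of light c, the binomial expansion gives \gamma \approx 1 + \tfrac{1}{2} v^2/c^2, so that

E_k \approx \tfrac{1}{2} m v^2,

which is exactly the Newtonian kinetic energy.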
Scientific theories are testable and make verifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power. Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine. A scientific hypothesis can never be "proven", because scientists cannot fully confirm that the hypothesis is true. Instead, scientists say that a study "supports" or is consistent with the hypothesis.
Types
Albert Einstein described two different types of scientific theories: "Constructive theories" and "principle theories". Constructive theories are constructive models for phenomena: for example, kinetic theory. Principle theories are empirical generalisations, one such example being Newton's laws of motion.
Characteristics
Essential criteria
For a theory to be accepted within most of academia there is usually one simple criterion: the theory's claims must be observable and repeatable. This criterion is essential to prevent fraud and to sustain science itself.
The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all. Predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable.
A body of descriptions of knowledge can be called a theory if it fulfills the following criteria:
It makes falsifiable predictions with consistent accuracy across a broad area of scientific inquiry (such as mechanics).
It is well-supported by many independent strands of evidence, rather than a single foundation.
It is consistent with preexisting experimental results and at least as accurate in its predictions as are any preexisting theories.
These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc.
Other criteria
In addition, most scientists prefer to work with a theory that meets the following qualities:
It can be subjected to minor adaptations to account for new data that do not fit it perfectly, as they are discovered, thus increasing its predictive capability over time.
It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. This is because for each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.
Definitions from scientific organizations
The United States National Academy of Sciences defines scientific theories as follows:
The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the Sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics)...One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory". It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
Note that the term theory would not be appropriate for describing untested but intricate hypotheses or even scientific models.
Formation
The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis. When enough experimental results have been gathered in a particular area of inquiry, scientists may propose an explanatory framework that accounts for as many of these as possible. This explanation is also tested, and if it fulfills the necessary criteria (see above), then the explanation becomes a theory. This can take many years, as it can be difficult or complicated to gather sufficient evidence.
Once all of the criteria have been met, the theory will be widely accepted by scientists (see scientific consensus) as the best available explanation of at least some phenomena. It will have made predictions of phenomena that previous theories could not explain or could not predict accurately, and it will have withstood many repeated bouts of testing. The strength of the evidence is evaluated by the scientific community, and the most important experiments will have been replicated by multiple independent groups.
Theories do not have to be perfectly accurate to be scientifically useful. For example, the predictions made by classical mechanics are known to be inaccurate in the relativistic realm, but they are almost exactly correct at the comparatively low velocities of common human experience. In chemistry, there are many acid-base theories providing highly divergent explanations of the underlying nature of acidic and basic compounds, but they are very useful for predicting their chemical behavior. Like all knowledge in science, no theory can ever be completely certain, since it is possible that future experiments might conflict with the theory's predictions. However, theories supported by the scientific consensus have the highest level of certainty of any scientific knowledge; for example, that all objects are subject to gravity or that life on Earth evolved from a common ancestor.
Acceptance of a theory does not require that all of its major predictions be tested, if it is already supported by sufficiently strong evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term "theoretical". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory. As Feynman puts it: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
Modification and improvement
If experimental results contrary to a theory's predictions are observed, scientists first evaluate whether the experimental design was sound, and if so they confirm the results by independent replication. A search for potential improvements to the theory then begins. Solutions may require minor or major changes to the theory, or none at all if a satisfactory explanation is found within the theory's existing framework. Over time, as successive modifications build on top of each other, theories consistently improve and greater predictive accuracy is achieved. Since each new version of a theory (or a completely new theory) must have more predictive and explanatory power than the last, scientific knowledge consistently becomes more accurate over time.
If modifications to the theory or other explanations seem to be insufficient to account for the new results, then a new theory may be required. Since scientific knowledge is usually durable, this occurs much less commonly than modification. Furthermore, until such a theory is proposed and accepted, the previous theory will be retained. This is because it is still the best available explanation for many other phenomena, as verified by its predictive power in other contexts. For example, it has been known since 1859 that the observed perihelion precession of Mercury violates Newtonian mechanics, but the theory remained the best explanation available until relativity was supported by sufficient evidence. Also, while new theories may be proposed by a single person or by many, the cycle of modifications eventually incorporates contributions from many different scientists.
After the changes, the accepted theory will explain more phenomena and have greater predictive power (if it did not, the changes would not be adopted); this new explanation will then be open to further replacement or modification. If a theory does not require modification despite repeated tests, this implies that the theory is very accurate. This also means that accepted theories continue to accumulate evidence over time, and the length of time that a theory (or any of its principles) remains accepted often indicates the strength of its supporting evidence.
Unification
In some cases, two or more theories may be replaced by a single theory that explains the previous theories as approximations or special cases, analogous to the way a theory is a unifying explanation for many confirmed hypotheses; this is referred to as unification of theories. For example, electricity and magnetism are now known to be two aspects of the same phenomenon, referred to as electromagnetism.
When the predictions of different theories appear to contradict each other, this is also resolved by either further evidence or unification. For example, physical theories in the 19th century implied that the Sun could not have been burning long enough to allow certain geological changes as well as the evolution of life. This was resolved by the discovery of nuclear fusion, the main energy source of the Sun. Contradictions can also be explained as the result of theories approximating more fundamental (non-contradictory) phenomena. For example, atomic theory is an approximation of quantum mechanics. Current theories describe three separate fundamental phenomena of which all other theories are approximations; the potential unification of these is sometimes called the Theory of Everything.
Example: Relativity
In 1905, Albert Einstein published the principle of special relativity, which soon became a theory. Special relativity predicted the alignment of the Newtonian principle of Galilean invariance, also termed Galilean relativity, with the electromagnetic field. By omitting the luminiferous aether from special relativity, Einstein held that the time dilation and length contraction measured for an object in relative motion apply when that motion is inertial—that is, when the object exhibits constant velocity (speed with direction) as measured by its observer. He thereby duplicated the Lorentz transformation and the Lorentz contraction, which had been hypothesized to resolve experimental riddles and had been inserted into electrodynamic theory as dynamical consequences of the aether's properties. An elegant theory, special relativity yielded its own consequences, such as the equivalence of mass and energy transforming into one another and the resolution of the paradox that an excitation of the electromagnetic field could be viewed in one reference frame as electricity, but in another as magnetism.
Einstein sought to generalize the invariance principle to all reference frames, whether inertial or accelerating. Rejecting Newtonian gravitation—a central force acting instantly at a distance—Einstein presumed a gravitational field. In 1907, Einstein's equivalence principle implied that a free fall within a uniform gravitational field is equivalent to inertial motion. By extending special relativity's effects into three dimensions, general relativity extended length contraction into space contraction, conceiving of 4D space-time as the gravitational field that alters geometrically and sets all local objects' pathways. Even massless energy exerts gravitational motion on local objects by "curving" the geometrical "surface" of 4D space-time. Yet unless the energy is vast, its relativistic effects of contracting space and slowing time are negligible when merely predicting motion. Although general relativity is embraced as the more explanatory theory via scientific realism, Newton's theory remains successful as merely a predictive theory via instrumentalism. To calculate trajectories, engineers and NASA still use Newton's equations, which are simpler to operate.
Theories and laws
Both scientific laws and scientific theories are produced from the scientific method through the formation and testing of hypotheses, and can predict the behavior of the natural world. Both are also typically well-supported by observations and/or experimental evidence. However, scientific laws are descriptive accounts of how nature will behave under certain conditions. Scientific theories are broader in scope, and give overarching explanations of how nature works and why it exhibits certain characteristics. Theories are supported by evidence from many different sources, and may contain one or several laws.
A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence have been accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. A theory will always remain a theory; a law will always remain a law. Both theories and laws could potentially be falsified by countervailing evidence.
Theories and laws are also distinct from hypotheses. Unlike hypotheses, theories and laws may be simply referred to as scientific fact.
However, in science, theories are different from facts even when they are well supported. For example, evolution is both a theory and a fact.
About theories
Theories as axioms
The logical positivists thought of scientific theories as statements in a formal language. First-order logic is an example of a formal language. The logical positivists envisaged a similar scientific language. In addition to scientific theories, the language also included observation sentences ("the sun rises in the east"), definitions, and mathematical statements. The phenomena explained by the theories, if they could not be directly observed by the senses (for example, atoms and radio waves), were treated as theoretical concepts. In this view, theories function as axioms: predicted observations are derived from the theories much like theorems are derived in Euclidean geometry. However, the predictions are then tested against reality to verify the predictions, and the "axioms" can be revised as a direct result.
The phrase "the received view of theories" is used to describe this approach. Terms commonly associated with it are "linguistic" (because theories are components of a language) and "syntactic" (because a language has rules about how symbols can be strung together). Problems in defining this kind of language precisely, e.g., are objects seen in microscopes observed or are they theoretical objects, led to the effective demise of logical positivism in the 1970s.
Theories as models
The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country.
In this approach, theories are a specific category of models that fulfill the necessary criteria (see above). One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses. The model parameters, e.g., Newton's Law of Gravitation, determine how the positions and velocities change with time. This model can then be tested to see whether it accurately predicts future observations; astronomers can verify that the positions of the model's objects over time match the actual positions of the planets. For most planets, the Newtonian model's predictions are accurate; for Mercury, it is slightly inaccurate and the model of general relativity must be used instead.
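As a minimal sketch of this view (all numerical values and names below are illustrative, not drawn from any particular source), the solar-system model described above can be written down directly: abstract objects carry positions, velocities, and masses, and Newton's law of gravitation determines how those properties change with time. Comparing the integrated positions against astronomical observations is what it means to test the model.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # mass of the central body, kg

def step(x, y, vx, vy, dt):
    """Advance one planet a single time step under Newtonian gravity
    (semi-implicit Euler: update velocity first, then position)."""
    r = math.hypot(x, y)
    ax, ay = -G * M_SUN * x / r**3, -G * M_SUN * y / r**3
    vx, vy = vx + ax * dt, vy + ay * dt
    return x + vx * dt, y + vy * dt, vx, vy

# Roughly Earth-like initial conditions: 1 AU from the Sun, circular orbital speed.
x, y, vx, vy = 1.496e11, 0.0, 0.0, 29_780.0
dt = 3600.0                      # one-hour steps
for _ in range(24 * 365):        # integrate for about one year
    x, y, vx, vy = step(x, y, vx, vy, dt)

# The "test": after one simulated year the planet should be back near its
# starting position; a persistent discrepancy (as with Mercury's perihelion)
# would count against the model.
print(f"position after one year: ({x:.3e}, {y:.3e}) m")
```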
The word "semantic" refers to the way that a model represents the real world. The representation (literally, "re-presentation") describes particular aspects of a phenomenon or the manner of interaction among a set of phenomena. For instance, a scale model of a house or of a solar system is clearly not an actual house or an actual solar system; the aspects of an actual house or an actual solar system represented in a scale model are, only in certain limited ways, representative of the actual entity. A scale model of a house is not a house; but to someone who wants to learn about houses, analogous to a scientist who wants to understand reality, a sufficiently detailed scale model may suffice.
Differences between theory and model
Several commentators have stated that the distinguishing characteristic of theories is that they are explanatory as well as descriptive, while models are only descriptive (although still predictive in a more limited sense). Philosopher Stephen Pepper also distinguished between theories and models, and said in 1948 that general models and theories are predicated on a "root" metaphor that constrains how scientists theorize and model a phenomenon and thus arrive at testable hypotheses.
Engineering practice makes a distinction between "mathematical models" and "physical models"; the cost of fabricating a physical model can be minimized by first creating a mathematical model using a computer software package, such as a computer-aided design tool. The component parts are each themselves modelled, and the fabrication tolerances are specified. An exploded view drawing is used to lay out the fabrication sequence. Simulation packages for displaying each of the subassemblies allow the parts to be rotated and magnified in realistic detail. Software packages for creating the bill of materials for construction allow subcontractors to specialize in assembly processes, which spreads the cost of manufacturing machinery among multiple customers. See: Computer-aided engineering, Computer-aided manufacturing, and 3D printing
Assumptions in formulating theories
An assumption (or axiom) is a statement that is accepted without evidence. For example, assumptions can be used as premises in a logical argument. Isaac Asimov described assumptions as follows:
...it is incorrect to speak of an assumption as either true or false, since there is no way of proving it to be either (If there were, it would no longer be an assumption). It is better to consider assumptions as either useful or useless, depending on whether deductions made from them corresponded to reality...Since we must start somewhere, we must have assumptions, but at least let us have as few assumptions as possible.
Certain assumptions are necessary for all empirical claims (e.g. the assumption that reality exists). However, theories do not generally make assumptions in the conventional sense (statements accepted without evidence). While assumptions are often incorporated during the formation of new theories, these are either supported by evidence (such as from previously existing theories) or the evidence is produced in the course of validating the theory. This may be as simple as observing that the theory makes accurate predictions, which is evidence that any assumptions made at the outset are correct or approximately correct under the conditions tested.
Conventional assumptions, without evidence, may be used if the theory is only intended to apply when the assumption is valid (or approximately valid). For example, the special theory of relativity assumes an inertial frame of reference. The theory makes accurate predictions when the assumption is valid, and does not make accurate predictions when the assumption is not valid. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).
The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian assumere and Spanish sumir. The first sense of "assume" in the OED is "to take unto (oneself), receive, accept, adopt". The term was originally employed in religious contexts as in "to receive up into heaven", especially "the reception of the Virgin Mary into heaven, with body preserved from corruption", (1297 CE) but it was also simply used to refer to "receive into association" or "adopt into partnership". Moreover, other senses of assumere included (i) "investing oneself with (an attribute)", (ii) "to undertake" (especially in Law), (iii) "to take to oneself in appearance only, to pretend to possess", and (iv) "to suppose a thing to be" (all senses from OED entry on "assume"; the OED entry for "assumption" is almost perfectly symmetrical in senses). Thus, "assumption" connotes other associations than the contemporary standard sense of "that which is assumed or taken for granted; a supposition, postulate" (only the 11th of 12 senses of "assumption", and the 10th of 11 senses of "assume").
Descriptions
From philosophers of science
Karl Popper described the characteristics of a scientific theory as follows:
It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
Every "good" scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence".)
Some genuinely testable theories, when found to be false, might still be upheld by their admirers—for example by introducing post hoc (after the fact) some auxiliary hypothesis or assumption, or by reinterpreting the theory post hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status, by tampering with evidence. The temptation to tamper can be minimized by first taking the time to write down the testing protocol before embarking on the scientific work.
Popper summarized these statements by saying that the central criterion of the scientific status of a theory is its "falsifiability, or refutability, or testability". Echoing this, Stephen Hawking states, "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations." He also discusses the "unprovable but falsifiable" nature of theories, which is a necessary consequence of inductive logic, and that "you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory".
Several philosophers and historians of science have, however, argued that Popper's definition of theory as a set of falsifiable statements is wrong because, as Philip Kitcher has pointed out, if one took a strictly Popperian view of "theory", observations of Uranus when first discovered in 1781 would have "falsified" Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed.
Kitcher agrees with Popper that "There is surely something right in the idea that a science can succeed only if it can fail." He also says that scientific theories include statements that cannot be falsified, and that good theories must also be creative. He insists we view scientific theories as an "elaborate collection of statements", some of which are not falsifiable, while others—those he calls "auxiliary hypotheses"—are.
According to Kitcher, good scientific theories must have three features:
Unity: "A science should be unified.... Good theories consist of just one problem-solving strategy, or a small family of problem-solving strategies, that can be applied to a wide range of problems."
Fecundity: "A great scientific theory, like Newton's, opens up new areas of research.... Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry.... Typically, a flourishing science is incomplete. At any time, it raises more questions than it can currently answer. But incompleteness is not vice. On the contrary, incompleteness is the mother of fecundity.... A good theory should be productive; it should raise new questions and presume those questions can be answered without giving up its problem-solving strategies."
Auxiliary hypotheses that are independently testable: "An auxiliary hypothesis ought to be testable independently of the particular problem it is introduced to solve, independently of the theory it is designed to save." (For example, the evidence for the existence of Neptune is independent of the anomalies in Uranus's orbit.)
Like other definitions of theories, including Popper's, Kitcher makes it clear that a theory must include statements that have observational consequences. But, like the observation of irregularities in the orbit of Uranus, falsification is only one possible consequence of observation. The production of new hypotheses is another possible and equally important result.
Analogies and metaphors
The concept of a scientific theory has also been described using analogies and metaphors. For example, the logical empiricist Carl Gustav Hempel likened the structure of a scientific theory to a "complex spatial network:"
Its terms are represented by the knots, while the threads connecting the latter correspond, in part, to the definitions and, in part, to the fundamental and derivative hypotheses included in the theory. The whole system floats, as it were, above the plane of observation and is anchored to it by the rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of these interpretive connections, the network can function as a scientific theory: From certain observational data, we may ascend, via an interpretive string, to some point in the theoretical network, thence proceed, via definitions and hypotheses, to other points, from which another interpretive string permits a descent to the plane of observation.
Michael Polanyi made an analogy between a theory and a map:
A theory is something other than myself. It may be set out on paper as a system of rules, and it is the more truly a theory the more completely it can be put down in such terms. Mathematical theory reaches the highest perfection in this respect. But even a geographical map fully embodies in itself a set of strict rules for finding one's way through a region of otherwise uncharted experience. Indeed, all theory may be regarded as a kind of map extended over space and time.
A scientific theory can also be thought of as a book that captures the fundamental information about the world, a book that must be researched, written, and shared. In 1623, Galileo Galilei wrote:
Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.
The book metaphor could also be applied in the following passage, by the contemporary philosopher of science Ian Hacking:
I myself prefer an Argentine fantasy. God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. No book is redundant. For every book there is some humanly accessible bit of Nature such that that book, and no other, makes possible the comprehension, prediction and influencing of what is going on...Leibniz said that God chose a world which maximized the variety of phenomena while choosing the simplest laws. Exactly so: but the best way to maximize phenomena and have simplest laws is to have the laws inconsistent with each other, each applying to this or that but none applying to all.
In physics
In physics, the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries—like equality of locations in space or in time, or identity of electrons, etc.)—that is capable of producing experimental predictions for a given category of physical systems. A good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in a form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered to be adequately tested, with new ones always in the making and perhaps untested. An example of the latter might be the radiation reaction force. As of 2009, its effects on the periodic motion of charges are detectable in synchrotrons, but only as averaged effects over time. Some researchers are now considering experiments that could observe these effects at the instantaneous level (i.e. not averaged over time).
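For concreteness, the "few equations" referred to above are Maxwell's equations, which in SI differential form read

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$

Every classical electromagnetic prediction, from circuit behavior to the propagation of light, is derivable from these four relations together with the Lorentz force law.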
Examples
Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many fields, the categorization is not absolute.
Biology: cell theory, theory of evolution (modern evolutionary synthesis), abiogenesis, germ theory, particulate inheritance theory, dual inheritance theory, Young–Helmholtz theory, opponent process, cohesion-tension theory
Chemistry: collision theory, kinetic theory of gases, Lewis theory, molecular theory, molecular orbital theory, transition state theory, valence bond theory
Physics: atomic theory, Big Bang theory, Dynamo theory, perturbation theory, theory of relativity (successor to classical mechanics), quantum field theory
Earth science: Climate change theory (from climatology), plate tectonics theory (from geology), theories of the origin of the Moon, theories for the Moon illusion
Astronomy: Self-gravitating system, Stellar evolution, solar nebular model, stellar nucleosynthesis
Further reading
Essay by a British/American meteorologist and NASA astronaut on anthropogenic global warming and "theory".
Anthropic principle | The anthropic principle, also known as the observation selection effect, is the hypothesis that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life.
There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail.
Definition and basis
The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility.
The term anthropic in "anthropic principle" has been argued to be a misnomer. While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved.
The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. The anthropic principle is often criticized for lacking falsifiability, and its critics may therefore point out that it is a non-scientific concept, even though the weak anthropic principle, "conditions that are observed in the universe must allow the observer to exist", is "easy" to support in mathematics and philosophy (i.e., it is a tautology or truism). However, building a substantive argument on a tautological foundation is problematic. Stronger variants of the anthropic principle are not tautologies and thus make claims considered controversial by some and that are contingent upon empirical verification.
Anthropic observations
In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one-tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory.
Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life.
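For reference, the critical density in the standard Friedmann cosmology is the textbook expression

$$\rho_c = \frac{3 H_0^2}{8 \pi G} \approx 9 \times 10^{-27}\ \mathrm{kg\,m^{-3}} \quad \text{for } H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}},$$

so the "Dicke coincidences" argument amounts to the claim that the actual mean density cannot stray far from this value in a universe old enough to contain observers.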
The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
Origin
The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang).
Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics.
Roger Penrose explained the weak form as follows:
One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?"
Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section.
Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem."
Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang, Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann, it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be.
Variants
Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space.
Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The anthropic cosmological principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows:
Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike Carter they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP.
Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history." This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler:
"There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'."
This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves.
"Observers are necessary to bring the Universe into being."
Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.
"An ensemble of other different universes is necessary for the existence of our Universe."
By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes:
Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice.
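As a toy illustration of how reference-class reasoning works (all numbers below are invented for the example), the self-sampling assumption weights credence in each candidate universe by how many observer-moments that universe contributes to the chosen reference class:

```python
# Toy illustration (hypothetical numbers): self-sampling over observer-moments.
# Two candidate universes are a priori equally likely; they differ in how many
# observer-moments they contain. Under the SSSA, finding yourself to be *some*
# observer-moment shifts credence toward the universe with more of them.
prior = {"universe_A": 0.5, "universe_B": 0.5}
observer_moments = {"universe_A": 10, "universe_B": 1_000}

weighted = {u: prior[u] * observer_moments[u] for u in prior}
total = sum(weighted.values())
posterior = {u: w / total for u, w in weighted.items()}

print(posterior)  # {'universe_A': ~0.0099, 'universe_B': ~0.9901}
# Narrowing the reference class (e.g., to observer-moments "like yours")
# changes these counts drastically, which is the ambiguity noted above.
```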
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1. It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary.
Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe".
Character of anthropic reasoning
Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder.
Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions.
The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle."
The modern form of a design argument is put forth by intelligent design. Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys, argue that the anthropic principle as conventionally stated actually undermines intelligent design.
Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate:
The absurd universe: Our universe just happens to be the way it is.
The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded.
The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist.
Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence.
The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind.
The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP).
The fake universe: Humans live inside a virtual reality simulation.
Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).
Clearly each of these hypotheses resolves some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994).
The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links.
Observational evidence
No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist.
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following:
Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants;
Various theories for generating multiple universes will prove robust;
Evidence that the universe is fine tuned will continue to accumulate;
No life with a non-carbon chemistry will be discovered;
Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe.
Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life.
Probabilistic predictions of parameter values can be made given:
a particular multiverse with a "measure", i.e. a well defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X₀) dX₀ that X is in the range X₀ < X < X₀ + dX₀), and
an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).
The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.
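A minimal numerical sketch of this recipe (the measure and observer-count functions below are invented solely for illustration):

```python
import numpy as np

# Anthropic weighting of a parameter X, as described above: P(X) is an assumed
# prior "density of universes"; N(X) is an assumed observer count per universe,
# nonzero only inside a life-permitting window. The probability of *observing*
# X is proportional to N(X) * P(X).
X = np.linspace(0.0, 10.0, 1001)
dx = X[1] - X[0]
prior = np.exp(-0.5 * (X / 3.0) ** 2)                # hypothetical P(X), peaked at X = 0
observers = np.where((X > 1.0) & (X < 4.0),
                     (X - 1.0) * (4.0 - X), 0.0)     # hypothetical N(X)

posterior = prior * observers
posterior /= posterior.sum() * dx                    # normalize to a density

expected = (X * posterior).sum() * dx
print(f"expected observed value of X: {expected:.2f}")  # well away from the "tuned" value 0
```

The point illustrated is the "no over-tuning" feature: even though the assumed measure peaks at the perfectly tuned value X = 0, the expected observed value lands inside the life-permitting window rather than at the peak.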
One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers.
Applications of the principle
The nucleosynthesis of carbon-12
Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction.
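A back-of-the-envelope reconstruction of the energetics (using standard atomic mass values; this is not Hoyle's original calculation) shows where a figure of this size comes from:

$$\left[3\,m({}^{4}\mathrm{He}) - m({}^{12}\mathrm{C})\right]c^2 = (3 \times 4.002602 - 12.000000)\,\mathrm{u} \times 931.494\ \mathrm{MeV/u} \approx 7.27\ \mathrm{MeV},$$

so the three-alpha threshold sits about 7.3 MeV above the carbon-12 ground state, and a resonance slightly above it, reachable with stellar-core thermal energies, is what the synthesis requires; the Hoyle state was indeed found at about 7.65 MeV.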
However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance.
Cosmic inflation
Don Page criticized the entire theory of cosmic inflation as follows. He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin, must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require.
String theory
String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed.
Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Lubos Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present.
Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe.
Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their work suggests that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life.
Dimensions of spacetime
There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue.
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204).
In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.
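The orbital part of this argument admits a compact textbook reconstruction (a sketch under standard Newtonian assumptions, not Ehrenfest's original derivation). If gravity satisfies Gauss's law in N spatial dimensions, the attractive force falls off as 1/r^(N-1), so a planet of mass m with angular momentum L moves in the effective radial potential

    V_{\mathrm{eff}}(r) = -\frac{k}{r^{N-2}} + \frac{L^2}{2 m r^2}, \qquad N \neq 2,

where k is a positive constant. A stable circular orbit requires a local minimum of this potential, and since the centrifugal term scales as 1/r^2, a minimum exists only when the attractive term is shallower, that is, when N - 2 < 2. Bound planetary orbits are therefore stable only for N ≤ 3, matching Ehrenfest's conclusion.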
Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if T > 1, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N < 3, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us.
On the other hand, by considering the formation of black holes from an ideal monatomic gas under its own self-gravity, Wei-Xiang Feng showed that (3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10^21 solar masses, due to the small positivity of the cosmological constant observed.
In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.
Metaphysical interpretations
Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the older notion of creatio continua. From a strictly secular, humanist perspective, it also allows human beings to be put back at the center, an anthropogenic shift in cosmology. Karl W. Giberson has laconically stated that
William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point.
The anthropic cosmological principle
A thorough extant study of the anthropic principle is the book The Anthropic Cosmological Principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way.
The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks.
Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.
Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas.
In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP):
Reception and controversies
Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects.
A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another.
Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result.
Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa.
Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc.
Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe.
The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours:
See also
(discussing the anthropic principle)
(an immediate precursor of the idea)
(work of Alejandro Jenkins)
Notes
Footnotes
References
5 chapters available online.
Stenger, Victor J. (1999). "Anthropic design". The Skeptical Inquirer 23 (August 31, 1999): 40–43.
Mosterín, Jesús (2005). "Anthropic explanations in cosmology". In P. Hájek, L. Valdés and D. Westerståhl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 12th International Congress of the LMPS. London: King's College Publications, pp. 441–473.
A simple anthropic argument for why there are 3 spatial and 1 temporal dimensions.
Shows that some of the common criticisms of anthropic principle based on its relationship with numerology or the theological design argument are wrong.
External links
Nick Bostrom: web site devoted to the anthropic principle.
Friederich, Simon. Fine-tuning, review article of the discussion about fine-tuning, highlighting the role of the anthropic principles.
Gijsbers, Victor. (2000). Theistic anthropic principle refuted – Positive Atheism magazine.
Chown, Marcus, Anything Goes, New Scientist, 6 June 1998. On Max Tegmark's work.
Stephen Hawking, Steven Weinberg, Alexander Vilenkin, David Gross and Lawrence Krauss: Debate on anthropic reasoning Kavli-CERCA conference video archive.
Sober, Elliott R. (2009). "Absence of evidence and evidence of absence – Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads". Philosophical Studies 143: 63–90.
"Anthropic coincidence" – The anthropic controversy as a segue to Lee Smolin's theory of cosmological natural selection.
Leonard Susskind and Lee Smolin debate the anthropic principle.
Debate among scientists on arxiv.org.
Evolutionary probability and fine tuning
Benevolent design and the anthropic principle at MathPages
Critical review of "The Privileged Planet"
The anthropic principle – a review.
Berger, Daniel, 2002, "An impertinent résumé of The Anthropic Cosmological Principle." A critique of Barrow & Tipler.
Jürgen Schmidhuber: Papers on algorithmic theories of everything and the anthropic principle's lack of predictive power.
Paul Davies: Cosmic jackpot – Interview about the anthropic principle (starts at 40 min), 15 May 2007.
Astronomical hypotheses
Concepts in epistemology
Physical cosmology
Principles
Religion and science
Religious epistemology
Religious epistemology broadly covers religious approaches to epistemological questions, or attempts to understand the epistemological issues that come from religious belief. The questions asked by epistemologists apply to religious beliefs and propositions whether they seem rational, justified, warranted, reasonable, based on evidence and so on. Religious views also influence epistemological theories, such as in the case of Reformed epistemology.
Reformed epistemology has mainly developed in contemporary Christian religious epistemology, as in the work of Alvin Plantinga (born 1932), William P. Alston (1921-2009), Nicholas Wolterstorff (born 1932) and Kelly James Clark, as a critique of and alternative to the idea of "evidentialism" of the sort proposed by W. K. Clifford (1845-1879). Alvin Plantinga, for instance, is critical of the evidentialist analysis of knowledge provided by Richard Feldman and by Earl Conee.
D. Z. Phillips (1934-2006) states that the argument of the reformed epistemologists furthers and challenges a view he dubs "foundationalism":
Much work in recent epistemology of religion goes beyond debates on foundationalism and reformed epistemology to consider contemporary issues deriving from social epistemology (especially concerning the epistemology of testimony, or the epistemology of disagreement), or formal epistemology's use of probability theory. Other notable work draws on the idea that knowing God is akin to knowing a person, which is not reducible to knowing propositions about a person.
Some work in recent epistemology of religion discusses various challenges from psychology, cognitive science or evolutionary biology to the rationality or justification of religious beliefs. Some argue that evolutionary explanations of religious belief undermine its rationality. Others respond to these arguments.
See also
Pramana
Faith and rationality
Fideism
Pascal's wager
Reformed epistemology
Skeptical theism
References
External links
"Revitalizing the Epistemology of Religion". Oxford University Press blog, retrieved June 17, 2018.
Epistemology of religion
Rationality
Rationality is the quality of being guided by or based on reason. In this regard, a person acts rationally if they have a good reason for what they do, or a belief is rational if it is based on strong evidence. This quality can apply to an ability, as in a rational animal, to a psychological process, like reasoning, to mental states, such as beliefs and intentions, or to persons who possess these other forms of rationality. A thing that lacks rationality is either arational, if it is outside the domain of rational evaluation, or irrational, if it belongs to this domain but does not fulfill its standards.
There are many discussions about the essential features shared by all forms of rationality. According to reason-responsiveness accounts, to be rational is to be responsive to reasons. For example, dark clouds are a reason for taking an umbrella, which is why it is rational for an agent to do so in response. An important rival to this approach is the coherence-based account, which defines rationality as internal coherence among the agent's mental states. Many rules of coherence have been suggested in this regard, for example, that one should not hold contradictory beliefs or that one should intend to do something if one believes that one should do it. Goal-based accounts characterize rationality in relation to goals, such as acquiring truth in the case of theoretical rationality. Internalists believe that rationality depends only on the person's mind. Externalists contend that external factors may also be relevant. Debates about the normativity of rationality concern the question of whether one should always be rational. A further discussion is whether rationality requires that all beliefs be reviewed from scratch rather than trusting pre-existing beliefs.
Various types of rationality are discussed in the academic literature. The most influential distinction is between theoretical and practical rationality. Theoretical rationality concerns the rationality of beliefs. Rational beliefs are based on evidence that supports them. Practical rationality pertains primarily to actions. This includes certain mental states and events preceding actions, like intentions and decisions. In some cases, the two can conflict, as when practical rationality requires that one adopts an irrational belief. Another distinction is between ideal rationality, which demands that rational agents obey all the laws and implications of logic, and bounded rationality, which takes into account that this is not always possible since the computational power of the human mind is too limited. Most academic discussions focus on the rationality of individuals. This contrasts with social or collective rationality, which pertains to collectives and their group beliefs and decisions.
Rationality is important for solving all kinds of problems in order to efficiently reach one's goal. It is relevant to and discussed in many disciplines. In ethics, one question is whether one can be rational without being moral at the same time. Psychology is interested in how psychological processes implement rationality. This also includes the study of failures to do so, as in the case of cognitive biases. Cognitive and behavioral sciences usually assume that people are rational enough to predict how they think and act. Logic studies the laws of correct arguments. These laws are highly relevant to the rationality of beliefs. A very influential conception of practical rationality is given in decision theory, which states that a decision is rational if the chosen option has the highest expected utility. Other relevant fields include game theory, Bayesianism, economics, and artificial intelligence.
Definition and semantic field
In its most common sense, rationality is the quality of being guided by reasons or being reasonable. For example, a person who acts rationally has good reasons for what they do. This usually implies that they reflected on the possible consequences of their action and the goal it is supposed to realize. In the case of beliefs, it is rational to believe something if the agent has good evidence for it and it is coherent with the agent's other beliefs. While actions and beliefs are the most paradigmatic forms of rationality, the term is used both in ordinary language and in many academic disciplines to describe a wide variety of things, such as persons, desires, intentions, decisions, policies, and institutions. Because of this variety in different contexts, it has proven difficult to give a unified definition covering all these fields and usages. In this regard, different fields often focus their investigation on one specific conception, type, or aspect of rationality without trying to cover it in its most general sense.
These different forms of rationality are sometimes divided into abilities, processes, mental states, and persons. For example, when it is claimed that humans are rational animals, this usually refers to the ability to think and act in reasonable ways. It does not imply that all humans are rational all the time: this ability is exercised in some cases but not in others. On the other hand, the term can also refer to the process of reasoning that results from exercising this ability. Often many additional activities of the higher cognitive faculties are included as well, such as acquiring concepts, judging, deliberating, planning, and deciding as well as the formation of desires and intentions. These processes usually effect some kind of change in the thinker's mental states. In this regard, one can also talk of the rationality of mental states, like beliefs and intentions. A person who possesses these forms of rationality to a sufficiently high degree may themselves be called rational. In some cases, non-mental results of rational processes may also qualify as rational. For example, the arrangement of products in a supermarket can be rational if it is based on a rational plan.
The term "rational" has two opposites: irrational and arational. Arational things are outside the domain of rational evaluation, like digestive processes or the weather. Things within the domain of rationality are either rational or irrational depending on whether they fulfill the standards of rationality. For example, beliefs, actions, or general policies are rational if there is a good reason for them and irrational otherwise. It is not clear in all cases what belongs to the domain of rational assessment. For example, there are disagreements about whether desires and emotions can be evaluated as rational and irrational rather than arational. The term "irrational" is sometimes used in a wide sense to include cases of arationality.
The meaning of the terms "rational" and "irrational" in academic discourse often differs from how they are used in everyday language. Examples of behaviors considered irrational in ordinary discourse are giving into temptations, going out late even though one has to get up early in the morning, smoking despite being aware of the health risks, or believing in astrology. In the academic discourse, on the other hand, rationality is usually identified with being guided by reasons or following norms of internal coherence. Some of the earlier examples may qualify as rational in the academic sense depending on the circumstances. Examples of irrationality in this sense include cognitive biases and violating the laws of probability theory when assessing the likelihood of future events. This article focuses mainly on irrationality in the academic sense.
The terms "rationality", "reason", and "reasoning" are frequently used as synonyms. But in technical contexts, their meanings are often distinguished. Reason is usually understood as the faculty responsible for the process of reasoning. This process aims at improving mental states. Reasoning tries to ensure that the norms of rationality obtain. It differs from rationality nonetheless since other psychological processes besides reasoning may have the same effect. Rationality derives etymologically from the Latin term ratio, meaning reason.
Disputes about the concept of rationality
There are many disputes about the essential characteristics of rationality. It is often understood in relational terms: something, like a belief or an intention, is rational because of how it is related to something else. But there are disagreements as to what it has to be related to and in what way. For reason-based accounts, the relation to a reason that justifies or explains the rational state is central. For coherence-based accounts, the relation of coherence between mental states matters. There is a lively discussion in the contemporary literature on whether reason-based accounts or coherence-based accounts are superior. Some theorists also try to understand rationality in relation to the goals it tries to realize.
Other disputes in this field concern whether rationality depends only on the agent's mind or also on external factors, whether rationality requires a review of all one's beliefs from scratch, and whether we should always be rational.
Based on reason-responsiveness
A common idea of many theories of rationality is that it can be defined in terms of reasons. On this view, to be rational means to respond correctly to reasons. For example, the fact that a food is healthy is a reason to eat it. So this reason makes it rational for the agent to eat the food. An important aspect of this interpretation is that it is not sufficient to merely act accidentally in accordance with reasons. Instead, responding to reasons implies that one acts intentionally because of these reasons.
Some theorists understand reasons as external facts. This view has been criticized based on the claim that, in order to respond to reasons, people have to be aware of them, i.e. they have some form of epistemic access. But lacking this access is not automatically irrational. In one example by John Broome, the agent eats a fish contaminated with salmonella, which is a strong reason against eating the fish. But since the agent could not have known this fact, eating the fish is rational for them. Because of such problems, many theorists have opted for an internalist version of this account. This means that the agent does not need to respond to reasons in general, but only to reasons they have or possess. The success of such approaches depends a lot on what it means to have a reason and there are various disagreements on this issue. A common approach is to hold that this access is given through the possession of evidence in the form of cognitive mental states, like perceptions and knowledge. A similar version states that "rationality consists in responding correctly to beliefs about reasons". So it is rational to bring an umbrella if the agent has strong evidence that it is going to rain. But without this evidence, it would be rational to leave the umbrella at home, even if, unbeknownst to the agent, it is going to rain. These versions avoid the previous objection since rationality no longer requires the agent to respond to external factors of which they could not have been aware.
A problem faced by all forms of reason-responsiveness theories is that there are usually many reasons relevant and some of them may conflict with each other. So while salmonella contamination is a reason against eating the fish, its good taste and the desire not to offend the host are reasons in favor of eating it. This problem is usually approached by weighing all the different reasons. This way, one does not respond directly to each reason individually but instead to their weighted sum. Cases of conflict are thus solved since one side usually outweighs the other. So despite the reasons cited in favor of eating the fish, the balance of reasons stands against it, since avoiding a salmonella infection is a much weightier reason than the other reasons cited. This can be expressed by stating that rational agents pick the option favored by the balance of reasons.
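This weighing metaphor can be made concrete with a small sketch in Python (the numeric weights below are invented for illustration; nothing in the reason-responsiveness account fixes particular values):

    # A toy model of weighing reasons: each reason gets a hypothetical weight,
    # positive if it favors eating the fish and negative if it opposes it.
    reasons = {
        "good taste": 2,
        "not offending the host": 3,
        "risk of salmonella infection": -20,
    }
    balance = sum(reasons.values())
    print(balance)      # -15
    print(balance > 0)  # False: the balance of reasons opposes eating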
However, other objections to the reason-responsiveness account are not so easily solved. They often focus on cases where reasons require the agent to be irrational, leading to a rational dilemma. For example, if terrorists threaten to blow up a city unless the agent forms an irrational belief, this is a very weighty reason to do all in one's power to violate the norms of rationality.
Based on rules of coherence
An influential rival to the reason-responsiveness account understands rationality as internal coherence. On this view, a person is rational to the extent that their mental states and actions are coherent with each other. Diverse versions of this approach exist that differ in how they understand coherence and what rules of coherence they propose. A general distinction in this regard is between negative and positive coherence. Negative coherence is an uncontroversial aspect of most such theories: it requires the absence of contradictions and inconsistencies. This means that the agent's mental states do not clash with each other. In some cases, inconsistencies are rather obvious, as when a person believes that it will rain tomorrow and that it will not rain tomorrow. In complex cases, inconsistencies may be difficult to detect, for example, when a person believes in the axioms of Euclidean geometry and is nonetheless convinced that it is possible to square the circle. Positive coherence refers to the support that different mental states provide for each other. For example, there is positive coherence between the belief that there are eight planets in the solar system and the belief that there are fewer than ten planets in the solar system: the former belief implies the latter. Other types of support through positive coherence include explanatory and causal connections.
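A minimal sketch of a negative-coherence check, under the simplifying assumption that beliefs can be represented as proposition-polarity pairs (as the squaring-the-circle example shows, real inconsistencies are rarely this easy to detect):

    def negatively_coherent(beliefs):
        """A belief set violates negative coherence if it both affirms
        and denies the same proposition."""
        return not any((prop, not polarity) in beliefs
                       for prop, polarity in beliefs)

    beliefs = {("it will rain tomorrow", True),
               ("it will rain tomorrow", False)}
    print(negatively_coherent(beliefs))  # False: a direct contradiction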
Coherence-based accounts are also referred to as rule-based accounts since the different aspects of coherence are often expressed in precise rules. In this regard, to be rational means to follow the rules of rationality in thought and action. According to the enkratic rule, for example, rational agents are required to intend what they believe they ought to do. This requires coherence between beliefs and intentions. The norm of persistence states that agents should retain their intentions over time. This way, earlier mental states cohere with later ones. It is also possible to distinguish different types of rationality, such as theoretical or practical rationality, based on the different sets of rules they require.
One problem with such coherence-based accounts of rationality is that the norms can enter into conflict with each other, so-called rational dilemmas. For example, if the agent has a pre-existing intention that turns out to conflict with their beliefs, then the enkratic norm requires them to change it, which is disallowed by the norm of persistence. This suggests that, in cases of rational dilemmas, it is impossible to be rational, no matter which norm is privileged. Some defenders of coherence theories of rationality have argued that, when formulated correctly, the norms of rationality cannot enter into conflict with each other. That means that rational dilemmas are impossible. This is sometimes tied to additional non-trivial assumptions, such that ethical dilemmas also do not exist. A different response is to bite the bullet and allow that rational dilemmas exist. This has the consequence that, in such cases, rationality is not possible for the agent and theories of rationality cannot offer guidance to them. These problems are avoided by reason-responsiveness accounts of rationality since they "allow for rationality despite conflicting reasons but [coherence-based accounts] do not allow for rationality despite conflicting requirements". Some theorists suggest a weaker criterion of coherence to avoid cases of necessary irrationality: rationality requires not to obey all norms of coherence but to obey as many norms as possible. So in rational dilemmas, agents can still be rational if they violate the minimal number of rational requirements.
Another criticism rests on the claim that coherence-based accounts are either redundant or false. On this view, either the rules recommend the same option as the balance of reasons or a different option. If they recommend the same option, they are redundant. If they recommend a different option, they are false since, according to its critics, there is no special value in sticking to rules against the balance of reasons.
Based on goals
A different approach characterizes rationality in relation to the goals it aims to achieve. In this regard, theoretical rationality aims at epistemic goals, like acquiring truth and avoiding falsehood. Practical rationality, on the other hand, aims at non-epistemic goals, like moral, prudential, political, economic, or aesthetic goals. This is usually understood in the sense that rationality follows these goals but does not set them. So rationality may be understood as a "minister without portfolio" since it serves goals external to itself. This issue has been the source of an important historical discussion between David Hume and Immanuel Kant. The slogan of Hume's position is that "reason is the slave of the passions". This is often understood as the claim that rationality concerns only how to reach a goal but not whether the goal should be pursued at all. So people with perverse or weird goals may still be perfectly rational. This position is opposed by Kant, who argues that rationality requires having the right goals and motives.
According to William Frankena there are four conceptions of rationality based on the goals it tries to achieve. They correspond to egoism, utilitarianism, perfectionism, and intuitionism. According to the egoist perspective, rationality implies looking out for one's own happiness. This contrasts with the utilitarian point of view, which states that rationality entails trying to contribute to everyone's well-being or to the greatest general good. For perfectionism, a certain ideal of perfection, either moral or non-moral, is the goal of rationality. According to the intuitionist perspective, something is rational "if and only if [it] conforms to self-evident truths, intuited by reason". These different perspectives diverge a lot concerning the behavior they prescribe. One problem for all of them is that they ignore the role of the evidence or information possessed by the agent. In this regard, it matters for rationality not just whether the agent acts efficiently towards a certain goal but also what information they have and how their actions appear reasonable from this perspective. Richard Brandt responds to this idea by proposing a conception of rationality based on relevant information: "Rationality is a matter of what would survive scrutiny by all relevant information." This implies that the subject repeatedly reflects on all the relevant facts, including formal facts like the laws of logic.
Internalism and externalism
An important contemporary discussion in the field of rationality is between internalists and externalists. Both sides agree that rationality demands and depends in some sense on reasons. They disagree on what reasons are relevant or how to conceive those reasons. Internalists understand reasons as mental states, for example, as perceptions, beliefs, or desires. On this view, an action may be rational because it is in tune with the agent's beliefs and realizes their desires. Externalists, on the other hand, see reasons as external factors about what is good or right. They state that whether an action is rational also depends on its actual consequences. The difference between the two positions is that internalists affirm and externalists reject the claim that rationality supervenes on the mind. This claim means that it only depends on the person's mind whether they are rational and not on external factors. So for internalism, two persons with the same mental states would both have the same degree of rationality independent of how different their external situation is. Because of this limitation, rationality can diverge from actuality. So if the agent has a lot of misleading evidence, it may be rational for them to turn left even though the actually correct path goes right.
Bernard Williams has criticized externalist conceptions of rationality based on the claim that rationality should help explain what motivates the agent to act. This is easy for internalism but difficult for externalism since external reasons can be independent of the agent's motivation. Externalists have responded to this objection by distinguishing between motivational and normative reasons. Motivational reasons explain why someone acts the way they do while normative reasons explain why someone ought to act in a certain way. Ideally, the two overlap, but they can come apart. For example, liking chocolate cake is a motivational reason for eating it while having high blood pressure is a normative reason for not eating it. The problem of rationality is primarily concerned with normative reasons. This is especially true for various contemporary philosophers who hold that rationality can be reduced to normative reasons. The distinction between motivational and normative reasons is usually accepted, but many theorists have raised doubts that rationality can be identified with normativity. On this view, rationality may sometimes recommend suboptimal actions, for example, because the agent lacks important information or has false information. In this regard, discussions between internalism and externalism overlap with discussions of the normativity of rationality.
Relativity
An important implication of internalist conceptions is that rationality is relative to the person's perspective or mental states. Whether a belief or an action is rational usually depends on which mental states the person has. So carrying an umbrella for the walk to the supermarket is rational for a person believing that it will rain but irrational for another person who lacks this belief. According to Robert Audi, this can be explained in terms of experience: what is rational depends on the agent's experience. Since different people make different experiences, there are differences in what is rational for them.
Normativity
Rationality is normative in the sense that it sets up certain rules or standards of correctness: to be rational is to comply with certain requirements. For example, rationality requires that the agent does not have contradictory beliefs. Many discussions on this issue concern the question of what exactly these standards are. Some theorists characterize the normativity of rationality in the deontological terms of obligations and permissions. Others understand them from an evaluative perspective as good or valuable. A further approach is to talk of rationality based on what is praise- and blameworthy. It is important to distinguish the norms of rationality from other types of norms. For example, some forms of fashion prescribe that men do not wear bell-bottom trousers. Understood in the strongest sense, a norm prescribes what an agent ought to do or what they have most reason to do. The norms of fashion are not norms in this strong sense: that it is unfashionable does not mean that men ought not to wear bell-bottom trousers.
Most discussions of the normativity of rationality are interested in the strong sense, i.e. whether agents ought always to be rational. This is sometimes termed a substantive account of rationality in contrast to structural accounts. One important argument in favor of the normativity of rationality is based on considerations of praise- and blameworthiness. It states that we usually hold each other responsible for being rational and criticize each other when we fail to do so. This practice indicates that irrationality is some form of fault on the side of the subject that should not be the case. A strong counterexample to this position is due to John Broome, who considers the case of a fish an agent wants to eat. It contains salmonella, which is a decisive reason why the agent ought not to eat it. But the agent is unaware of this fact, which is why it is rational for them to eat the fish. So this would be a case where normativity and rationality come apart. This example can be generalized in the sense that rationality only depends on the reasons accessible to the agent or how things appear to them. What one ought to do, on the other hand, is determined by objectively existing reasons. In the ideal case, rationality and normativity may coincide, but they come apart either if the agent lacks access to a reason or if they have a mistaken belief about the presence of a reason. These considerations are summed up in the statement that rationality supervenes only on the agent's mind but normativity does not.
But there are also thought experiments in favor of the normativity of rationality. One, due to Frank Jackson, involves a doctor who receives a patient with a mild condition and has to prescribe one out of three drugs: drug A resulting in a partial cure, drug B resulting in a complete cure, or drug C resulting in the patient's death. The doctor's problem is that they cannot tell which of the drugs B and C results in a complete cure and which one in the patient's death. The objectively best case would be for the patient to get drug B, but it would be highly irresponsible for the doctor to prescribe it given the uncertainty about its effects. So the doctor ought to prescribe the less effective drug A, which is also the rational choice. This thought experiment indicates that rationality and normativity coincide since what is rational and what one ought to do depends on the agent's mind after all.
Some theorists have responded to these thought experiments by distinguishing between normativity and responsibility. On this view, critique of irrational behavior, like the doctor prescribing drug B, involves a negative evaluation of the agent in terms of responsibility but remains silent on normative issues. On a competence-based account, which defines rationality in terms of the competence of responding to reasons, such behavior can be understood as a failure to execute one's competence. But sometimes we are lucky and we succeed in the normative dimension despite failing to perform competently, i.e. rationally, due to being irresponsible. The opposite can also be the case: bad luck may result in failure despite a responsible, competent performance. This explains how rationality and normativity can come apart despite our practice of criticizing irrationality.
Normative and descriptive theories
The concept of normativity can also be used to distinguish different theories of rationality. Normative theories explore the normative nature of rationality. They are concerned with rules and ideals that govern how the mind should work. Descriptive theories, on the other hand, investigate how the mind actually works. This includes issues like under which circumstances the ideal rules are followed as well as studying the underlying psychological processes responsible for rational thought. Descriptive theories are often investigated in empirical psychology while philosophy tends to focus more on normative issues. This division also reflects how different these two types are investigated.
Descriptive and normative theorists usually employ different methodologies in their research. Descriptive issues are studied by empirical research. This can take the form of studies that present their participants with a cognitive problem. It is then observed how the participants solve the problem, possibly together with explanations of why they arrived at a specific solution. Normative issues, on the other hand, are usually investigated in similar ways to how the formal sciences conduct their inquiry. In the field of theoretical rationality, for example, it is accepted that deductive reasoning in the form of modus ponens leads to rational beliefs. This claim can be investigated using methods like rational intuition or careful deliberation toward a reflective equilibrium. These forms of investigation can arrive at conclusions about what forms of thought are rational and irrational without depending on empirical evidence.
An important question in this field concerns the relation between descriptive and normative approaches to rationality. One difficulty in this regard is that there is in many cases a huge gap between what the norms of ideal rationality prescribe and how people actually reason. Examples of normative systems of rationality are classical logic, probability theory, and decision theory. Actual reasoners often diverge from these standards because of cognitive biases, heuristics, or other mental limitations.
Traditionally, it was often assumed that actual human reasoning should follow the rules described in normative theories. On this view, any discrepancy is a form of irrationality that should be avoided. However, this usually ignores the human limitations of the mind. Given these limitations, various discrepancies may be necessary (and in this sense rational) to get the most useful results. For example, the ideal rational norms of decision theory demand that the agent should always choose the option with the highest expected value. However, calculating the expected value of each option may take a very long time in complex situations and may not be worth the trouble. This is reflected in the fact that actual reasoners often settle for an option that is good enough without making certain that it is really the best option available. A further difficulty in this regard is Hume's law, which states that one cannot deduce what ought to be based on what is. So just because a certain heuristic or cognitive bias is present in a specific case, it should not be inferred that it should be present. One approach to these problems is to hold that descriptive and normative theories talk about different types of rationality. This way, there is no contradiction between the two and both can be correct in their own field. Similar problems are discussed in so-called naturalized epistemology.
Conservatism and foundationalism
Rationality is usually understood as conservative in the sense that rational agents do not start from zero but already possess many beliefs and intentions. Reasoning takes place on the background of these pre-existing mental states and tries to improve them. This way, the original beliefs and intentions are privileged: one keeps them unless a reason to doubt them is encountered. Some forms of epistemic foundationalism reject this approach. According to them, the whole system of beliefs is to be justified by self-evident beliefs. Examples of such self-evident beliefs may include immediate experiences as well as simple logical and mathematical axioms.
An important difference between conservatism and foundationalism concerns their differing conceptions of the burden of proof. According to conservatism, the burden of proof is always in favor of already established belief: in the absence of new evidence, it is rational to keep the mental states one already has. According to foundationalism, the burden of proof is always in favor of suspending mental states. For example, the agent reflects on their pre-existing belief that the Taj Mahal is in Agra but is unable to access any reason for or against this belief. In this case, conservatives think it is rational to keep this belief while foundationalists reject it as irrational due to the lack of reasons. In this regard, conservatism is much closer to the ordinary conception of rationality. One problem for foundationalism is that very few beliefs, if any, would remain if this approach were carried out meticulously. Another is that enormous mental resources would be required to constantly keep track of all the justificatory relations connecting non-fundamental beliefs to fundamental ones.
Types
Rationality is discussed in a great variety of fields, often in very different terms. While some theorists try to provide a unifying conception expressing the features shared by all forms of rationality, the more common approach is to articulate the different aspects of the individual forms of rationality. The most common distinction is between theoretical and practical rationality. Other classifications include categories for ideal and bounded rationality as well as for individual and social rationality.
Theoretical and practical
The most influential distinction contrasts theoretical or epistemic rationality with practical rationality. Its theoretical side concerns the rationality of beliefs: whether it is rational to hold a given belief and how certain one should be about it. Practical rationality, on the other hand, is about the rationality of actions, intentions, and decisions. This corresponds to the distinction between theoretical reasoning and practical reasoning: theoretical reasoning tries to assess whether the agent should change their beliefs while practical reasoning tries to assess whether the agent should change their plans and intentions.
Theoretical
Theoretical rationality concerns the rationality of cognitive mental states, in particular, of beliefs. It is common to distinguish between two factors. The first factor concerns the requirement that good reasons are necessary for a belief to be rational. This is usually understood in terms of evidence provided by the so-called sources of knowledge, i.e. faculties like perception, introspection, and memory. In this regard, it is often argued that to be rational, the believer has to respond to the impressions or reasons presented by these sources. For example, the visual impression of the sunlight on a tree makes it rational to believe that the sun is shining. It may also be relevant whether the formed belief is involuntary and implicit.
The second factor pertains to the norms and procedures of rationality that govern how agents should form beliefs based on this evidence. These norms include the rules of inference discussed in regular logic as well as other norms of coherence between mental states. In the case of rules of inference, the premises of a valid argument offer support to the conclusion and therefore make belief in the conclusion rational. The support offered by the premises can be either deductive or non-deductive. In both cases, believing in the premises of an argument makes it rational to also believe in its conclusion. The difference between the two is given by how the premises support the conclusion. For deductive reasoning, the premises offer the strongest possible support: it is impossible for the conclusion to be false if the premises are true. The premises of non-deductive arguments also offer support for their conclusion. But this support is not absolute: the truth of the premises does not guarantee the truth of the conclusion. Instead, the premises make it more likely that the conclusion is true. In this case, it is usually demanded that the non-deductive support be sufficiently strong if the belief in the conclusion is to be rational.
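For simple propositional arguments, deductive validity in this sense can be checked mechanically by searching for a counterexample, i.e. a truth assignment that makes all premises true and the conclusion false. The following sketch (illustrative only, not a general theorem prover) verifies modus ponens:

    from itertools import product

    def valid(premises, conclusion, n_vars=2):
        """Deductively valid: no truth assignment makes every premise
        true while the conclusion is false."""
        for values in product([True, False], repeat=n_vars):
            if all(p(*values) for p in premises) and not conclusion(*values):
                return False
        return True

    # Modus ponens: from "p" and "if p then q", infer "q".
    print(valid([lambda p, q: p, lambda p, q: (not p) or q],
                lambda p, q: q))  # True: the premises guarantee the conclusion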
An important form of theoretical irrationality is motivationally biased belief, sometimes referred to as wishful thinking. In this case, beliefs are formed based on one's desires or what is pleasing to imagine without proper evidential support. Faulty reasoning in the form of formal and informal fallacies is another cause of theoretical irrationality.
Practical
All forms of practical rationality are concerned with how we act. It pertains both to actions directly as well as to mental states and events preceding actions, like intentions and decisions. There are various aspects of practical rationality, such as how to pick a goal to follow and how to choose the means for reaching this goal. Other issues include the coherence between different intentions as well as between beliefs and intentions.
Some theorists define the rationality of actions in terms of beliefs and desires. On this view, an action to bring about a certain goal is rational if the agent has the desire to bring about this goal and the belief that their action will realize it. A stronger version of this view requires that the responsible beliefs and desires are rational themselves. A very influential conception of the rationality of decisions comes from decision theory. In decisions, the agent is presented with a set of possible courses of action and has to choose one among them. Decision theory holds that the agent should choose the alternative that has the highest expected value. Practical rationality includes the field of actions but not of behavior in general. The difference between the two is that actions are intentional behavior, i.e. they are performed for a purpose and guided by it. In this regard, intentional behavior like driving a car is either rational or irrational while non-intentional behavior like sneezing is outside the domain of rationality.
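A minimal sketch of this decision rule (the probabilities and utilities below are invented for illustration; decision theory itself does not supply them):

    def expected_value(lotteries, option):
        """Sum of probability times utility over an option's outcomes."""
        return sum(p * u for p, u in lotteries[option])

    # Deciding whether to carry an umbrella given a 30% chance of rain.
    lotteries = {
        "take umbrella":  [(0.3, 5.0), (0.7, 3.0)],    # dry in rain / mild bother
        "leave umbrella": [(0.3, -10.0), (0.7, 6.0)],  # soaked / unencumbered
    }
    best = max(lotteries, key=lambda o: expected_value(lotteries, o))
    print(best)  # "take umbrella": expected value 3.6 versus 1.2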
For various other practical phenomena, there is no clear consensus on whether they belong to this domain or not. For example, concerning the rationality of desires, two important theories are proceduralism and substantivism. According to proceduralism, there is an important distinction between instrumental and noninstrumental desires. A desire is instrumental if its fulfillment serves as a means to the fulfillment of another desire. For example, Jack is sick and wants to take medicine to get healthy again. In this case, the desire to take the medicine is instrumental since it only serves as a means to Jack's noninstrumental desire to get healthy. Both proceduralism and substantivism usually agree that a person can be irrational if they lack an instrumental desire despite having the corresponding noninstrumental desire and being aware that it acts as a means. Proceduralists hold that this is the only way a desire can be irrational. Substantivists, on the other hand, allow that noninstrumental desires may also be irrational. In this regard, a substantivist could claim that it would be irrational for Jack to lack his noninstrumental desire to be healthy. Similar debates focus on the rationality of emotions.
Relation between the two
Theoretical and practical rationality are often discussed separately and there are many differences between them. In some cases, they even conflict with each other. However, there are also various ways in which they overlap and depend on each other.
It is sometimes claimed that theoretical rationality aims at truth while practical rationality aims at goodness. According to John Searle, the difference can be expressed in terms of "direction of fit". On this view, theoretical rationality is about how the mind corresponds to the world by representing it. Practical rationality, on the other hand, is about how the world corresponds to the ideal set up by the mind and how it should be changed. Another difference is that arbitrary choices are sometimes needed for practical rationality. For example, there may be two equally good routes available to reach a goal. On the practical level, one has to choose one of them if one wants to reach the goal. It would even be practically irrational to resist this arbitrary choice, as exemplified by Buridan's ass. But on the theoretical level, one does not have to form a belief about which route was taken upon hearing that someone reached the goal. In this case, the arbitrary choice for one belief rather than the other would be theoretically irrational. Instead, the agent should suspend their belief either way if they lack sufficient reasons. Another difference is that practical rationality is guided by specific goals and desires, in contrast to theoretical rationality. So it is practically rational to take medicine if one has the desire to cure a sickness. But it is theoretically irrational to adopt the belief that one is healthy just because one desires this. This is a form of wishful thinking.
In some cases, the demands of practical and theoretical rationality conflict with each other. For example, the practical reason of loyalty to one's child may demand the belief that they are innocent while the evidence linking them to the crime may demand a belief in their guilt on the theoretical level.
But the two domains also overlap in certain ways. For example, the norm of rationality known as enkrasia links beliefs and intentions. It states that "[r]ationality requires of you that you intend to F if you believe your reasons require you to F". Failing to fulfill this requirement results in cases of irrationality known as akrasia or weakness of the will. Another form of overlap is that the study of the rules governing practical rationality is a theoretical matter. And practical considerations may determine whether to pursue theoretical rationality on a certain issue as well as how much time and resources to invest in the inquiry. It is often held that practical rationality presupposes theoretical rationality. This is based on the idea that to decide what should be done, one needs to know what is the case. But one can assess what is the case independently of knowing what should be done. So in this regard, one can study theoretical rationality as a distinct discipline independent of practical rationality but not the other way round. However, this independence is rejected by some forms of doxastic voluntarism. They hold that theoretical rationality can be understood as one type of practical rationality. This is based on the controversial claim that we can decide what to believe. It can take the form of epistemic decision theory, which states that people try to fulfill epistemic aims when deciding what to believe. A similar idea is defended by Jesús Mosterín. He argues that the proper object of rationality is not belief but acceptance. He understands acceptance as a voluntary and context-dependent decision to affirm a proposition.
Ideal and bounded
Various theories of rationality assume some form of ideal rationality, for example, by demanding that rational agents obey all the laws and implications of logic. This can include the requirement that if the agent believes a proposition, they should also believe in everything that logically follows from this proposition. However, many theorists reject this form of logical omniscience as a requirement for rationality. They argue that, since the human mind is limited, rationality has to be defined accordingly to account for how actual finite humans possess some form of resource-limited rationality.
According to the position of bounded rationality, theories of rationality should take into account cognitive limitations, such as incomplete knowledge, imperfect memory, and limited capacities of computation and representation. An important research question in this field is about how cognitive agents use heuristics rather than brute calculations to solve problems and make decisions. According to the satisficing heuristic, for example, agents usually stop their search for the best option once an option is found that meets their desired achievement level. In this regard, people often do not continue to search for the best possible option, even though this is what theories of ideal rationality commonly demand. Using heuristics can be highly rational as a way to adapt to the limitations of the human mind, especially in complex cases where these limitations make brute calculations impossible or very time- and resource-intensive.
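The contrast between maximizing and satisficing can be made concrete in a short sketch. The following Python snippet is illustrative only: the option scores and the aspiration level of 80 are assumptions chosen for the example, not values from the literature.

```python
import random

def maximize(options, utility):
    """Ideal rationality: evaluate every option and return the best one."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Bounded rationality: stop at the first option that meets the
    aspiration level; fall back to the last option examined."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return options[-1]

# Illustrative example: choosing among apartments scored from 0 to 100.
random.seed(0)
apartments = [random.randint(0, 100) for _ in range(1000)]
score = lambda apartment: apartment  # here the utility is just the score

best = maximize(apartments, score)                         # inspects all 1000
good_enough = satisfice(apartments, score, aspiration=80)  # stops early

print(best, good_enough)
```

On large or costly-to-evaluate sets of options, the satisficer typically inspects far fewer of them, which is precisely the adaptive point of the heuristic.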
Individual and social
Most discussions and research in the academic literature focus on individual rationality. This concerns the rationality of individual persons, for example, whether their beliefs and actions are rational. But the question of rationality can also be applied to groups as a whole on the social level. This form of social or collective rationality concerns both theoretical and practical issues like group beliefs and group decisions. And just like in the individual case, it is possible to study these phenomena as well as the processes and structures that are responsible for them. On the social level, there are various forms of cooperation to reach a shared goal. In the theoretical cases, a group of jurors may first discuss and then vote to determine whether the defendant is guilty. Or in the practical case, politicians may cooperate to implement new regulations to combat climate change. These forms of cooperation can be judged on their social rationality depending on how they are implemented and on the quality of the results they bear. Some theorists try to reduce social rationality to individual rationality by holding that the group processes are rational to the extent that the individuals participating in them are rational. But such a reduction is frequently rejected.
Various studies indicate that group rationality often outperforms individual rationality. For example, groups of people working together on the Wason selection task usually perform better than individuals by themselves. This form of group superiority is sometimes termed "wisdom of crowds" and may be explained based on the claim that competent individuals have a stronger impact on the group decision than others. However, this is not always the case and sometimes groups perform worse due to conformity or unwillingness to bring up controversial issues.
Others
Many other classifications are discussed in the academic literature. One important distinction is between approaches to rationality based on the output or on the process. Process-oriented theories of rationality are common in cognitive psychology and study how cognitive systems process inputs to generate outputs. Output-oriented approaches are more common in philosophy and investigate the rationality of the resulting states. Another distinction is between relative and categorical judgments of rationality. In the relative case, rationality is judged based on limited information or evidence while categorical judgments take all the evidence into account and are thus judgments all things considered. For example, believing that one's investments will multiply can be rational in a relative sense because it is based on one's astrological horoscope. But this belief is irrational in a categorical sense if the belief in astrology is itself irrational.
Importance
Rationality is central to solving many problems, both on the local and the global scale. This is often based on the idea that rationality is necessary to act efficiently and to reach all kinds of goals. This includes goals from diverse fields, such as ethical goals, humanist goals, scientific goals, and even religious goals. The study of rationality is very old and has occupied many of the greatest minds since ancient Greece. This interest is often motivated by the desire to discover the potentials and limitations of our minds. Various theorists even see rationality as the essence of being human, often in an attempt to distinguish humans from other animals. However, this strong affirmation has been subjected to many criticisms, for example, that humans are not rational all the time and that non-human animals also show diverse forms of intelligence.
The topic of rationality is relevant to a variety of disciplines. It plays a central role in philosophy, psychology, Bayesianism, decision theory, and game theory. But it is also covered in other disciplines, such as artificial intelligence, behavioral economics, microeconomics, and neuroscience. Some forms of research restrict themselves to one specific domain while others investigate the topic in an interdisciplinary manner by drawing insights from different fields.
Paradoxes of rationality
The term paradox of rationality has a variety of meanings. It is often used for puzzles or unsolved problems of rationality. Some are just situations where it is not clear what the rational person should do. Others involve apparent faults within rationality itself, for example, where rationality seems to recommend a suboptimal course of action. A special case are so-called rational dilemmas, in which it is impossible to be rational since two norms of rationality conflict with each other. Examples of paradoxes of rationality include Pascal's Wager, the Prisoner's dilemma, Buridan's ass, and the St. Petersburg paradox.
History
Max Weber
The German scholar Max Weber proposed an interpretation of social action that distinguished between four different idealized types of rationality.
The first, which he called Zweckrational or purposive/instrumental rationality, is related to expectations about the behavior of other human beings or objects in the environment. These expectations serve as means for a particular actor to attain ends, ends which Weber noted were "rationally pursued and calculated." The second type Weber called Wertrational or value/belief-oriented. Here the action is undertaken for what one might call reasons intrinsic to the actor: some ethical, aesthetic, religious or other motives, independent of whether it will lead to success. The third type was affectual, determined by an actor's specific affect, feeling, or emotion—a kind of rationality that Weber himself said was on the borderline of what he considered "meaningfully oriented." The fourth was traditional or conventional, determined by ingrained habituation. Weber emphasized that it was very unusual to find only one of these orientations: combinations were the norm. His usage also makes clear that he considered the first two as more significant than the others, and it is arguable that the third and fourth are subtypes of the first two.
The advantage of Weber's interpretation of rationality is that it avoids a value-laden assessment, say, that certain kinds of beliefs are irrational. Instead, Weber suggests that a ground or motive can be given—religious or affective reasons, for example—that may meet the criterion of explanation or justification even if it is not an explanation that fits the Zweckrational orientation of means and ends. The opposite is therefore also true: some means-ends explanations will not satisfy those whose grounds for action are Wertrational.
Weber's constructions of rationality have been critiqued both from a Habermasian (1984) perspective (as devoid of social context and under-theorised in terms of social power) and also from a feminist perspective (Eagleton, 2003) whereby Weber's rationality constructs are viewed as imbued with masculine values and oriented toward the maintenance of male power. An alternative position on rationality (which includes both bounded rationality, as well as the affective and value-based arguments of Weber) can be found in the critique of Etzioni (1988), who reframes thought on decision-making to argue for a reversal of the position put forward by Weber. Etzioni illustrates how purposive/instrumental reasoning is subordinated by normative considerations (ideas on how people 'ought' to behave) and affective considerations (as a support system for the development of human relationships).
Richard Brandt
Richard Brandt proposed a "reforming definition" of rationality, arguing that someone is rational if their notions survive a form of cognitive psychotherapy.
Robert Audi
Robert Audi developed a comprehensive account of rationality that covers both the theoretical and the practical side of rationality. This account centers on the notion of a ground: a mental state is rational if it is "well-grounded" in a source of justification. Irrational mental states, on the other hand, lack a sufficient ground. For example, the perceptual experience of a tree when looking outside the window can ground the rationality of the belief that there is a tree outside.
Audi is committed to a form of foundationalism: the idea that justified beliefs, or in his case, rational states in general, can be divided into two groups: the foundation and the superstructure. The mental states in the superstructure receive their justification from other rational mental states while the foundational mental states receive their justification from a more basic source. For example, the above-mentioned belief that there is a tree outside is foundational since it is based on a basic source: perception. Knowing that trees grow in soil, we may deduce that there is soil outside. This belief is equally rational, being supported by an adequate ground, but it belongs to the superstructure since its rationality is grounded in the rationality of another belief. Desires, like beliefs, form a hierarchy: intrinsic desires are at the foundation while instrumental desires belong to the superstructure. In order to link the instrumental desire to the intrinsic desire an extra element is needed: a belief that the fulfillment of the instrumental desire is a means to the fulfillment of the intrinsic desire.
Audi asserts that all the basic sources providing justification for the foundational mental states come from experience. As for beliefs, there are four types of experience that act as sources: perception, memory, introspection, and rational intuition. The main basic source of the rationality of desires, on the other hand, comes in the form of hedonic experience: the experience of pleasure and pain. So, for example, a desire to eat ice-cream is rational if it is based on experiences in which the agent enjoyed the taste of ice-cream, and irrational if it lacks such support. Because of its dependence on experience, rationality can be defined as a kind of responsiveness to experience.
Actions, in contrast to beliefs and desires, do not have a source of justification of their own. Their rationality is grounded in the rationality of other states instead: in the rationality of beliefs and desires. Desires motivate actions. Beliefs are needed here, as in the case of instrumental desires, to bridge a gap and link two elements. Audi distinguishes the focal rationality of individual mental states from the global rationality of persons. Global rationality has a derivative status: it depends on the focal rationality. Or more precisely: "Global rationality is reached when a person has a sufficiently integrated system of sufficiently well-grounded propositional attitudes, emotions, and actions". Rationality is relative in the sense that it depends on the experience of the person in question. Since different people undergo different experiences, what is rational to believe for one person may be irrational to believe for another person. That a belief is rational does not entail that it is true.
In various fields
Ethics and morality
The problem of rationality is relevant to various issues in ethics and morality. Many debates center around the question of whether rationality implies morality or is possible without it. Some examples based on common sense suggest that the two can come apart. For example, some immoral psychopaths are highly intelligent in the pursuit of their schemes and may, therefore, be seen as rational. However, there are also considerations suggesting that the two are closely related to each other. For example, according to the principle of universality, "one's reasons for acting are acceptable only if it is acceptable that everyone acts on such reasons". A similar formulation is given in Immanuel Kant's categorical imperative: "act only according to that maxim whereby you can, at the same time, will that it should become a universal law". The principle of universality has been suggested as a basic principle both for morality and for rationality. This is closely related to the question of whether agents have a duty to be rational. Another issue concerns the value of rationality. In this regard, it is often held that human lives are more important than animal lives because humans are rational.
Psychology
Many psychological theories have been proposed to describe how reasoning happens and what underlying psychological processes are responsible. One of their goals is to explain how the different types of irrationality happen and why some types are more prevalent than others. They include mental logic theories, mental model theories, and dual process theories. An important psychological area of study focuses on cognitive biases. Cognitive biases are systematic tendencies to engage in erroneous or irrational forms of thinking, judging, and acting. Examples include the confirmation bias, the self-serving bias, the hindsight bias, and the Dunning–Kruger effect. Some empirical findings suggest that metacognition is an important aspect of rationality. The idea behind this claim is that reasoning is carried out more efficiently and reliably if the responsible thought processes are properly controlled and monitored.
The Wason selection task is an influential test for studying rationality and reasoning abilities. In it, four cards are placed before the participants. Each has a number on one side and a letter on the opposite side. In one case, the visible sides of the four cards are A, D, 4, and 7. The participant is then asked which cards need to be turned around in order to verify the conditional claim "if there is a vowel on one side of the card, then there is an even number on the other side of the card". The correct answer is A and 7. But this answer is given by only about 10% of participants. Many choose card 4 instead even though there is no requirement on what letters may appear on its opposite side. An important insight from using these and similar tests is that the rational ability of the participants is usually significantly better for concrete and realistic cases than for abstract or implausible cases. Various contemporary studies in this field use Bayesian probability theory to study subjective degrees of belief, for example, how the believer's certainty in the premises is carried over to the conclusion through reasoning.
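The logic of the task can be sketched in a few lines of code. The sketch below is illustrative; the helper name must_turn is hypothetical and simply encodes which visible sides could conceal a counterexample to the conditional rule.

```python
def must_turn(visible):
    """A card must be turned only if its hidden side could falsify the rule
    'if there is a vowel on one side, there is an even number on the other'."""
    if visible.isalpha():
        # A vowel could hide an odd number (a counterexample); a consonant cannot.
        return visible.upper() in set("AEIOU")
    # An odd number could hide a vowel (a counterexample); an even number cannot.
    return int(visible) % 2 == 1

print([card for card in ["A", "D", "4", "7"] if must_turn(card)])  # ['A', '7']
```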
In the psychology of reasoning, psychologists and cognitive scientists have defended different positions on human rationality. One prominent view, due to Philip Johnson-Laird and Ruth M. J. Byrne among others, is that humans are rational in principle but err in practice, that is, humans have the competence to be rational but their performance is limited by various factors. However, it has been argued that many standard tests of reasoning, such as those on the conjunction fallacy, on the Wason selection task, or the base rate fallacy suffer from methodological and conceptual problems. This has led to disputes in psychology over whether researchers should (only) use standard rules of logic, probability theory and statistics, or rational choice theory as norms of good reasoning. Opponents of this view, such as Gerd Gigerenzer, favor a conception of bounded rationality, especially for tasks under high uncertainty. The concept of rationality continues to be debated by psychologists, economists and cognitive scientists.
The psychologist Jean Piaget gave an influential account of how the stages in human development from childhood to adulthood can be understood in terms of the increase of rational and logical abilities. He identifies four stages associated with rough age groups: the sensorimotor stage below the age of two, the preoperational stage until the age of seven, the concrete operational stage until the age of eleven, and the formal operational stage afterward. Rational or logical reasoning only takes place in the last stage and is related to abstract thinking, concept formation, reasoning, planning, and problem-solving.
Emotions
According to A. C. Grayling, rationality "must be independent of emotions, personal feelings or any kind of instincts". Certain findings in cognitive science and neuroscience show that no human has ever satisfied this criterion, except perhaps a person with no affective feelings, for example, an individual with a massively damaged amygdala or severe psychopathy. Thus, such an idealized form of rationality is best exemplified by computers, and not people. However, scholars may productively appeal to the idealization as a point of reference. In his book, The Edge of Reason: A Rational Skeptic in an Irrational World, British philosopher Julian Baggini sets out to debunk myths about reason (e.g., that it is "purely objective and requires no subjective judgment").
Cognitive and behavioral sciences
Cognitive and behavioral sciences try to describe, explain, and predict how people think and act. Their models are often based on the assumption that people are rational. For example, classical economics is based on the assumption that people are rational agents that maximize expected utility. However, people often depart from the ideal standards of rationality in various ways. For example, they may only look for confirming evidence and ignore disconfirming evidence. Other factors studied in this regard are the limitations of human intellectual capacities. Many discrepancies from rationality are caused by limited time, memory, or attention. Often heuristics and rules of thumb are used to mitigate these limitations, but they may lead to new forms of irrationality.
Logic
Theoretical rationality is closely related to logic, but not identical to it. Logic is often defined as the study of correct arguments. This concerns the relation between the propositions used in the argument: whether its premises offer support to its conclusion. Theoretical rationality, on the other hand, is about what to believe or how to change one's beliefs. The laws of logic are relevant to rationality since the agent should change their beliefs if they violate these laws. But logic is not directly about what to believe. Additionally, there are also other factors and norms besides logic that determine whether it is rational to hold or change a belief. The study of rationality in logic is more concerned with epistemic rationality, that is, attaining beliefs in a rational manner, than instrumental rationality.
Decision theory
An influential account of practical rationality is given by decision theory. Decisions are situations where a number of possible courses of action are available to the agent, who has to choose one of them. Decision theory investigates the rules governing which action should be chosen. It assumes that each action may lead to a variety of outcomes. Each outcome is associated with a conditional probability and a utility. The expected gain of an outcome can be calculated by multiplying its conditional probability with its utility. The expected utility of an act is equivalent to the sum of all expected gains of the outcomes associated with it. From these basic ingredients, it is possible to define the rationality of decisions: a decision is rational if it selects the act with the highest expected utility. While decision theory gives a very precise formal treatment of this issue, it leaves open the empirical problem of how to assign utilities and probabilities. So decision theory can still lead to bad empirical decisions if it is based on poor assignments.
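As a rough sketch of this calculation, the snippet below represents each act as a list of (probability, utility) pairs; the acts and numbers in the umbrella example are invented purely for illustration.

```python
def expected_utility(act):
    """Sum of expected gains: probability times utility over the act's outcomes."""
    return sum(p * u for p, u in act)

# Each act maps to its outcomes as (conditional probability, utility) pairs.
acts = {
    "take umbrella":  [(0.3, 5), (0.7, 4)],    # mild hassle whatever the weather
    "leave umbrella": [(0.3, -10), (0.7, 8)],  # bad if it rains, pleasant if not
}

# On this account, the rational decision selects the act with the highest
# expected utility: here 4.3 for taking the umbrella versus 2.6 for not.
rational_choice = max(acts, key=lambda a: expected_utility(acts[a]))
print(rational_choice)  # take umbrella
```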
According to decision theorists, rationality is primarily a matter of internal consistency. This means that a person's mental states like beliefs and preferences are consistent with each other or do not go against each other. One consequence of this position is that people with obviously false beliefs or perverse preferences may still count as rational if these mental states are consistent with their other mental states. Utility is often understood in terms of self-interest or personal preferences. However, this is not a necessary aspect of decision theory and it can also be interpreted in terms of goodness or value in general.
Game theory
Game theory is closely related to decision theory and the problem of rational choice. Rational choice is based on the idea that rational agents perform a cost-benefit analysis of all available options and choose the option that is most beneficial from their point of view. In the case of game theory, several agents are involved. This further complicates the situation since whether a given option is the best choice for one agent may depend on choices made by other agents. Game theory can be used to analyze various situations, like playing chess, firms competing for business, or animals fighting over prey. Rationality is a core assumption of game theory: it is assumed that each player chooses rationally based on what is most beneficial from their point of view. This way, the agent may be able to anticipate how others choose and what their best choice is relative to the behavior of the others. This often results in a Nash equilibrium, which constitutes a set of strategies, one for each player, where no player can improve their outcome by unilaterally changing their strategy.
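A minimal sketch of this idea for two-player games in pure strategies is given below; the function name is hypothetical and the payoffs encode a standard prisoner's dilemma. Mutual defection is the only profile from which neither player can profitably deviate alone, even though mutual cooperation would leave both better off.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return strategy profiles where no player gains by unilaterally deviating."""
    strategies = sorted({s for profile in payoffs for s in profile})
    equilibria = []
    for profile in product(strategies, repeat=2):
        p1, p2 = payoffs[profile]
        if all(payoffs[(d, profile[1])][0] <= p1 for d in strategies) and \
           all(payoffs[(profile[0], d)][1] <= p2 for d in strategies):
            equilibria.append(profile)
    return equilibria

# Payoffs as (row player, column player), e.g. negated years in prison.
prisoners_dilemma = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

print(pure_nash_equilibria(prisoners_dilemma))  # [('defect', 'defect')]
```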
Bayesianism
A popular contemporary approach to rationality is based on Bayesian epistemology. Bayesian epistemology sees belief as a continuous phenomenon that comes in degrees. For example, Daniel is relatively sure that the Boston Celtics will win their next match and absolutely certain that two plus two equals four. In this case, the degree of the first belief is weaker than the degree of the second belief. These degrees are usually referred to as credences and represented by numbers between 0 and 1. 0 corresponds to full disbelief, 1 corresponds to full belief and 0.5 corresponds to suspension of belief. Bayesians understand this in terms of probability: the higher the credence, the higher the subjective probability that the believed proposition is true. As probabilities, they are subject to the laws of probability theory. These laws act as norms of rationality: beliefs are rational if they comply with them and irrational if they violate them. For example, it would be irrational to have a credence of 0.9 that it will rain tomorrow together with another credence of 0.9 that it will not rain tomorrow. This account of rationality can also be extended to the practical domain by requiring that agents maximize their subjective expected utility. This way, Bayesianism can provide a unified account of both theoretical and practical rationality.
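The rain example can be checked mechanically. The sketch below is illustrative and tests only one law of probability, namely that an agent's credences in a proposition and in its negation must sum to 1.

```python
def coherent(credence_p, credence_not_p):
    """Check the additivity law for a proposition and its negation:
    rational credences in p and not-p must sum to exactly 1."""
    return abs(credence_p + credence_not_p - 1.0) < 1e-9

print(coherent(0.9, 0.9))  # False: irrational, since the credences sum to 1.8
print(coherent(0.9, 0.1))  # True: compatible with the laws of probability
```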
Economics
Rationality plays a key role in economics and there are several strands to this. Firstly, there is the concept of instrumentality—basically the idea that people and organisations are instrumentally rational—that is, adopt the best actions to achieve their goals. Secondly, there is an axiomatic concept that rationality is a matter of being logically consistent within one's preferences and beliefs. Thirdly, people have focused on the accuracy of beliefs and full use of information—in this view, a person who is not rational has beliefs that do not fully use the information they have.
Debates within economic sociology also arise as to whether or not people or organizations are "really" rational, as well as whether it makes sense to model them as such in formal models. Some have argued that a kind of bounded rationality makes more sense for such models.
Others think that any kind of rationality along the lines of rational choice theory is a useless concept for understanding human behavior; the term homo economicus (economic man: the imaginary man being assumed in economic models who is logically consistent but amoral) was coined largely in honor of this view. Behavioral economics aims to account for economic actors as they actually are, allowing for psychological biases, rather than assuming idealized instrumental rationality.
Artificial intelligence
The field of artificial intelligence is concerned, among other things, with how problems of rationality can be implemented and solved by computers. Within artificial intelligence, a rational agent is typically one that maximizes its expected utility, given its current knowledge. Utility is the usefulness of the consequences of its actions. The utility function is arbitrarily defined by the designer, but should be a function of "performance", which is the directly measurable consequences, such as winning or losing money. In order to make a safe agent that plays defensively, a nonlinear function of performance is often desired, so that the reward for winning is lower than the punishment for losing. An agent might be rational within its own problem area, but finding the rational decision for arbitrarily complex problems is not practically possible. The rationality of human thought is a key problem in the psychology of reasoning.
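One common way to obtain such a defensive agent is to make utility a concave function of the measurable performance. The sketch below uses an exponential (risk-averse) utility with an assumed risk-aversion parameter; it is one possible choice, not a standard fixed by the field.

```python
import math

def defensive_utility(performance, risk_aversion=0.1):
    """Concave utility of performance (e.g. money won or lost): the utility
    gained by winning an amount is smaller than the utility lost by
    losing the same amount."""
    return 1 - math.exp(-risk_aversion * performance)

def expected_utility(gamble):
    """A gamble is a list of (probability, performance outcome) pairs."""
    return sum(p * defensive_utility(x) for p, x in gamble)

safe = [(1.0, 0)]                # decline the bet
risky = [(0.5, 10), (0.5, -10)]  # fair coin flip for 10 units

# Equal expected money, but the defensive agent prefers the safe option:
print(expected_utility(safe))   # 0.0
print(expected_utility(risky))  # about -0.54
```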
International relations
There is an ongoing debate over the merits of using "rationality" in the study of international relations (IR). Some scholars hold it indispensable. Others are more critical. Still, the pervasive and persistent usage of "rationality" in political science and IR is beyond dispute. "Rationality" remains ubiquitous in this field. Abulof finds that some 40% of all scholarly references to "foreign policy" allude to "rationality"—and this ratio goes up to more than half of pertinent academic publications in the 2000s. He further argues that when it comes to concrete security and foreign policies, IR employment of rationality borders on "malpractice": rationality-based descriptions are largely either false or unfalsifiable; many observers fail to explicate the meaning of "rationality" they employ; and the concept is frequently used politically to distinguish between "us and them."
Criticism
The concept of rationality has been subject to criticism by various philosophers who question its universality and capacity to provide a comprehensive understanding of reality and human existence.
Friedrich Nietzsche, in his work "Beyond Good and Evil" (1886), criticized the overemphasis on rationality and argued that it neglects the irrational and instinctual aspects of human nature. Nietzsche advocated for a reevaluation of values based on individual perspectives and the will to power, stating, "There are no facts, only interpretations."
Martin Heidegger, in "Being and Time" (1927), offered a critique of the instrumental and calculative view of reason, emphasizing the primacy of our everyday practical engagement with the world. Heidegger challenged the notion that rationality alone is the sole arbiter of truth and understanding.
Max Horkheimer and Theodor Adorno, in their seminal work "Dialectic of Enlightenment" (1947), questioned the Enlightenment's rationality. They argued that the dominance of instrumental reason in modern society leads to the domination of nature and the dehumanization of individuals. Horkheimer and Adorno highlighted how rationality narrows the scope of human experience and hinders critical thinking.
Michel Foucault, in "Discipline and Punish" (1975) and "The Birth of Biopolitics" (1978), critiqued the notion of rationality as a neutral and objective force. Foucault emphasized the intertwining of rationality with power structures and its role in social control. He famously stated, "Power is not an institution, and not a structure; neither is it a certain strength we are endowed with; it is the name that one attributes to a complex strategical situation in a particular society."
These philosophers' critiques of rationality shed light on its limitations, assumptions, and potential dangers. Their ideas challenge the universal application of rationality as the sole framework for understanding the complexities of human existence and the world.
See also
Bayesian epistemology
Cognitive bias
Coherence (linguistics)
Counterintuitive
Dysrationalia
Flipism
Homo economicus
Imputation (game theory) (individual rationality)
Instinct
Intelligence
Irrationality
Law of thought
LessWrong
List of cognitive biases
Principle of rationality
Rational emotive behavior therapy
Rationalism
Rationalization (making excuses)
Satisficing
Superrationality
Von Neumann–Morgenstern utility theorem
References
Further reading
Reason and Rationality, by Richard Samuels, Stephen Stich, Luc Faucher on the broad field of reason and rationality from descriptive, normative, and evaluative points of view
Stanford Encyclopedia of Philosophy entry on Historicist Theories of Rationality
Legal Reasoning After Post-Modern Critiques of Reason, by Peter Suber
Lucy Suchman (2007). Human-machine Reconfigurations: Plans and Situated Action. Cambridge University Press.
Cristina Bicchieri (1993). Rationality and Coordination, New York: Cambridge University Press
Cristina Bicchieri (2007). "Rationality and Indeterminacy", in D. Ross and H. Kinkaid (eds.) The Handbook of Philosophy of Economics, The Oxford Reference Library of Philosophy, Oxford University Press, vol. 6, n.2.
Anand, P (1993). Foundations of Rational Choice Under Risk, Oxford, Oxford University Press.
Habermas, J. (1984) The Theory of Communicative Action Volume 1; Reason and the Rationalization of Society, Cambridge: Polity Press.
Mosterín, Jesús (2008). Lo mejor posible: Racionalidad y acción humana. Madrid: Alianza Editorial. 318 pp.
Nozick, Robert (1993). The Nature of Rationality. Princeton: Princeton University Press.
Sciortino, Luca (2023). History of Rationalities: Ways of Thinking from Vico to Hacking and Beyond. New York: Springer-Palgrave Macmillan.
Eagleton, M. (ed) (2003) A Concise Companion to Feminist Theory, Oxford: Blackwell Publishing.
Simons, H. and Hawkins, D. (1949), "Some Conditions in Macro-Economic Stability", Econometrica.
Johnson-Laird, P.N. & Byrne, R.M.J. (1991). Deduction. Hillsdale: Erlbaum.
Concepts in epistemology
Concepts in ethics
Concepts in logic
Metaphysical properties
Concepts in the philosophy of mind
Concepts in the philosophy of science
Philosophy of law
Philosophy of life
Applied epistemology
Applied epistemology refers to the study that determines whether the systems of investigation that seek the truth lead to true beliefs about the world. A specific conceptualization cites that it attempts to reveal whether these systems contribute to epistemic aims. It is applied in practices outside of philosophy like science and mathematics.
When applied epistemology is described as a method of epistemological inquiry, this implies that the methodology is supported by an epistemological foundation.
Background
Applied epistemology forms part of the concept of "applied philosophy" as theorists begin to distinguish it from "applied ethics". It is argued that "applied philosophy" is a broader field, and that it has parts that are not subdisciplines of applied ethics. The emergence of "applied philosophy" gained traction after it was proposed that philosophy can be applied to contemporary issues.
Applied epistemology emerged out of epistemologists' routine examinations of whether truth-seeking practices like science and mathematics are capable of delivering truths. It draws from epistemological theorizing to address pressing epistemic matters of practical value. An epistemological question assumes a philosophical form once it deals with the type of knowledge or justification that is presupposed in most ordinary contexts.
In its infancy, applied epistemology was equated with social epistemology. Later theorizing established that, while there are overlapping aspects, not all social epistemology is applied and not all applied epistemology is social. A proposed analogy to distinguish applied epistemology from epistemology holds that it involves the general opposition between theory and application. In applied epistemology, theories in epistemology are applied to solve practical problems. The theoretical constructions in this environment can be modified or reorganized depending on the primary target.
Concept
Applied epistemology is informed by skepticism in philosophy, as it maintains that things should not be taken at face value – that, on reflection, what people took to be "truths" could turn out to be false. Applied epistemology has been concerned with practical questions about truth, knowledge, and other epistemic values, but these are not all social questions. It asks questions about what we know and are justified in believing.
Applied epistemology is also considered one of the three branches of epistemology, along with normative epistemology and metaepistemology. The normative branch is concerned with first-order theorizing about the formation of justified beliefs, knowledge, and truths. Metaepistemology, on the other hand, deals with higher-order epistemological questions, particularly the fundamental aspects of epistemic theorizing. According to philosopher Richard Fumerton, metaepistemology is concerned with questions about what knowledge – including justification, rationality, and evidence – is. A conceptualization cites that the applied epistemologist operates within a background of naturalist metaepistemology and reliabilist first-order epistemology.
Mark Battersby situates applied epistemology in relation to epistemology by drawing on the parallels between ethics and epistemology. Other philosophers have different conceptions of these relationships.
The main domains of applied epistemology include education and pedagogy, therapy, politics, science and technology, arts, and artificial intelligence.
Applications
As part of "applied philosophy", applied epistemology has been applied to different contemporary practices and issues. This includes its application to critical thinking or informal logic, information systems, and pressing social concerns. In the area of critical thinking, there is the underlying idea that thinking clearly and carefully about any issue requires the understanding and application of fundamental epistemological concepts. Theorists draw from philosophical theories to address real-life epistemic issues.
Communication
According to V.D. Singh, general semantics is an up-to-date, scientifically based applied epistemology because it is a general theory of evaluation: it considers the interrelations among events that transpire within ourselves and in the world around us, how we obtain information about and talk about such events, and how we behave. Scholars cite the case of fake news as an issue that can be addressed by applied epistemology. It is posited that corrupted or fake information can be unmasked through an epistemological investigation that answers three questions: 1. What is fake news?; 2. What are the mechanisms that foster the production and spread of fake news?; and 3. Which interventions can address it?
Scientific research
Applied epistemology in science has been described as the specific mental frameworks utilized by scientists in their research and activities that are considered processes of acquiring knowledge. These frameworks also serve as the ground of the sociology of science. There is also the case of the philosophy of science, which provides epistemic justifications for scientific reasoning and choice. It is considered an applied epistemology due to the characterization that it is precise, formal, and normative.
An example of the deployment of applied epistemology in scientific research is the Toolbox Project. It is an initiative that applies philosophical analysis to enhance collaborative, cross-disciplinary scientific research by improving cross-disciplinary communication. There are also scholars who consider the application of epistemologically relevant psychology to science as applied epistemology. Aside from its role in scientific and technological advancement, the concept is also applied in the areas of ethics and policy. It is argued that the instincts that guide actual scientific practice are yet to be fully recognized, scrutinized, and justified.
Informal logic
According to Mark Battersby, the method of critical thinking or informal logic can be considered a form of applied epistemology. On this method, the strength of the evidence offered for a conclusion can only be assessed if the domain within which the argument is presented is taken into account. For Battersby, this constitutes applied epistemology, since it grounds assessments of arguments in the domains within which they occur. Mark Weinstein maintained that a focus on how acceptability is transmitted from premises to conclusion shows a close theoretical parallel between informal logic and applied epistemology. It is argued that epistemological norms, rather than rules of logic, constitute the philosophical core of informal logic and that there is a close parallel between informal logic and applied ethics. Based on these factors, scholars such as Battersby and Weinstein maintain that informal logic should be classified as applied epistemology instead of logic.
Social issues
It has been suggested by scholars such as Jennifer Lackey that applied epistemology provides the tools for contemporary epistemology's evaluation of issues of social concern. It is relevant to issues affecting social groups since it helps in answering the recurring practical question, "what to believe now". Applied epistemology is also considered capable of unmasking how the features of public deliberation contribute to a group's reliability, thereby providing a basis for a reliabilist rationale for democracy. Applied epistemology has also been employed in examining feminism, particularly with respect to the evaluation of the agency of women and the relevance of giving it authorial primacy within studies of knowledge.
Information studies
According to Tim Gorichanaz, applied epistemology allows information studies to benefit from the field of philosophy, particularly since information studies rarely focuses on the evaluation of epistemic concepts. It is also suggested that applying the concept to information systems can bridge the information-processing models of cognition and constructivist perspectives on knowledge. Applied epistemology can be prominent in the "schema", or the cognitive organization of meaningful information. Specifically, this is the information structure that can be modified to represent knowledge of the interrelationships between events, objects, and situations that we encounter.
Psychology
Applied epistemology is relevant to the field of psychology and cognitive science as it focuses on the study of particular epistemic problems and processes and is characterized as part of an empirical field. It addresses how cognitive agents go about constructing epistemically adequate representations of the world. The content of psychological experts' or therapists' cognitive organization, or "knowing" processes, has also been described as applied epistemology. This system of knowing allows a better understanding of a patient's problems. It also represents part of the knowledge system from which interventions that facilitate change can be drawn.
Law
Legal epistemology is considered a form of applied epistemology for its evaluation of whether legal systems of investigation that seek the truth are structured in a manner that actually leads to justified and true beliefs. Applied epistemology allows the legal system to draw from philosophy. For instance, David Hume stated that "we entertain a suspicion concerning any matter of fact, when the witnesses contradict each other; when they are but few, or of a doubtful character; when they have an interest in what they affirm; when they deliver their testimony with hesitation, or on the contrary, with too violent asseverations." This generic view is said to allow legal procedure to evaluate testimony effectively.
Philosophy
Applied epistemology is also used in evaluating philosophical issues. This is the case when an empirical perspective is applied to test philosophical theories. While this approach does not eliminate analytic and conceptual issues, it can make them clearer. It also increases the likelihood that theorists will examine evidence that tends to be overlooked.
Cybernetics
Applied epistemology is also significant in cybernetics, which involves the control and communication of living and man-made systems. Modern cybernetics, in particular, is considered an applied epistemology because it focuses on how the construction of models of systems is itself influenced by the living and man-made systems under study, with the goal of understanding the similarities and differences between the inner workings of organic and machine processes. Once applied to cybernetics, applied epistemology also contributes to shaping responses to global and local issues, since it helps construct a type of political epistemology that can lead to a holistic and socially responsible discourse and practice.
References
Epistemology
Applied philosophy
Metaphilosophy
Concepts in epistemology
Science studies
Definitions of knowledge
Definitions of knowledge try to determine the essential features of knowledge. Closely related terms are conception of knowledge, theory of knowledge, and analysis of knowledge. Some general features of knowledge are widely accepted among philosophers, for example, that it constitutes a cognitive success or an epistemic contact with reality and that propositional knowledge involves true belief. Most definitions of knowledge in analytic philosophy focus on propositional knowledge or knowledge-that, as in knowing that Dave is at home, in contrast to knowledge-how (know-how) expressing practical competence. However, despite the intense study of knowledge in epistemology, the disagreements about its precise nature are still both numerous and deep. Some of those disagreements arise from the fact that different theorists have different goals in mind: some try to provide a practically useful definition by delineating its most salient feature or features, while others aim at a theoretically precise definition of its necessary and sufficient conditions. Further disputes are caused by methodological differences: some theorists start from abstract and general intuitions or hypotheses, others from concrete and specific cases, and still others from linguistic usage. Additional disagreements arise concerning the standards of knowledge: whether knowledge is something rare that demands very high standards, like infallibility, or whether it is something common that requires only the possession of some evidence.
One definition that many philosophers consider to be standard, and that has been discussed since ancient Greek philosophy, is justified true belief (JTB). This implies that knowledge is a mental state and that it is not possible to know something false. There is widespread agreement among analytic philosophers that knowledge is a form of true belief. The idea that justification is an additionally required component is due to the intuition that true beliefs based on superstition, lucky guesses, or erroneous reasoning do not constitute knowledge. In this regard, knowledge is more than just being right about something. The source of most disagreements regarding the nature of knowledge concerns what more is needed. According to the standard philosophical definition, it is justification. The original account understands justification internalistically as another mental state of the person, like a perceptual experience, a memory, or a second belief. This additional mental state supports the known proposition and constitutes a reason or evidence for it. However, some modern versions of the standard philosophical definition use an externalistic conception of justification instead. Many such views affirm that a belief is justified if it was produced in the right way, for example, by a reliable cognitive process.
The justified-true-belief definition of knowledge came under severe criticism in the second half of the 20th century, mainly due to a series of counterexamples given by Edmund Gettier. Most of these examples aim to illustrate cases in which a justified true belief does not amount to knowledge because its justification is not relevant to its truth. This is often termed epistemic luck since it is just a fortuitous coincidence that the justified belief is also true. A few epistemologists have concluded from these counterexamples that the JTB definition of knowledge is deeply flawed and have sought a radical reconception of knowledge. However, many theorists still agree that the JTB definition is on the right track and have proposed more moderate responses to deal with the suggested counterexamples. Some hold that modifying one's conception of justification is sufficient to avoid them. Another approach is to include an additional requirement besides justification. On this view, being a justified true belief is a necessary but not a sufficient condition of knowledge. A great variety of such criteria has been suggested. They usually manage to avoid many of the known counterexamples but they often fall prey to newly proposed cases. It has been argued that, in order to circumvent all Gettier cases, the additional criterion needs to exclude epistemic luck altogether. However, this may require the stipulation of a very high standard of knowledge: that nothing less than infallibility is needed to exclude all forms of luck. The defeasibility theory of knowledge is one example of a definition based on a fourth criterion besides justified true belief. The additional requirement is that there is no truth that would constitute a defeating reason of the belief if the person knew about it. Other alternatives to the JTB definition are reliabilism, which holds that knowledge has to be produced by reliable processes, causal theories, which require that the known fact caused the knowledge, and virtue theories, which identify knowledge with the manifestation of intellectual virtues.
Not all forms of knowledge are propositional, and various definitions of different forms of non-propositional knowledge have also been proposed. But among analytic philosophers this field of inquiry is less active and characterized by less controversy. Someone has practical knowledge or know-how if they possess the corresponding competence or ability. Knowledge by acquaintance constitutes a relation not to a proposition but to an object. It is defined as familiarity with its object based on direct perceptual experience of it.
General characteristics and disagreements
Definitions of knowledge try to describe the essential features of knowledge. This includes clarifying the distinction between knowing something and not knowing it, for example, pointing out what is the difference between knowing that smoking causes cancer and not knowing this. Sometimes the expressions "conception of knowledge", "theory of knowledge", and "analysis of knowledge" are used as synonyms. Various general features of knowledge are widely accepted. For example, it can be understood as a form of cognitive success or epistemic contact with reality, and propositional knowledge may be characterized as "believing a true proposition in a good way". However, such descriptions are too vague to be very useful without further clarifications of what "cognitive success" means, what type of success is involved, or what constitutes "good ways of believing".
The disagreements about the nature of knowledge are both numerous and deep. Some of these disagreements stem from the fact that there are different ways of defining a term, both in relation to the goal one intends to achieve and concerning the method used to achieve it. These difficulties are further exacerbated by the fact that the term "knowledge" has historically been used for a great range of diverse phenomena. These phenomena include theoretical know-that, as in knowing that Paris is in France, practical know-how, as in knowing how to swim, and knowledge by acquaintance, as in personally knowing a celebrity. It is not clear that there is one underlying essence to all of these forms. For this reason, most definitions restrict themselves either explicitly or implicitly to knowledge-that, also termed "propositional knowledge", which is seen as the most paradigmatic type of knowledge.
Even when restricted to propositional knowledge, the differences between the various definitions are usually substantial. For this reason, the choice of one's conception of knowledge matters for questions like whether a particular mental state constitutes knowledge, whether knowledge is fairly common or quite rare, and whether there is knowledge at all. The problem of the definition and analysis of knowledge has been a subject of intense discussion within epistemology both in the 20th and the 21st century. The branch of philosophy studying knowledge is called epistemology.
Goals
An important reason for these disagreements is that different theorists often have very different goals in mind when trying to define knowledge. Some definitions are based mainly on the practical concern of being able to find instances of knowledge. For such definitions to be successful, it is not required that they identify all and only its necessary features. In many cases, easily identifiable contingent features can even be more helpful for the search than precise but complicated formulas. On the theoretical side, on the other hand, there are so-called real definitions that aim to grasp the term's essence in order to understand its place on the conceptual map in relation to other concepts. Real definitions are preferable on the theoretical level since they are very precise. However, it is often very hard to find a real definition that avoids all counterexamples. Real definitions usually presume that knowledge is a natural kind, like "human being" or "water" and unlike "candy" or "large plant". Natural kinds are clearly distinguishable on the scientific level from other phenomena. As a natural kind, knowledge may be understood as a specific type of mental state. In this regard, the term "analysis of knowledge" is used to indicate that one seeks different components that together make up propositional knowledge, usually in the form of its essential features or as the conditions that are individually necessary and jointly sufficient. This may be understood in analogy to a chemist analyzing a sample to discover its chemical composition in the form of the elements involved in it. In most cases, the proposed features of knowledge apply to many different instances. However, the main difficulty for such a project is to avoid all counterexamples, i.e. there should be no instances that escape the analysis, not even in hypothetical thought experiments. By trying to avoid all possible counterexamples, the analysis of knowledge aims at arriving at a necessary truth about knowledge.
However, the assumption that knowledge is a natural kind that has precisely definable criteria is not generally accepted and some hold that the term "knowledge" refers to a merely conventional accomplishment that is artificially constituted and approved by society. In this regard, it may refer to a complex situation involving various external and internal aspects. This distinction is significant because if knowledge is not a natural kind then attempts to provide a real definition would be futile from the start even though definitions based merely on how the word is commonly used may still be successful. However, the term would not have much general scientific importance except for linguists and anthropologists studying how people use language and what they value. Such usage may differ radically from one culture to another. Many epistemologists have accepted, often implicitly, that knowledge has a real definition. But the inability to find an acceptable real definition has led some to understand knowledge in more conventionalist terms.
Methods
Besides these differences concerning the goals of defining knowledge, there are also important methodological differences regarding how one arrives at and justifies one's definition. One approach simply consists in looking at various paradigmatic cases of knowledge to determine what they all have in common. However, this approach is faced with the problem that it is not always clear whether knowledge is present in a particular case, even in paradigmatic cases. This leads to a form of circularity, known as the problem of the criterion: criteria of knowledge are needed to identify individual cases of knowledge and cases of knowledge are needed to learn what the criteria of knowledge are. Two approaches to this problem have been suggested: methodism and particularism. Methodists put their faith in their pre-existing intuitions or hypotheses about the nature of knowledge and use them to identify cases of knowledge. Particularists, on the other hand, hold that our judgments about particular cases are more reliable and use them to arrive at the general criteria. A closely related method, based more on the linguistic level, is to study how the word "knowledge" is used. However, there are numerous meanings ascribed to the term, many of which correspond to the different types of knowledge. This introduces the additional difficulty of first selecting the expressions belonging to the intended type before analyzing their usage.
Standards of knowledge
A further source of disagreement and difficulty in defining knowledge is posed by the fact that there are many different standards of knowledge. The term "standard of knowledge" refers to how high the requirements are for ascribing knowledge to someone. To claim that a belief amounts to knowledge is to attribute a special epistemic status to this belief. But exactly what status this is, i.e. what standard a true belief has to pass to amount to knowledge, may differ from context to context. While some theorists use very high standards, like infallibility or absence of cognitive luck, others use very low standards by claiming that mere true belief is sufficient for knowledge, that justification is not necessary. For example, according to some standards, having read somewhere that the solar system has eight planets is a sufficient justification for knowing this fact. According to others, a deep astronomical understanding of the relevant measurements and the precise definition of "planet" is necessary. In the history of philosophy, various theorists have set an even higher standard and assumed that certainty or infallibility is necessary. For example, this is the approach of René Descartes, who aims to find absolutely certain or indubitable first principles to act as the foundation of all subsequent knowledge. However, this outlook is uncommon in the contemporary approach. Contextualists have argued that the standards depend on the context in which the knowledge claim is made. For example, in a low-stake situation, a person may know that the solar system has eight planets, even though the same person lacks this knowledge in a high-stake situation.
The question of the standards of knowledge is highly relevant to how common or rare knowledge is. According to the standards of everyday discourse, ordinary cases of perception and memory lead to knowledge. In this sense, even small children and animals possess knowledge. But according to a more rigorous conception, they do not possess knowledge since much higher standards need to be fulfilled. The standards of knowledge are also central to the question of whether skepticism, i.e. the thesis that we have no knowledge at all, is true. If very high standards are used, like infallibility, then skepticism becomes plausible. In this case, the skeptic only has to show that any putative knowledge state lacks absolute certainty, that while the actual belief is true, it could have been false. However, the more these standards are weakened to how the term is used in everyday language, the less plausible skepticism becomes.
Justified true belief
Many philosophers define knowledge as justified true belief (JTB). This definition characterizes knowledge in relation to three essential features: S knows that p if and only if (1) p is true, (2) S believes that p, and (3) this belief is justified. A version of this definition was considered and rejected by Socrates in Plato's Theaetetus. Today, there is wide, though not universal, agreement among analytic philosophers that the first two criteria are correct, i.e., that knowledge implies true belief. Most of the controversy concerns the role of justification: what it is, whether it is needed, and what additional requirements it has to fulfill.
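The three conditions can be summarized schematically. In the following rendering, the predicates K, B, and J (for knowledge, belief, and justification) are notational conveniences introduced here rather than symbols used in the classical formulations:

$$K(S, p) \iff p \;\land\; B(S, p) \;\land\; J(S, p)$$

Much of the contemporary debate, discussed below, can then be framed as asking whether the right-hand side of this equivalence is genuinely sufficient for the left.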
Truth
There is wide agreement that knowledge implies truth. In this regard, one cannot know things that are not true even if the corresponding belief is justified and rational. As an example, nobody can know that Hillary Clinton won the 2016 US Presidential election, since this event did not happen. This reflects the idea that knowledge is a relation through which a person stands in cognitive contact with reality. This contact implies that the known proposition is true.
Nonetheless, some theorists have also proposed that truth may not always be necessary for knowledge. In this regard, a justified belief that is widely held within a community may be seen as knowledge even if it is false. Another source of doubt stems from cases in everyday discourse where the term is used to express a strong conviction. For example, a diehard fan of Hillary Clinton might claim that they knew she would win. But such examples have not convinced many theorists, since the claim is probably better understood as an exaggeration than as an actual knowledge claim. Such doubts are minority opinions and most theorists accept that knowledge implies truth.
Belief
Knowledge is usually understood as a form of belief: to know something implies that one believes it. This means that the agent accepts the proposition in question. However, not all theorists agree with this. This rejection is often motivated by contrasts found in ordinary language suggesting that the two are mutually exclusive, as in "I do not believe that; I know it." Some see this difference in the strength of the agent's conviction by holding that belief is a weak affirmation while knowledge entails a strong conviction. However, the more common approach to such expressions is to understand them not literally but through paraphrases, for example, as "I do not merely believe that; I know it." This way, the expression is compatible with seeing knowledge as a form of belief. A more abstract counterargument defines "believing" as "thinking with assent" or as a "commitment to something being true" and goes on to show that this applies to knowledge as well. A different approach, sometimes termed "knowledge first", upholds the difference between belief and knowledge based on the idea that knowledge is unanalyzable and therefore cannot be understood in terms of the elements that compose it. But opponents of this view may simply reject it by denying that knowledge is unanalyzable. So despite the mentioned arguments, there is still wide agreement that knowledge is a form of belief.
A few epistemologists hold that true belief by itself is sufficient for knowledge. However, this view is not very popular and most theorists accept that merely true beliefs do not constitute knowledge. This is based on various counterexamples, in which a person holds a true belief in virtue of faulty reasoning or a lucky guess.
Justification
The third component of the JTB definition is justification. It is based on the idea that having a true belief is not sufficient for knowledge, that knowledge implies more than just being right about something. So beliefs based on dogmatic opinions, blind guesses, or erroneous reasoning do not constitute knowledge even if they are true. For example, if someone believes that Machu Picchu is in Peru because both expressions end with the letter u, this true belief does not constitute knowledge. In this regard, a central question in epistemology concerns the additional requirements for turning a true belief into knowledge. There are many suggestions and deep disagreements within the academic literature about what these additional requirements are. A common approach is to affirm that the additional requirement is justification. So true beliefs that are based on good justification constitute knowledge, as when the belief about Machu Picchu is based on the individual's vivid recent memory of traveling through Peru and visiting Machu Picchu there. This line of thought has led many theorists to the conclusion that knowledge is nothing but true belief that is justified.
However, it has been argued that some knowledge claims in everyday discourse do not require justification. For example, when a teacher is asked how many of his students knew that Vienna is the capital of Austria in their last geography test, he may just cite the number of correct responses given without concern for whether these responses were based on justified beliefs. Some theorists characterize this type of knowledge as "lightweight knowledge" in order to exclude it from their discussion of knowledge.
A further question in this regard is how strong the justification needs to be for a true belief to amount to knowledge. When the agent has only weak evidence for a belief, it may be reasonable to hold that belief even though no knowledge is involved. Some theorists hold that the justification has to be certain or infallible. This means that the justification of the belief guarantees the belief's truth, similar to how in a deductive argument, the truth of its premises ensures the truth of its conclusion. However, this view severely limits the extension of knowledge to very few beliefs, if any. Such a conception of justification threatens to lead to a full-blown skepticism denying that we know anything at all. The more common approach in the contemporary discourse is to allow fallible justification that makes the justified belief rationally convincing without ensuring its truth. This is similar to how ampliative arguments work, in contrast to deductive arguments. A challenge for fallibilism is that the strength of justification comes in degrees: the evidence may make it somewhat likely, quite likely, or extremely likely that the belief is true. This poses the question of how strong the justification needs to be in the case of knowledge. The required degree may also depend on the context: knowledge claims in low-stakes situations, such as among drinking buddies, have lower standards than knowledge claims in high-stakes situations, such as among experts in the academic discourse.
Internalism and externalism
Besides the issue about the strength of justification, there is also the more general question about its nature. Theories of justification are often divided into internalism and externalism depending on whether only factors internal to the subject are responsible for justification. Commonly, an internalist conception is defended. This means that internal mental states of the subject justify beliefs. These states are usually understood as reasons or evidence possessed, like perceptual experiences, memories, rational intuition, or other justified beliefs.
One particular form of this position is evidentialism, which bases justification exclusively on the possession of evidence. It can be expressed by the claim that "Person S is justified in believing proposition p at time t if and only if S's evidence for p at t supports believing p". Some philosophers stipulate as an additional requirement to the possession of evidence that the belief is actually based on this evidence, i.e. that there is some kind of mental or causal link between the evidence and the belief. This is often referred to as "doxastic justification". In contrast to this, having sufficient evidence for a true belief but coming to hold this belief based on superstition is a case of mere "propositional justification". Such a belief may not amount to knowledge even though the relevant evidence is possessed. A particularly strict version of internalism is access internalism. It holds that only states introspectively available to the subject's experience are relevant to justification. This means that deep unconscious states cannot act as justification. A closely related issue concerns the question of the internal structure of these states or how they are linked to each other. According to foundationalists, some mental states constitute basic reasons that can justify without being themselves in need of justification. Coherentists defend a more egalitarian position: what matters is not a privileged epistemic status of some special states but the relation to all other states. This means that a belief is justified if it fits into the person's full network of beliefs as a coherent part.
While internalist conceptions of justification have traditionally been dominant, various problems with internalism have led some contemporary philosophers to modify the internalist account of knowledge by using externalist conceptions of justification. Externalists include factors external to the person as well, such as the existence of a causal relation to the believed fact or to a reliable belief-formation process. A prominent theory in this field is reliabilism, the theory that a true belief is justified if it was brought about by a reliable cognitive process that is likely to result in true beliefs. On this view, a true belief based on standard perceptual processes or good reasoning constitutes knowledge. But this is not the case if wishful thinking or emotional attachment is the cause.
However, not all externalists understand their theories as versions of the JTB account of knowledge. Some theorists defend an externalist conception of justification while others use a narrow notion of "justification" and understand externalism as implying that justification is not required for knowledge, for example, that the feature of being produced by a reliable process is not a form of justification but its surrogate. The same ambiguity is also found in the causal theory of knowledge.
In ancient philosophy
In Plato's Theaetetus, Socrates considers a number of theories as to what knowledge is, first excluding merely true belief as an adequate account. For example, an ill person with no medical training, but with a generally optimistic attitude, might believe that he will recover from his illness quickly. Nevertheless, even if this belief turned out to be true, the patient would not have known that he would get well since his belief lacked justification. The last account that Plato considers is that knowledge is true belief "with an account" that explains or defines it in some way. According to Edmund Gettier, the view that Plato is describing here is that knowledge is justified true belief. The truth of this view would entail that in order to know that a given proposition is true, one must not only believe the relevant true proposition, but must also have a good reason for doing so. One implication of this would be that no one would gain knowledge just by believing something that happened to be true.
Gettier problem and cognitive luck
The JTB definition of knowledge, as mentioned above, was already rejected in Plato's Theaetetus. It came under severe criticism in the 20th century, mainly due to a series of counterexamples given by Edmund Gettier. This is commonly known as the Gettier problem and includes cases in which a justified belief is true because of lucky circumstances, i.e. where the person's reason for the belief is irrelevant to its truth. A well-known example involves a person driving along a country road lined with many barn facades. The driver does not know that most of these structures are mere facades and finally stops in front of the only real barn. The idea of this case is that they have a justified true belief that the object in front of them is a barn even though this does not constitute knowledge. The reason is that it was just a lucky coincidence that they stopped here and not in front of one of the many fake barns, in which case they would not have been able to tell the difference either.
This and similar counterexamples aim to show that justification alone is not sufficient, i.e. that there are some justified true beliefs that do not amount to knowledge. A common explanation of such cases is based on cognitive or epistemic luck. The idea is that it is a lucky coincidence or a fortuitous accident that the justified belief is true. So the justification is in some sense faulty, not because it relies on weak evidence, but because the justification is not responsible for the belief's truth. Various theorists have responded to this problem by talking about warranted true belief instead. In this regard, warrant implies that the corresponding belief is not accepted on the basis of mere cognitive luck or accident. However, not everyone agrees that this and similar cases actually constitute counterexamples to the JTB definition: some have argued that, in these cases, the agent actually knows the fact in question, e.g. that the driver in the fake barn example knows that the object in front of them is a barn despite the luck involved. A similar defense is based on the idea that to insist on the absence of cognitive luck leads to a form of infallibilism about justification, i.e. that justification has to guarantee the belief's truth. However, most knowledge claims are not that strict and allow instead that the justification involved may be fallible.
The Gettier problem
Edmund Gettier is best known for his 1963 paper entitled "Is Justified True Belief Knowledge?", which called into question the common conception of knowledge as justified true belief. In just two and a half pages, Gettier argued that there are situations in which one's belief may be justified and true, yet fail to count as knowledge. That is, Gettier contended that while justified belief in a true proposition is necessary for that proposition to be known, it is not sufficient.
According to Gettier, there are certain circumstances in which one does not have knowledge, even when all of the above conditions are met. Gettier proposed two thought experiments, which have become known as Gettier cases, as counterexamples to the classical account of knowledge. One of the cases involves two men, Smith and Jones, who are awaiting the results of their applications for the same job. Each man has ten coins in his pocket. Smith has excellent reasons to believe that Jones will get the job (the head of the company told him); and furthermore, Smith knows that Jones has ten coins in his pocket (he recently counted them). From this Smith infers: "The man who will get the job has ten coins in his pocket." However, Smith is unaware that he also has ten coins in his own pocket. Furthermore, it turns out that Smith, not Jones, is going to get the job. While Smith has strong evidence to believe that Jones will get the job, he is wrong. Smith therefore has a justified true belief that the man who will get the job has ten coins in his pocket; however, according to Gettier, Smith does not know that the man who will get the job has ten coins in his pocket, because Smith's belief is "...true by virtue of the number of coins in Jones's pocket, while Smith does not know how many coins are in Smith's pocket, and bases his belief... on a count of the coins in Jones's pocket, whom he falsely believes to be the man who will get the job." These cases fail to be knowledge because the subject's belief is justified, but only happens to be true by virtue of luck. In other words, he made the correct choice (believing that the man who will get the job has ten coins in his pocket) for the wrong reasons. Gettier then goes on to offer a second similar case, providing the means by which the specifics of his examples can be generalized into a broader problem for defining knowledge in terms of justified true belief.
There have been various notable responses to the Gettier problem. Typically, they have involved substantial attempts to provide a new definition of knowledge that is not susceptible to Gettier-style objections, either by providing an additional fourth condition that justified true beliefs must meet to constitute knowledge, or proposing a completely new set of necessary and sufficient conditions for knowledge. While there have been far too many published responses for all of them to be mentioned, some of the most notable responses are discussed below.
Responses and alternative definitions
The problems with the JTB definition of knowledge have provoked diverse responses. Strictly speaking, most contemporary philosophers deny the JTB definition of knowledge, at least in its exact form. Edmund Gettier's counterexamples were very influential in shaping this contemporary outlook. They usually involve some form of cognitive luck whereby the justification is not responsible or relevant to the belief being true. Some responses stay within the standard definition and try to make smaller modifications to mitigate the problems, for example, concerning how justification is defined. Others see the problems as insurmountable and propose radical new conceptions of knowledge, many of which do not require justification at all. Between these two extremes, various epistemologists have settled for a moderate departure from the standard definition. They usually accept that it is a step in the right direction: justified true belief is necessary for knowledge. However, they deny that it is sufficient. This means that knowledge always implies justified true belief but that not every justified true belief constitutes knowledge. Instead, they propose an additional fourth criterion needed for sufficiency. The resulting definitions are sometimes referred to as JTB+X accounts of knowledge. A closely related approach is to replace justification with warrant, which is then defined as justification together with whatever else is needed to amount to knowledge.
The goal of introducing an additional criterion is to avoid counterexamples in the form of Gettier cases. Numerous suggestions for such a fourth feature have been made, for example, the requirement that the belief is not inferred from a falsehood. While alternative accounts are often successful at avoiding many specific cases, it has been argued that most of them fail to avoid all counterexamples because they leave open the possibility of cognitive luck. So while introducing an additional criterion may help exclude various known examples of cognitive luck, the resulting definition is often still susceptible to new cases. The only way to avoid this problem is to ensure that the additional criterion excludes cognitive luck. This is often understood in the sense that the presence of the feature has to entail the belief's truth. For if it is possible for a belief to have this feature without being true, then cases of cognitive luck remain possible in which a belief has this feature and happens to be true, though not in virtue of this feature. The problem is avoided by defining knowledge as non-accidentally true belief. A similar approach introduces an anti-luck condition: the belief is not true merely by luck. But it is not clear how useful these definitions are unless a more precise definition of "non-accidental" or "absence of luck" can be provided. This vagueness makes the application to non-obvious cases difficult. A closely related and more precise definition requires that the belief is safely formed, i.e. that the process responsible would not have produced the corresponding belief if it was not true. This means that, whatever the given situation is like, this process tracks the fact. Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitates its truth.
Defeasibility theory
Defeasibility theories of knowledge introduce an additional condition based on defeasibility in order to avoid the different problems faced by the JTB accounts. They emphasize that, besides having a good reason for holding the belief, it is also necessary that there is no defeating evidence against it. This is usually understood in a very wide sense: a justified true belief does not amount to knowledge when there is a truth that would constitute a defeating reason against the belief if the person knew about it. This wide sense is necessary to avoid Gettier cases of cognitive luck. So in the barn example above, it explains that the belief does not amount to knowledge because, if the person were aware of the prevalence of fake barns in this area, this awareness would act as a defeater of the belief that this one particular building is a real barn. In this way, the defeasibility theory can identify beliefs that are justified yet only accidentally true as unwarranted. One of its problems is that it excludes too many beliefs from knowledge. This concerns specifically misleading defeaters, i.e. truths that would give the false impression to the agent that one of their reasons was defeated. According to Keith Lehrer, cases of cognitive luck can be avoided by requiring that the justification does not depend on any false statement. On his view, "S knows that p if and only if (i) it is true that p, (ii) S accepts that p, (iii) S is justified in accepting that p, and (iv) S is justified in accepting p in some way that does not depend on any false statement".
Reliabilism and causal theory
Reliabilistic and causal theories are forms of externalism. Some versions only modify the JTB definition of knowledge by reconceptualizing what justification means. Others constitute further departures by holding that justification is not necessary, that reliability or the right causal connections act as replacements of justification. According to reliabilism, a true belief constitutes knowledge if it was produced by a reliable process or method. Putative examples of reliable processes are regular perception under normal circumstances and the scientific method. Defenders of this approach affirm that reliability acts as a safeguard against lucky coincidence. Virtue reliabilism is a special form of reliabilism in which intellectual virtues, such as properly functioning cognitive faculties, are responsible for producing knowledge.
Reliabilists have struggled to give an explicit and plausible account of when a process is reliable. One approach defines it through a high success rate: a belief-forming process is reliable within a certain area if it produces a high ratio of true beliefs in this area. Another approach understands reliability in terms of how the process would fare in counterfactual scenarios. Arguments against both of these definitions have been presented. A further criticism is based on the claim that reliability is not sufficient in cases where the agent is not in possession of any reasons justifying the belief even though the responsible process is reliable.
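The success-rate approach mentioned above admits a simple schematic rendering. In the following sketch, the process M, the domain D, and the threshold θ are hypothetical placeholders introduced for illustration, not parameters drawn from any particular reliabilist theory:

$$\text{reliable}(M, D) \iff \frac{\lvert\{\text{true beliefs produced by } M \text{ in } D\}\rvert}{\lvert\{\text{beliefs produced by } M \text{ in } D\}\rvert} \;\geq\; \theta$$

One noted difficulty is visible in the sketch itself: nothing in the definition settles how the domain D or the threshold θ should be chosen, while the counterfactual variant replaces the actual ratio with the ratio the process would achieve in relevant alternative scenarios.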
The causal theory of knowledge holds that the believed fact has to cause the true belief in the right way for the belief to amount to knowledge. For example, the belief that there is a bird in the tree may constitute knowledge if the bird and the tree caused the corresponding perception and belief. The causal connection helps to avoid some cases of cognitive luck since the belief is not accidental anymore. However, it does not avoid all of them, as can be seen in the fake barn example above, where the perception of the real barn caused the belief about the real barn even though it was a lucky coincidence. Another shortcoming of the causal theory is that various beliefs are knowledge even though a causal connection to the represented facts does not exist or may not be possible. This is the case for beliefs in mathematical propositions, like that "2 + 2 = 4", and in certain general propositions, like that "no elephant is smaller than a kitten".
Virtue-theoretic definition
Virtue-theoretic approaches try to avoid the problem of cognitive luck by seeing knowledge as a manifestation of intellectual virtues. On this view, virtues are properties of a person that aim at some good. In the case of intellectual virtues, the principal good is truth. In this regard, Linda Zagzebski defines knowledge as "cognitive contact with reality arising out of acts of intellectual virtue". A closely related approach understands intellectual virtues in analogy to the successful manifestation of skills. This is helpful to clarify how cognitive luck is avoided. For example, an archer may hit the bull's eye due to luck or because of their skill. Based on this line of thought, Ernest Sosa defines knowledge as a belief that "is true in a way manifesting, or attributable to, the believer's skill".
"No false premises" response
One of the earliest suggested replies to Gettier, and perhaps the most intuitive way to respond to the Gettier problem, is the "no false premises" response, sometimes also called the "no false lemmas" response. Most notably, this reply was defended by David Malet Armstrong in his 1973 book, Belief, Truth, and Knowledge. The basic form of the response is to assert that the person who holds the justified true belief (for instance, Smith in Gettier's first case) made the mistake of inferring a true belief (e.g. "The person who will get the job has ten coins in his pocket") from a false belief (e.g. "Jones will get the job"). Proponents of this response therefore propose that we add, to the three conditions of the JTB account, a fourth condition: the justified true belief must not have been inferred from a false belief.
This reply to the Gettier problem is simple, direct, and appears to isolate what goes wrong in forming the relevant beliefs in Gettier cases. However, the general consensus is that it fails. This is because while the original formulation by Gettier includes a person who infers a true belief from a false belief, there are many alternate formulations in which this is not the case. Take, for instance, a case where an observer sees what appears to be a dog walking through a park and forms the belief "There is a dog in the park". In fact, it turns out that the observer is not looking at a dog at all, but rather a very lifelike robotic facsimile of a dog. However, unbeknownst to the observer, there is in fact a dog in the park, albeit one standing behind the robotic facsimile of a dog. Since the belief "There is a dog in the park" does not involve a faulty inference, but is instead formed as the result of misleading perceptual information, there is no inference made from a false premise. It therefore seems that while the observer does in fact have a true belief that her perceptual experience provides justification for holding, she does not actually know that there is a dog in the park. Instead, she just seems to have formed a "lucky" justified true belief.
Infallibilist response
One less common response to the Gettier problem is defended by Richard Kirkham, who has argued that the only definition of knowledge that could ever be immune to all counterexamples is the infallibilist definition. To qualify as an item of knowledge, goes the theory, a belief must not only be true and justified, the justification of the belief must necessitate its truth. In other words, the justification for the belief must be infallible.
While infallibilism is indeed an internally coherent response to the Gettier problem, it is incompatible with our everyday knowledge ascriptions. For instance, as the Cartesian skeptic will point out, all of my perceptual experiences are compatible with a skeptical scenario in which I am completely deceived about the existence of the external world, in which case most (if not all) of my beliefs would be false. The typical conclusion to draw from this is that it is possible to doubt most (if not all) of my everyday beliefs, meaning that if I am indeed justified in holding those beliefs, that justification is not infallible. For the justification to be infallible, my reasons for holding my everyday beliefs would need to completely exclude the possibility that those beliefs were false. Consequently, if a belief must be infallibly justified in order to constitute knowledge, then it must be the case that we are mistaken in most (if not all) instances in which we claim to have knowledge in everyday situations. While it is indeed possible to bite the bullet and accept this conclusion, most philosophers find it implausible to suggest that we know nothing or almost nothing, and therefore reject the infallibilist response as collapsing into radical skepticism.
Tracking condition
Robert Nozick has offered a definition of knowledge according to which S knows that P if and only if:
P is true;
S believes that P;
if P were false, S would not believe that P;
if P were true, S would believe that P.
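Using the box-arrow symbol for the counterfactual conditional, a notational convention borrowed here from standard counterfactual logic rather than from Nozick's own text, the four conditions can be compressed into a single schema:

$$K(S, P) \iff P \;\land\; B(S, P) \;\land\; \big(\neg P \;\Box\!\!\rightarrow\; \neg B(S, P)\big) \;\land\; \big(P \;\Box\!\!\rightarrow\; B(S, P)\big)$$

The third conjunct is commonly called the sensitivity condition and the fourth the adherence condition; together they are said to make the belief "track the truth".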
Nozick argues that the third of these conditions serves to address cases of the sort described by Gettier. Nozick further claims this condition addresses a case of the sort described by D.M. Armstrong: A father believes his daughter is innocent of committing a particular crime, both because of faith in his baby girl and (now) because he has seen presented in the courtroom a conclusive demonstration of his daughter's innocence. His belief via the method of the courtroom satisfies the four subjunctive conditions, but his faith-based belief does not. If his daughter were guilty, he would still believe her innocence, on the basis of faith in his daughter; this would violate the third condition.
The British philosopher Simon Blackburn has criticized this formulation by suggesting that we do not want to accept as knowledge beliefs which, while they "track the truth" (as Nozick's account requires), are not held for appropriate reasons. In addition to this, externalist accounts of knowledge, such as Nozick's, are often forced to reject closure in cases where it is intuitively valid.
An account similar to Nozick's has also been offered by Fred Dretske, although his view focuses more on relevant alternatives that might have obtained if things had turned out differently. Views of both the Nozick variety and the Dretske variety have faced serious problems suggested by Saul Kripke.
Knowledge-first response
Timothy Williamson has advanced a theory of knowledge according to which knowledge is not justified true belief plus some extra conditions, but is itself primary. In his book Knowledge and its Limits, Williamson argues that the concept of knowledge cannot be broken down into a set of other concepts through analysis: instead, it is sui generis. Thus, according to Williamson, justification, truth, and belief are necessary but not sufficient for knowledge. Williamson is also known for being one of the few philosophers who take knowledge to be a mental state; most epistemologists assert that belief (as opposed to knowledge) is a mental state. As such, Williamson's claim has been seen to be highly counterintuitive.
Merely true belief
In his 1991 paper, "Knowledge is Merely True Belief", Crispin Sartwell argues that justification is an unnecessary criterion for knowledge. He argues that common counterexample cases of "lucky guesses" are not in fact beliefs at all, as "no belief stands in isolation... the claim that someone believes something entails that that person has some degree of serious commitment to the claim." He gives the example of a mathematician working on a problem who subconsciously, in a "flash of insight", sees the answer, but is unable to comprehensively justify his belief, and says that in such a case the mathematician still knows the answer, despite not being able to give a step-by-step explanation of how he got to it. He also argues that if beliefs require justification to constitute knowledge, then foundational beliefs can never be knowledge, and, as these are the beliefs upon which all our other beliefs depend for their justification, we can thus never have knowledge at all.
Nyaya philosophy
Nyaya is one of the six traditional schools of Indian philosophy with a particular interest in epistemology. The Indian philosopher B.K. Matilal drew on the Navya-Nyāya fallibilist tradition to respond to the Gettier problem. Nyaya theory distinguishes between knowing p and knowing that one knows p: these are different events, with different causal conditions. The second level is a sort of implicit inference that usually follows immediately upon the episode of knowing p (knowledge simpliciter). The Gettier case is examined by referring to a view of Gangesha Upadhyaya (late 12th century), who takes any true belief to be knowledge; thus a true belief acquired through a wrong route may just be regarded as knowledge simpliciter on this view. The question of justification arises only at the second level, when one considers the knowledge-hood of the acquired belief. Initially, there is no uncertainty, so the acquired belief stands as a true belief. But at the very next moment, when the hearer is about to embark upon the venture of knowing whether he knows p, doubts may arise. "If, in some Gettier-like cases, I am wrong in my inference about the knowledge-hood of the given occurrent belief (for the evidence may be pseudo-evidence), then I am mistaken about the truth of my belief—and this is in accordance with Nyaya fallibilism: not all knowledge-claims can be sustained."
Other definitions
According to J. L. Austin, to know just means to be able to make correct assertions about the subject in question. On this pragmatic view, the internal mental states of the knower do not matter.
Philosopher Barry Allen also downplayed the role of mental states in knowledge and defined knowledge as "superlative artifactual performance", that is, exemplary performance with artifacts, including language but also technological objects like bridges, satellites, and diagrams. Allen criticized typical epistemology for its "propositional bias" (treating propositions as prototypical knowledge), its "analytic bias" (treating knowledge as prototypically mental or conceptual), and its "discursive bias" (treating knowledge as prototypically discursive). He considered knowledge to be too diverse to characterize in terms of necessary and sufficient conditions. He claimed not to be substituting knowledge-how for knowledge-that, but instead proposing a definition that is more general than both. For Allen, knowledge is "deeper than language, different from belief, more valuable than truth".
A different approach characterizes knowledge in relation to the role it plays, for example, regarding the reasons it provides or constitutes for doing or thinking something. In this sense, it can be understood as what entitles the agent to assert a fact, to use this fact as a premise when reasoning, or to act as a trustworthy informant concerning this fact. This definition has been adopted in some argumentation theory.
Paul Silva's "awareness first" epistemology posits that the common core of knowledge is awareness, providing a definition that accounts for both beliefless knowledge and knowledge grounded in belief.
Within anthropology, knowledge is often defined in a very broad sense as equivalent to understanding or culture. This includes the idea that knowledge consists in the affirmation of meaning contents and depends on a substrate, such as a brain. Knowledge characterizes social groups in the sense that different individuals belonging to the same social niche tend to be very similar concerning what they know and how they organize information. This topic is of specific interest to the subfield known as the anthropology of knowledge, which uses this and similar definitions to study how knowledge is reproduced and how it changes on the social level in different cultural contexts.
Non-propositional knowledge
Propositional knowledge, also termed factual knowledge or knowledge-that, is the most paradigmatic form of knowledge in analytic philosophy, and most definitions of knowledge in philosophy have this form in mind. It refers to the possession of certain information. The distinction to other types of knowledge is often drawn based on the differences between the linguistic formulations used to express them. It is termed knowledge-that since it can usually be expressed using a that-clause, as in "I know that Dave is at home". In everyday discourse, the term "knowledge" can also refer to various other phenomena as forms of non-propositional knowledge. Some theorists distinguish knowledge-wh from knowledge-that. Knowledge-wh is expressed using a wh-clause, such as knowing why smoke causes cancer or knowing who killed John F. Kennedy. However, the more common approach is to understand knowledge-wh as a type of knowledge-that since the corresponding expressions can usually be paraphrased using a that-clause.
A clearer contrast is between knowledge-that and knowledge-how (know-how). Know-how is also referred to as practical knowledge or ability knowledge. It is expressed in formulations like "I know how to ride a bike." All forms of practical knowledge involve some type of competence, i.e., having the ability to do something. To know how to play the guitar, for example, is to have the competence to play it, and to know the multiplication table is to be able to recite the products of numbers. For this reason, know-how may be defined as having the corresponding competence, skills, or abilities. Some forms of know-how include knowledge-that as well, and some theorists even argue that practical and propositional knowledge are of the same type. However, propositional knowledge is usually reserved for humans while practical knowledge is more common in the animal kingdom. For example, an ant knows how to walk but it presumably does not know that it is currently walking in someone's kitchen. The more common view is, therefore, to see knowledge-how and knowledge-that as two distinct types of knowledge.
Another often-discussed alternative type of knowledge is knowledge by acquaintance. It is defined as a direct familiarity with an individual, often with a person, and only arises if one has met this individual personally. In this regard, it constitutes a relation not to a proposition but to an object. Acquaintance implies that one has had a direct perceptual experience with the object of knowledge and is therefore familiar with it. Bertrand Russell contrasts it with knowledge by description'', which refers to knowledge of things that the subject has not immediately experienced, such as learning through a documentary about a country one has not yet visited. Knowledge by acquaintance can be expressed using a direct object, such as, "I know Dave." It differs in this regard from knowledge-that since no that-clause is needed. One can know facts about an individual without direct acquaintance with that individual. For example, the reader may know that Napoleon was a French military leader without knowing Napoleon personally. There is controversy whether knowledge by acquaintance is a form of non-propositional knowledge. Some theorists deny this and contend that it is just a grammatically different way of expressing propositional knowledge.
References
Concepts in epistemology
Definitions | 0.780046 | 0.986432 | 0.769462 |
Ethos | Ethos ( or ) is a Greek word meaning 'character' that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology; and the balance between caution and passion. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals. Early Greek stories of Orpheus exhibit this idea in a compelling way. The word's use in rhetoric is closely based on the Greek terminology used by Aristotle in his concept of the three artistic proofs or modes of persuasion alongside pathos and logos. It gives credit to the speaker, or the speaker is taking credit.
Etymology and origin
Ethos (plurals: ethe, ethea) is a Greek word originally meaning "accustomed place" (as in "the habitats of horses", Iliad 6.511, 15.268) and, by extension, "custom, habit", equivalent to Latin mores.
Ethos forms the root of ethikos, meaning "morality, showing moral character". Used as an adjective in the neuter plural form ta ethika ("ethics"), it is the basis of the modern English word ethics.
Current usage
In modern usage, ethos denotes the disposition, character, or fundamental values peculiar to a specific person, people, organization, culture, or movement. For example, the poet and critic T. S. Eliot wrote in 1940 that "the general ethos of the people they have to govern determines the behavior of politicians". Similarly the historian Orlando Figes wrote in 1996 that in Soviet Russia of the 1920s "the ethos of the Communist party dominated every aspect of public life".
Ethos may change in response to new ideas or forces. For example, according to the Jewish historian Arie Krampf, ideas of economic modernization which were imported into Palestine in the 1930s brought about "the abandonment of the agrarian ethos and the reception of...the ethos of rapid development".
Rhetoric
In rhetoric, ethos (credibility of the speaker) is one of the three artistic proofs (pistis, πίστις) or modes of persuasion (the other principles being logos and pathos) discussed by Aristotle in his Rhetoric as a component of argument. Speakers must establish ethos from the start. This can involve "moral competence" only; Aristotle, however, broadens the concept to include expertise and knowledge. Ethos is limited, in his view, by what the speaker says. Others, however, contend that a speaker's ethos extends to and is shaped by the overall moral character and history of the speaker—that is, what people think of his or her character before the speech has even begun (cf. Isocrates).
According to Aristotle, there are three categories of ethos:
phronesis – useful skills and practical wisdom
arete – virtue, goodness
eunoia – goodwill towards the audience
In a sense, ethos does not belong to the speaker but to the audience: it is the audience that determines whether a speaker is a high- or a low-ethos speaker. Violations of ethos include:
The speaker has a direct interest in the outcome of the debate (e.g. a person pleading innocence of a crime);
The speaker has a vested interest or ulterior motive in the outcome of the debate;
The speaker has no expertise (e.g. a lawyer giving a speech on space flight is less convincing than an astronaut giving the same speech).
Completely dismissing an argument based on any of the above violations of ethos is an informal fallacy (Appeal to motive). The argument may indeed be suspect; but is not, in itself, invalid.
Modern interpretations
Although Plato never uses the term "ethos" in his extant corpus, the rhetoric scholar Collin Bjork argues that Plato dramatizes the complexity of rhetorical ethos in the Apology of Socrates. For Aristotle, a speaker's ethos was a rhetorical strategy employed by an orator whose purpose was to "inspire trust in his audience" (Rhetorica 1380). Ethos was therefore achieved through the orator's "good sense, good moral character, and goodwill", and central to Aristotelian virtue ethics was the notion that this "good moral character" was increased in virtuous degree by habit (Rhetorica 1380). A person's ethos is thus closely bound up with that person's habits (The Essential Guide to Rhetoric, 2018). Aristotle links virtue, habituation, and ethos most succinctly in Book II of the Nicomachean Ethics: "Virtue, then, being of two kinds, intellectual and moral, intellectual virtue in the main owes both its birth and its growth to teaching [...] while moral virtue comes about as a result of habit, whence also its name ethike is one that is formed by a slight variation from the word ethos (habit)" (952).

Discussing women and rhetoric, scholar Karlyn Kohrs Campbell notes that entering the public sphere was considered an act of moral transgression for females of the nineteenth century: "Women who formed moral reform and abolitionist societies, and who made speeches, held conventions, and published newspapers, entered the public sphere and thereby lost their claims to purity and piety" (13). Crafting an ethos within such restrictive moral codes, therefore, meant adhering to membership of what Nancy Fraser and Michael Warner have theorized as counterpublics. While Warner contends that members of counterpublics are afforded little opportunity to join the dominant public and therefore exert true agency, Fraser has problematized Habermas's conception of the public sphere as a dominant "social totality" by theorizing "subaltern counterpublics", which function as alternative publics that represent "parallel discursive arenas where members of subordinated social groups invent and circulate counterdiscourses, which in turn permit them to formulate oppositional interpretations of their identities, interests, and needs" (67).
Though feminist rhetorical theorists have begun to offer ways of conceiving of ethos that are influenced by postmodern concepts of identity, they remain cognizant of how these classical associations have shaped and still do shape women's use of the rhetorical tool. Johanna Schmertz draws on Aristotelian ethos to reinterpret the term alongside feminist theories of subjectivity, writing that, "Instead of following a tradition that, it seems to me, reads ethos somewhat in the manner of an Aristotelian quality proper to the speaker's identity, a quality capable of being deployed as needed to fit a rhetorical situation, I will ask how ethos may be dislodged from identity and read in such a way as to multiply the positions from which women may speak" (83). Rhetorical scholar Kate Ronald's claim that "ethos is the appeal residing in the tension between the speaker's private and public self" (39) also presents a more postmodern view of ethos that links credibility and identity. Similarly, Nedra Reynolds and Susan Jarratt echo this view of ethos as a fluid and dynamic set of identifications, arguing that "these split selves are guises, but they are not distortions or lies in the philosopher's sense. Rather they are 'deceptions' in the sophistic sense: recognition of the ways one is positioned multiply differently" (56).
Rhetorical scholar Michael Halloran has argued that the classical understanding of ethos "emphasizes the conventional rather than the idiosyncratic, the public rather than the private" (60). Commenting further on the classical etymology and understanding of ethos, Halloran illuminates the interdependence between ethos and cultural context by arguing that "To have ethos is to manifest the virtues most valued by the culture to and for which one speaks" (60). While scholars do not all agree on the dominant sphere in which ethos may be crafted, some agree that ethos is formed through the negotiation between private experience and the public, rhetorical act of self-expression. Karen Burke LeFevre's argument in Invention as a Social Act situates this negotiation between the private and the public, writing that ethos "appears in that socially created space, in the 'between', the point of intersection between speaker or writer and listener or reader" (45–46).
According to Nedra Reynolds, "ethos, like postmodern subjectivity, shifts and changes over time, across texts, and around competing spaces" (336). However, Reynolds additionally discusses how one might clarify the meaning of ethos within rhetoric as expressing inherently communal roots. This stands in direct opposition to what she describes as the claim "that ethos can be faked or 'manipulated'" because individuals would be formed by the values of their culture and not the other way around (336). Rhetorical scholar John Oddo also suggests that ethos is negotiated across a community and not simply a manifestation of the self (47). In the era of mass-mediated communication, Oddo contends, one's ethos is often created by journalists and dispersed over multiple news texts. With this in mind, Oddo coins the term intertextual ethos, the notion that a public figure's "ethos is constituted within and across a range of mass media voices" (48).
In "Black Women Writers and the Trouble with Ethos", scholar Coretta Pittman notes that race has been generally absent from theories of ethos construction and that this concept is troubling for black women. Pittman writes, "Unfortunately, in the history of race relations in America, black Americans' ethos ranks low among other racial and ethnic groups in the United States. More often than not, their moral characters have been associated with a criminalized and sexualized ethos in visual and print culture" (43).
In Greek tragedy
The ways in which characters were constructed is important when considering ethos, or character, in Greek tragedy. Augustus Taber Murray explains that the depiction of a character was limited by the circumstances under which Greek tragedies were presented. These include the single unchanging scene, necessary use of the chorus, small number of characters limiting interaction, large outdoor theatres, and the use of masks, which all influenced characters to be more formal and simple. Murray also declares that the inherent characteristics of Greek tragedies are important in the makeup of the characters. One of these is the fact that tragedy characters were nearly always mythical characters. This limited the character, as well as the plot, to the already well-known myth from which the material of the play was taken. The other characteristic is the relatively short length of most Greek plays. This limited the scope of the play and characterization so that the characters were defined by one overriding motivation toward a certain objective from the beginning of the play.
However, Murray clarifies that strict constancy is not always the rule in Greek tragedy characters. To support this, he points out the example of Antigone who, even though she strongly defies Creon at the beginning of the play, begins to doubt her cause and plead for mercy as she is led to her execution.
Several other aspects of the character element in ancient Greek tragedy are worth noting. One of these, which C. Garton discusses, is the fact that either because of contradictory action or incomplete description, the character cannot be viewed as an individual, or the reader is left confused about the character. One method of reconciling this would be to consider these characters to be flat, or type-cast, instead of round. This would mean that most of the information about the character centers around one main quality or viewpoint. Comparable to the flat character option, the reader could also view the character as a symbol. Examples of this might be the Eumenides as vengeance, or Clytemnestra as symbolizing ancestral curse. Yet another means of looking at character, according to Tycho von Wilamowitz and Howald, is the idea that characterization is not important. This idea is maintained by the theory that the play is meant to affect the viewer or reader scene by scene, with attention being only focused on the section at hand. This point of view also holds that the different figures in a play are only characterized by the situation surrounding them, and only enough so that their actions can be understood.
Garton makes three more observations about character in Greek tragedy. The first is the abundant variety of types of characters in Greek tragedy. His second observation is that the reader's or viewer's need for characters to display a unified identity similar to human nature is usually fulfilled. Thirdly, characters in tragedies include incongruities and idiosyncrasies.
Another aspect stated by Garton is that tragedy plays are composed of language, character, and action, and of the interactions of these three components, which are fused together throughout the play. He explains that action normally determines the major means of characterization. For example, the play Julius Caesar offers, in Brutus, a good example of a character without credibility. Another principle he states is the importance of these three components' effect on each other; the important repercussion of this being the character's impact on action.
Augustus Taber Murray also examines the importance and degree of interaction between plot and character. He does this by discussing Aristotle's statements about plot and character in his Poetics: that plot can exist without character, but the character cannot exist without plot, and so the character is secondary to the plot. Murray maintains that Aristotle did not mean that complicated plot should hold the highest place in a tragedy play. This is because the plot was, more often than not, simple and therefore not a major point of tragic interest. Murray conjectures that people today do not accept Aristotle's statement about character and plot because to modern people, the most memorable things about tragedy plays are often the characters. However, Murray does concede that Aristotle is correct in that "[t]here can be no portrayal of character [...] without at least a skeleton outline of plot".
One other term frequently used to describe the dramatic revelation of character in writing is "persona". While the concept of ethos has traveled through the rhetorical tradition, the concept of persona has emerged from the literary tradition, and is associated with a theatrical mask. Roger Cherry explores the distinctions between ethos and persona to mark the distance between a writer's autobiographical self and the author's discursive self as projected through the narrator. The two terms also help to refine distinctions between situated and invented ethos. Situated ethos relies on a speaker's or writer's durable position of authority in the world; invented ethos relies more on the immediate circumstances of the rhetorical situation.
In pictorial narrative
Ethos, or character, also appears in the visual art depicting famous or mythological ancient Greek events in murals, on pottery, and in sculpture, referred to generally as pictorial narrative. Aristotle even praised the ancient Greek painter Polygnotos because his paintings included characterization. The way in which the subject and his actions are portrayed in visual art can convey the subject's ethical character, and through this the work's overall theme, just as effectively as poetry or drama can. This characterization portrayed men as they ought to be, which is the same as Aristotle's idea of what ethos or character should be in tragedy (Stansbury-O'Donnell, p. 178). Mark D. Stansbury-O'Donnell states that pictorial narratives often had ethos as their focus and were therefore concerned with showing the character's moral choices (Stansbury-O'Donnell, p. 175). David Castriota, agreeing with Stansbury-O'Donnell's statement, says that the main way Aristotle considered poetry and visual arts to be on equal levels was in character representation and its effect on action. However, Castriota also maintains about Aristotle's opinion that "his interest has to do with the influence that such ethical representation may exert upon the public". Castriota also explains that according to Aristotle, "[t]he activity of these artists is to be judged worthy and useful above all because exposure of their work is beneficial to the polis". Accordingly, this was the reason for the representation of character, or ethos, in public paintings and sculptures. In order to portray the character's choice, the pictorial narrative often shows an earlier scene than the one in which the action was committed. Stansbury-O'Donnell gives an example of this in the form of a picture by the ancient Greek artist Exekias, which shows the Greek hero Ajax planting his sword in the ground in preparation to commit suicide, instead of the actual suicide scene (Stansbury-O'Donnell, p. 177). Additionally, Castriota explains that ancient Greek art expresses the idea that character was the major factor influencing the outcome of the Greeks' conflicts against their enemies. Because of this, "ethos was the essential variable in the equation or analogy between myth and actuality".
See also
References
Further reading
Aristotle. Nicomachean Ethics (transl. W. D. Ross). Oxford: Oxford University Press, 2009.
Aristotle. On Rhetoric (transl. G. A. Kennedy). Oxford: Oxford University Press, 2006.
Barthes, Roland. L'Ancienne rhétorique. Communications, Vol. 16, Nr. 1 (1970), Seuil: pp. 172–223.
Campbell, Karlyn Kohrs. Man Cannot Speak for Her: A Critical Study of Early Feminist Rhetoric. Praeger, 1989.
Castriota, David. Myth, Ethos, and Actuality: Official Art in Fifth-Century B.C. Athens. London: University of Wisconsin Press, 1992.
Chiron, Pierre. Aristotle: Rhétorique. Paris: Flammarion, 2007.
Fraser, Nancy. "Rethinking the Public Sphere: A Contribution to the Critique of the Actually Existing Democracy." Social Text 25.26 (1990): 56–80.
Gandler, Stefan. "The quadruple modern Ethos: Critical Theory in the Americas." APA Newsletter on Hispanic/Latino Issues in Philosophy, Newark, DE: American Philosophical Association/University of Delaware, vol. 14, no. 1, fall 2014, pp. 2–4.
Garton, C. "Characterisation in Greek Tragedy." The Journal of Hellenic Studies, Vol. 77, Part 2 (1957), pp. 247–254. JSTOR.
Garver, Eugene. Aristotle's Rhetoric: An Art of Character. Chicago: University of Chicago Press, 1995.
Givone, Sergio. Eros/Ethos. Turin: Einaudi, 2000. .
Grazia, Margreta. Hamlet without Hamlet. New York, NY: Cambridge, 2007. .
Habermas, Jürgen. The Structural Transformation of the Public Sphere. Cambridge: The MIT Press, 1991.
Halliwell, Stephen. Aristotle's Poetics. Chicago: University of Chicago Press, 1998. .
Halloran, S. Michael. "Aristotle's Concept of Ethos, or if not His, Someone Else's." Rhetoric Review, Vol. 1, No. 1. (Sep. 1982), pp. 58–63. JSTOR. .
Jarratt, Susan, and Nedra Reynolds. "The Splitting Image: Contemporary Feminisms and the Ethics of ethos." Ethos: New Essays in Rhetorical and Critical Theory. Eds. James S. Baumlin and Tita French Baumlin. Dallas: Southern Methodist University Press, 1994. 37–63.
LeFevre, K.B. Invention as a Social Act. Southern Illinois University Press, 1987.
Lundberg, Christian O., and Keith, William M. The Essential Guide to Rhetoric. 2nd ed. Bedford/St. Martin's: Macmillan Learning, 2018.
McDonald, Marianne; Walton, J. Michael (eds.). The Cambridge Companion to Greek and Roman Theater. Cambridge (UK): Cambridge University Press, 2007. .
Meyer, Michel. La rhétorique. Paris: Presses Universitaires de France, coll. «Que sais-je? n° 2133», 2004. .
Müller, Jörn. Physis und Ethos: Der Naturbegriff bei Aristoteles und seine Relevanz für die Ethik. Würzburg: Königshausen & Neumann, 2006.
Höffe, Otfried (ed.). Aristoteles. Poetik. Berlin: Akademie Verlag, 2009.
Hyde, Michael J.; Schrag, Calvin O. (eds.). The Ethos of Rhetoric. Columbia (SC): University of South Carolina, 2004. .
Oddo, John. (2014) "The Chief Prosecutor and the Iraqi Regime: Intertextual Ethos and Transitive Chains of Authority." In Intertextuality and the 24-Hour News Cycle: A Day in the Rhetorical Life of Colin Powell's U.N. Address, pp. 45–76. East Lansing, MI: Michigan State University Press.
Paris, Bernard. Character as a Subversive Force in Shakespeare: the history and Roman plays. London: Associated University Presses Inc, 1991.
Pittman, Corretta. "Black Women Writers and the Trouble with Ethos: Harriet Jacobs, Billie Holiday, and Sister Souljah." Rhetoric Society Quarterly. 37 (2007): 43–70.
Proscurcin Jr., Pedro. Der Begriff Ethos bei Homer. Beitrag zu einer philosophischen Interpretation. Heidelberg: Universitätsverlag Winter, 2014. .
Rapp, Christof. Aristoteles: Rhetorik. Berlin: De Gruyter, 2002.
Ronald, Kate. "A Reexamination of Personal and Public Discourse in Classical Rhetoric." Rhetoric Review 9.1 (1990): 36–48.
Rorty, Amélie Oksenberg (ed.). Essays on Aristotle's Rhetoric. Berkeley (CA): University of California Press, 1996.
Schmertz, Johanna. "Constructing Essences: Ethos and the Postmodern Subject of Feminism." Rhetoric Review 18.1 (1999): 82–91.
Vergnières, Solange. Éthique et Politique chez Aristote: Physis, Êthos, Nomos. Paris: PUF, 1995.
Warner, Michael. "Publics and Counterpublics." Public Culture 14.1: 49–90.
Woerther, Frédérique. L'èthos aristotélicien. Paris: Librairie Philosophique Vrin, 2007. .
External links
Ancient Greek theatre
Concepts in ancient Greek ethics
Narratology
Plot (narrative)
Poetics
Rhetoric
Virtue
Social agreement
Theories in ancient Greek philosophy
Writing | 0.771289 | 0.997602 | 0.76944 |
Rigour | Rigour (British English) or rigor (American English; see spelling differences) describes a condition of stiffness or strictness. These constraints may be environmentally imposed, such as "the rigours of famine"; logically imposed, such as mathematical proofs which must maintain consistent answers; or socially imposed, such as the process of defining ethics and law.
Etymology
"Rigour" comes to English through old French (13th c., Modern French rigueur) meaning "stiffness", which itself is based on the Latin rigorem (nominative rigor) "numbness, stiffness, hardness, firmness; roughness, rudeness", from the verb rigere "to be stiff". The noun was frequently used to describe a condition of strictness or stiffness, which arises from a situation or constraint either chosen or experienced passively. For example, the title of the book Theologia Moralis Inter Rigorem et Laxitatem Medi roughly translates as "mediating theological morality between rigour and laxness". The book details, for the clergy, situations in which they are obligated to follow church law exactly, and in which situations they can be more forgiving yet still considered moral. Rigor mortis translates directly as the stiffness (rigor) of death (mortis), again describing a condition which arises from a certain constraint (death).
Intellectualism
Intellectual rigour is a process of thought which is consistent, does not contain self-contradiction, and takes into account the entire scope of available knowledge on the topic. It actively avoids logical fallacy. Furthermore, it requires a sceptical assessment of the available knowledge. If a topic or case is dealt with in a rigorous way, it typically means that it is dealt with in a comprehensive, thorough and complete way, leaving no room for inconsistencies.
Scholarly method describes the different approaches or methods which may be taken to apply intellectual rigour on an institutional level to ensure the quality of information published. An example of intellectual rigour assisted by a methodical approach is the scientific method, in which a person will produce a hypothesis based on what they believe to be true, then construct experiments in order to prove that hypothesis wrong. This method, when followed correctly, helps to guard against circular reasoning and other fallacies which frequently plague conclusions within academia. Other disciplines, such as philosophy and mathematics, employ their own structures to ensure intellectual rigour. Each method requires close attention to criteria for logical consistency, as well as to all relevant evidence and possible differences of interpretation. At an institutional level, peer review is used to validate intellectual rigour.
Honesty
Intellectual rigour is a subset of intellectual honesty—a practice of thought in which one's convictions are kept in proportion to valid evidence. Intellectual honesty is an unbiased approach to the acquisition, analysis, and transmission of ideas. A person is being intellectually honest when he or she, knowing the truth, states that truth, regardless of outside social/environmental pressures. It is possible to doubt whether complete intellectual honesty exists—on the grounds that no one can entirely master his or her own presuppositions—without doubting that certain kinds of intellectual rigour are potentially available. The distinction certainly matters greatly in debate, if one wishes to say that an argument is flawed in its premises.
Politics and law
The setting for intellectual rigour does tend to assume a principled position from which to advance or argue. An opportunistic tendency to use any argument at hand is not very rigorous, although very common in politics, for example. Arguing one way one day, and another later, can be defended by casuistry, i.e. by saying the cases are different.
In the legal context, for practical purposes, the facts of cases do always differ. Case law can therefore be at odds with a principled approach; and intellectual rigour can seem to be defeated. This defines a judge's problem with uncodified law. Codified law poses a different problem, of interpretation and adaptation of definite principles without losing the point; here applying the letter of the law, with all due rigour, may on occasion seem to undermine the principled approach.
Mathematics
Mathematical rigour can apply to methods of mathematical proof and to methods of mathematical practice (thus relating to other interpretations of rigour).
Mathematical proof
Mathematical rigour is often cited as a kind of gold standard for mathematical proof. Its history traces back to Greek mathematics, especially to Euclid's Elements.
Until the 19th century, Euclid's Elements was seen as extremely rigorous and profound, but in the late 19th century, Hilbert (among others) realized that the work left certain assumptions implicit—assumptions that could not be proved from Euclid's Axioms (e.g. two circles can intersect in a point, some point is within an angle, and figures can be superimposed on each other). This was contrary to the idea of rigorous proof where all assumptions need to be stated and nothing can be left implicit. New foundations were developed using the axiomatic method to address this gap in rigour found in the Elements (e.g., Hilbert's axioms, Birkhoff's axioms, Tarski's axioms).
During the 19th century, the term "rigorous" began to be used to describe increasing levels of abstraction when dealing with calculus which eventually became known as mathematical analysis. The works of Cauchy added rigour to the older works of Euler and Gauss. The works of Riemann added rigour to the works of Cauchy. The works of Weierstrass added rigour to the works of Riemann, eventually culminating in the arithmetization of analysis. Starting in the 1870s, the term gradually came to be associated with Cantorian set theory.
Mathematical rigour can be modelled as amenability to algorithmic proof checking. Indeed, with the aid of computers, it is possible to check some proofs mechanically. Formal rigour is the introduction of high degrees of completeness by means of a formal language where such proofs can be codified using set theories such as ZFC (see automated theorem proving).
Published mathematical arguments have to conform to a standard of rigour, but are written in a mixture of symbolic and natural language. In this sense, written mathematical discourse is a prototype of formal proof. Often, a written proof is accepted as rigorous although it might not be formalised as yet. The reason often cited by mathematicians for writing informally is that completely formal proofs tend to be longer and more unwieldy, thereby obscuring the line of argument. An argument that appears obvious to human intuition may in fact require fairly long formal derivations from the axioms. A particularly well-known example is how in Principia Mathematica, Whitehead and Russell have to expend a number of lines of rather opaque effort in order to establish that, indeed, it is sensical to say: "1+1=2". In short, comprehensibility is favoured over formality in written discourse.
Still, advocates of automated theorem provers may argue that the formalisation of proof does improve the mathematical rigour by disclosing gaps or flaws in informal written discourse. When the correctness of a proof is disputed, formalisation is a way to settle such a dispute as it helps to reduce misinterpretations or ambiguity.
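To see what such formalisation looks like in practice, consider the very statement that Whitehead and Russell laboured over. In a modern proof assistant the claim is verified mechanically by the system's kernel; the following is a minimal sketch in Lean 4 (the theorem name is chosen here purely for illustration), where the proof succeeds because both sides of the equation compute to the same numeral.

    -- A machine-checkable proof that 1 + 1 = 2.
    -- In Lean 4 the two sides are definitionally equal: both
    -- reduce to the numeral 2 by computation, so reflexivity
    -- (`rfl`) closes the goal and the kernel verifies it.
    theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl

Once a statement is encoded this way, every inference step is checked against the underlying axioms, which is precisely how formalisation can settle disputes about a proof's correctness.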
Physics
The role of mathematical rigour in relation to physics is twofold:
First, there is the general question, sometimes called Wigner's Puzzle, "how it is that mathematics, quite generally, is applicable to nature?" Some scientists believe that its record of successful application to nature justifies the study of mathematical physics.
Second, there is the question regarding the role and status of mathematically rigorous results and relations. This question is particularly vexing in relation to quantum field theory, where computations often produce infinite values for which a variety of non-rigorous work-arounds have been devised.
Both aspects of mathematical rigour in physics have attracted considerable attention in philosophy of science.
Education
Rigour in the classroom is a hotly debated topic amongst educators. Even the semantic meaning of the word is contested.
Generally speaking, classroom rigour consists of multi-faceted, challenging instruction and correct placement of the student. Students excelling in formal operational thought tend to excel in classes for gifted students. Students who have not reached that final stage of cognitive development, according to developmental psychologist Jean Piaget, can build upon those skills with the help of a properly trained teacher.
Rigour in the classroom is commonly called "rigorous instruction". It is instruction that requires students to construct meaning for themselves, impose structure on information, integrate individual skills into processes, operate within but at the outer edge of their abilities, and apply what they learn in more than one context and to unpredictable situations.
See also
Intellectual honesty
Intellectual dishonesty
Pedant
Scientific method
Self-deception
Sophistry
Cognitive rigor
References
Philosophical logic | 0.777924 | 0.989019 | 0.769382 |
Nomothetic | Nomothetic literally means "proposition of the law" (Greek derivation) and is used in philosophy, psychology, and law with differing meanings.
Etymology
In the general humanities usage, nomothetic may be used in the sense of "able to lay down the law", "having the capacity to posit lasting sense" (from nomothetēs νομοθέτης "lawgiver", from νόμος "law" and the Proto-Indo-European etymon nem-, meaning to "take, give, account, apportion"), e.g., 'the nomothetic capability of the early mythmakers' or 'the nomothetic skill of Adam, given the power to name things.'
In psychology
In psychology, nomothetic refers to research about general principles or generalizations across a population of individuals. For example, the Big Five model of personality and Piaget's developmental stages are nomothetic models of personality traits and cognitive development respectively. In contrast, idiographic refers to research about the unique and contingent aspects of individuals, as in psychological case studies.
In psychological testing, nomothetic measures are contrasted to ipsative or idiothetic measures, where nomothetic measures are measures that are observed on a relatively large sample and have a more general outlook.
In other fields
In sociology, nomothetic explanation presents a generalized understanding of a given case, and is contrasted with idiographic explanation, which presents a full description of a given case. Nomothetic approaches are most appropriate to the deductive approach to social research inasmuch as they include the more highly structured research methodologies which can be replicated and controlled, and which focus on generating quantitative data with a view to explaining causal relationships.
In anthropology, nomothetic refers to the use of generalization rather than specific properties in the context of a group as an entity.
In history, nomothetic refers to the philosophical shift in emphasis away from traditional presentation of historical text restricted to wars, laws, dates, and such, to a broader appreciation and deeper understanding.
See also
Nomothetic and idiographic
Nomological
References
Sociological terminology | 0.795691 | 0.966758 | 0.769241 |
Anthropocentrism | Anthropocentrism (; ) is the belief that human beings are the central or most important entity on the planet. The term can be used interchangeably with humanocentrism, and some refer to the concept as human supremacy or human exceptionalism. From an anthropocentric perspective, humankind is seen as separate from nature and superior to it, and other entities (animals, plants, minerals, etc.) are viewed as resources for humans to use.
It is possible to distinguish between at least three types of anthropocentrism: perceptual anthropocentrism (which "characterizes paradigms informed by sense-data from human sensory organs"); descriptive anthropocentrism (which "characterizes paradigms that begin from, center upon, or are ordered around Homo sapiens / 'the human'"); and normative anthropocentrism (which "characterizes paradigms that make assumptions or assertions about the superiority of Homo sapiens, its capacities, the primacy of its values, [or] its position in the universe").
Anthropocentrism tends to interpret the world in terms of human values and experiences. It is considered to be profoundly embedded in many modern human cultures and conscious acts. It is a major concept in the field of environmental ethics and environmental philosophy, where it is often considered to be the root cause of problems created by human action within the ecosphere.
However, many proponents of anthropocentrism state that this is not necessarily the case: they argue that a sound long-term view acknowledges that the global environment must be made continually suitable for humans and that the real issue is shallow anthropocentrism.
Environmental philosophy
Some environmental philosophers have argued that anthropocentrism is a core part of a perceived human drive to dominate or "master" the Earth. Anthropocentrism is believed by some to be the central problematic concept in environmental philosophy, where it is used to draw attention to claims of a systematic bias in traditional Western attitudes to the non-human world that shapes humans' sense of self and identities. Val Plumwood argued that anthropocentrism plays an analogous role in green theory to androcentrism in feminist theory and ethnocentrism in anti-racist theory. Plumwood called human-centredness "anthrocentrism" to emphasise this parallel.
One of the first extended philosophical essays addressing environmental ethics, John Passmore's Man's Responsibility for Nature, has been criticised by defenders of deep ecology because of its anthropocentrism, often claimed to be constitutive of traditional Western moral thought. Indeed, defenders of anthropocentrism concerned with the ecological crisis contend that the maintenance of a healthy, sustainable environment is necessary for human well-being as opposed to for its own sake. According to William Grey, the problem with a "shallow" viewpoint is not that it is human-centred: "What's wrong with shallow views is not their concern about the well-being of humans, but that they do not really consider enough in what that well-being consists. According to this view, we need to develop an enriched, fortified anthropocentric notion of human interest to replace the dominant short-term, sectional and self-regarding conception." In turn, Plumwood in Environmental Culture: The Ecological Crisis of Reason argued that Grey's anthropocentrism is inadequate.
Many devoted environmentalists encompass a somewhat anthropocentric-based philosophical view supporting the fact that they will argue in favor of saving the environment for the sake of human populations. Grey writes: "We should be concerned to promote a rich, diverse, and vibrant biosphere. Human flourishing may certainly be included as a legitimate part of such a flourishing." Such a concern for human flourishing amidst the flourishing of life as a whole, however, is said to be indistinguishable from that of deep ecology and biocentrism, which has been proposed as both an antithesis of anthropocentrism and as a generalised form of anthropocentrism.
Judaeo–Christian traditions
In the 1985 CBC series "A Planet For the Taking", David Suzuki explored the Old Testament roots of anthropocentrism and how it shaped human views of non-human animals. Some Christian proponents of anthropocentrism base their belief on the Bible, such as verse 1:26 in the Book of Genesis.
The use of the word "dominion" in Genesis has been used to justify an anthropocentric worldview, but recently some have found it controversial, viewing it as possibly a mistranslation from the Hebrew. However, an argument can be made that the Bible actually places all the importance on God as creator, and humans as merely another part of creation.
Moses Maimonides, a Torah scholar who lived in the twelfth century AD, was renowned for his staunch opposition to anthropocentrism. He referred to humans as "just a drop in the bucket" and asserted that "humans are not the axis of the world". He also claimed that anthropocentric thinking is what leads humans to believe in the existence of evil things in nature. According to Rabbi Norman Lamm, Moses Maimonides "refuted the exaggerated ideas about the importance of man and urged us to abandon these fantasies".
Catholic social teaching sees the pre-eminence of human beings over the rest of creation in terms of service rather than domination. Pope Francis, in his 2015 encyclical letter Laudato si', notes that "an obsession with denying any pre-eminence to the human person" endangers the concern which should be shown to protecting and upholding the welfare of all people, which he argues should rank alongside the "care for our common home" which is the subject of his letter. In the same text he acknowledges that "a mistaken understanding" of Christian belief "has at times led us to justify mistreating nature, to exercise tyranny over creation": in such actions, Christian believers have "not [been] faithful to the treasures of wisdom which we have been called to protect and preserve". In his follow-up exhortation, Laudate Deum (2023), he refers to a preferable understanding of "the unique and central value of the human being amid the marvellous concert of all God's creatures" as a "situated anthropocentrism".
Human rights
Anthropocentrism is the grounding for some naturalistic concepts of human rights. Defenders of anthropocentrism argue that it is the necessary fundamental premise to defend universal human rights, since what matters morally is simply being human. For example, noted philosopher Mortimer J. Adler wrote, "Those who oppose injurious discrimination on the moral ground that all human beings, being equal in their humanity, should be treated equally in all those respects that concern their common humanity, would have no solid basis in fact to support their normative principle." Adler is stating here that denying what is now called human exceptionalism could lead to tyranny, writing that if humans ever came to believe that they do not possess a unique moral status, the intellectual foundation of their liberties collapses: "Why, then, should not groups of superior men be able to justify their enslavement, exploitation, or even genocide of inferior human groups on factual and moral grounds akin to those we now rely on to justify our treatment of the animals we harness as beasts of burden, that we butcher for food and clothing, or that we destroy as disease-bearing pests or as dangerous predators?"
Author and anthropocentrism defender Wesley J. Smith from the Discovery Institute has written that human exceptionalism is what gives rise to human duties to each other, the natural world, and to treat animals humanely. Writing in A Rat is a Pig is a Dog is a Boy, a critique of animal rights ideology, "Because we are unquestionably a unique species—the only species capable of even contemplating ethical issues and assuming responsibilities—we uniquely are capable of apprehending the difference between right and wrong, good and evil, proper and improper conduct toward animals. Or to put it more succinctly, if being human isn't what requires us to treat animals humanely, what in the world does?"
Moral status of animals
Anthropocentrism is closely related to the notion of speciesism, defined by Richard D. Ryder as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species". One of the earliest critics of anthropocentrism was J. Howard Moore, who in The Universal Kinship (1906) argued that Charles Darwin's On the Origin of Species (1859) "sealed the doom" of anthropocentrism.
While human cognition is relatively advanced, many traits traditionally used to justify human exceptionalism (such as rationality, emotional complexity, and social bonds) are not unique to humans. Research in ethology has shown that non-human animals, such as primates, elephants, and cetaceans, also demonstrate complex social structures, emotional depth, and problem-solving abilities. This challenges the claim that humans possess qualities absent in other animals that would justify denying moral status to them.
Animal welfare proponents attribute moral consideration to all sentient animals, proportional to their ability to have positive or negative mental experiences. This position is associated with the ethical theory of utilitarianism, which aims to maximize well-being, and is notably defended by Peter Singer. According to David Pearce, "other things being equal, equally strong interests should count equally." Jeremy Bentham is also known for raising the issue of animal welfare early, arguing that "the question is not, Can they reason? nor, Can they talk? but, Can they suffer?". Animal welfare proponents can in theory accept animal exploitation if the benefits outweigh the harms, but in practice they generally consider that intensive animal farming causes a massive amount of suffering that outweighs the relatively minor benefit that humans get from consuming animals.
Animal rights proponents argue that all animals have inherent rights, similar to human rights, and should not be used as means to human ends. Unlike animal welfare advocates, who focus on minimizing suffering, animal rights supporters often call for the total abolition of practices that exploit animals, such as intensive animal farming, animal testing, and hunting. Prominent figures like Tom Regan argue that animals are "subjects of a life" with inherent value, deserving moral consideration regardless of the potential benefits humans may derive from using them.
Cognitive psychology
In cognitive psychology, the term anthropocentric thinking has been defined as "the tendency to reason about unfamiliar biological species or processes by analogy to humans." Reasoning by analogy is an attractive thinking strategy, and it can be tempting to apply one's own experience of being human to other biological systems. For example, because death is commonly felt to be undesirable, it may be tempting to form the misconception that death at a cellular level or elsewhere in nature is similarly undesirable (whereas in reality programmed cell death is an essential physiological phenomenon, and ecosystems also rely on death). Conversely, anthropocentric thinking can also lead people to underattribute human characteristics to other organisms. For instance, it may be tempting to wrongly assume that an animal that is very different from humans, such as an insect, will not share particular biological characteristics, such as reproduction or blood circulation.
Anthropocentric thinking has predominantly been studied in young children (mostly up to the age of 10) by developmental psychologists interested in its relevance to biology education. Children as young as 6 have been found to attribute human characteristics to species unfamiliar to them (in Japan), such as rabbits, grasshoppers or tulips. Although relatively little is known about its persistence at a later age, evidence exists that this pattern of human exceptionalist thinking can continue through young adulthood at least, even among students who have been increasingly educated in biology.
The notion that anthropocentric thinking is an innate human characteristic has been challenged by study of American children raised in urban environments, among whom it appears to emerge between the ages of 3 and 5 years as an acquired perspective. Children's recourse to anthropocentric thinking seems to vary with their experience of nature, and cultural assumptions about the place of humans in the natural world. For example, whereas young children who kept goldfish were found to think of frogs as being more goldfish-like, other children tended to think of frogs in terms of humans. More generally, children raised in rural environments appear to use anthropocentric thinking less than their urban counterparts because of their greater familiarity with different species of animals and plants. Studies involving children from some of the indigenous peoples of the Americas have found little use of anthropocentric thinking. Study of children among the Wichí people in South America showed a tendency to think of living organisms in terms of their perceived taxonomic similarities, ecological considerations, and animistic traditions, resulting in a much less anthropocentric view of the natural world than is experienced by many children in Western societies.
In popular culture
Fiction from all eras and societies depicts humans riding, eating, milking, and otherwise treating (non-human) animals as inferior. There are occasional exceptions, such as talking animals, which stand as aberrations to the rule distinguishing people from animals.
In science fiction, humanocentrism is the idea that humans, as both beings and as a species, are the superior sentients. Essentially the equivalent of racial supremacy on a galactic scale, it entails intolerant discrimination against sentient non-humans, much like race supremacists discriminate against those not of their race. A prime example of this concept is utilized as a story element for the Mass Effect series. After humanity's first contact results in a brief war, many humans in the series develop suspicious or even hostile attitudes towards the game's various alien races. By the time of the first game, which takes place several decades after the war, many humans still retain such sentiments in addition to forming 'pro-human' organizations.
This idea is countered by anti-humanism. At times, this ideal also includes fear of and superiority over strong AIs and cyborgs, downplaying the ideas of integration, cybernetic revolts, machine rule and Tilden's Laws of Robotics.
Mark Twain mocked the belief in human supremacy in Letters from the Earth (written c. 1909, published 1962).
The Planet of the Apes franchise focuses on the analogy of apes becoming the dominant species in society and the fall of humans (see also human extinction). In the 1968 film, Taylor, a human, states "take your stinking paws off me, you damn dirty ape!". In the 2001 film, this is contrasted with Attar (a gorilla)'s quote "take your stinking hands off me, you damn dirty human!". This links in with allusions that in becoming the dominant species apes are becoming more like humans (anthropomorphism). In the film Battle for the Planet of the Apes, Virgil, an orangutan, states "ape has never killed ape, let alone an ape child. Aldo has killed an ape child. The branch did not break. It was cut with a sword." in reference to planned murder, a stereotypically human concept. Additionally, in Dawn of the Planet of the Apes, Caesar states "I always think...ape better than human. I see now...how much like them we are."
In George Orwell's novel Animal Farm, this theme of anthropocentrism is also present. Whereas originally the animals planned for liberation from humans and animal equality, as evident from the "seven commandments" such as "whatever goes upon two legs is an enemy", "Whatever goes upon four legs, or has wings, is a friend", "All animals are equal"; the pigs would later abridge the commandments with statements such as "All animals are equal, but some animals are more equal than others", and "Four legs good, two legs better."
The 2012 documentary The Superior Human? systematically analyzes anthropocentrism and concludes that value is fundamentally an opinion, and since life forms naturally value their own traits, most humans are misled to believe that they are actually more valuable than other species. This natural bias, according to the film, combined with a received sense of comfort and an excuse for exploitation of non-humans cause anthropocentrism to remain in society.
In his 2009 book Eating Animals, Jonathan Safran Foer describes anthropocentrism as "The conviction that humans are the pinnacle of evolution, the appropriate yardstick by which to measure the lives of other animals, and the rightful owners of everything that lives."
See also
References
Further reading
Bertalanffy, Ludwig Von (1993) General System Theory: Foundations, Development, Applications pp. 239–48
Boddice, Rob (ed.) (2011) Anthropocentrism: Humans, Animals, Environments Leiden and Boston: Brill
Mylius, Ben (2018). "Three Types of Anthropocentrism". Environmental Philosophy 15 (2):159-194.
White, Lynn Townsend, Jr, "The Historical Roots of Our Ecologic Crisis", Science, Vol 155 (Number 3767), 10 March 1967, pp 1203–1207
Human supremacism: why are animal rights activists still the "orphans of the left"?. New Statesman America. April 30, 2019.
Human Supremacy: The Source of All Environmental Crises? Psychology Today, December 25, 2021.
Animal ethics
Environmental ethics
Posthumanism
Philosophical theories | 0.771676 | 0.996817 | 0.76922 |
Radical constructivism | Radical constructivism is an approach to epistemology that situates knowledge in terms of knowers' experience. It looks to break with the conception of knowledge as a correspondence between a knower's understanding of their experience and the world beyond that experience. Adopting a skeptical position towards correspondence as in principle impossible to verify because one cannot access the world beyond one's experience in order to test the relation, radical constructivists look to redefine epistemology in terms of the viability of knowledge within knowers' experience. This break from the traditional framing of epistemology differentiates it from "trivial" forms of constructivism that emphasise the role of the knower in constructing knowledge while maintaining the traditional perspective of knowledge in terms of correspondence. Radical constructivism has been described as a "post-epistemological" position.
Radical constructivism was initially formulated by Ernst von Glasersfeld, who drew on the work of Jean Piaget, Giambattista Vico, and George Berkeley amongst others. Radical constructivism is closely related to second-order cybernetics, and especially the work of Heinz von Foerster, Humberto Maturana, and Francisco Varela. During the 1980s, Siegfried J. Schmidt played a leading role in establishing radical constructivism as a paradigm within the German speaking academic world.
Radical constructivism has been influential in educational research and the philosophy of science.
Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains.
References
Further reading
Foerster, H. von, & Poerksen, B. (2002). Understanding systems (K. Leube, Trans.). Kluwer Academic.
Glanville, R. (2007). The importance of being Ernst. Constructivist Foundations, 2(2/3), 5-6. http://constructivist.info/2/2-3/005.glanville
Glasersfeld, E. von (1995). Radical constructivism: A way of knowing and learning. Routledge Falmer.
Glasersfeld, E. von. (1984). An introduction to radical constructivism. In P. Watzlawick (Ed.), The invented reality (pp. 17-40). Norton. http://www.vonglasersfeld.com/070.1
Glasersfeld, E. von. (1990). An exposition of constructivism: Why some like it radical. Journal for Research in Mathematics Education Monograph, 4, 19-29. https://doi.org/10.2307/749910
Poerksen, B. (2004). The Certainty of Uncertainty: Dialogues Introducing Constructivism. Ingram Pub Services.
Epistemological theories
Cybernetics
Constructivism | 0.790396 | 0.97317 | 0.76919 |
Abstract and concrete | In philosophy and the arts, a fundamental distinction is between things that are abstract and things that are concrete. While there is no general consensus as to how to precisely define the two, examples include that things like numbers, sets, and ideas are abstract objects, while plants, dogs, and planets are concrete objects. Popular suggestions for a definition include that the distinction between concreteness versus abstractness is, respectively: between (1) existence inside versus outside space-time; (2) having causes and effects versus not; 3) being related, in metaphysics, to particulars versus universals; and (4) belonging to either the physical versus the mental realm (or the mental-and-physical realm versus neither). Another view is that it is the distinction between contingent existence versus necessary existence; however, philosophers differ on which type of existence here defines abstractness, as opposed to concreteness. Despite this diversity of views, there is broad agreement concerning most objects as to whether they are abstract or concrete, such that most interpretations agree, for example, that rocks are concrete objects while numbers are abstract objects.
Abstract objects are most commonly used in philosophy, particularly metaphysics, and semantics. They are sometimes called abstracta in contrast to concreta. The term abstract object is said to have been coined by Willard Van Orman Quine. Abstract object theory is a discipline that studies the nature and role of abstract objects. It holds that properties can be related to objects in two ways: through exemplification and through encoding. Concrete objects exemplify their properties while abstract objects merely encode them. This approach is also known as the dual copula strategy.
In philosophy
The type–token distinction identifies physical objects that are tokens of a particular type of thing. The "type" of which it is a part is in itself an abstract object. The abstract–concrete distinction is often introduced and initially understood in terms of paradigmatic examples of objects of each kind.
Abstract objects have often garnered the interest of philosophers because they raise problems for popular theories. In ontology, abstract objects are considered problematic for physicalism and some forms of naturalism. Historically, the most important ontological dispute about abstract objects has been the problem of universals. In epistemology, abstract objects are considered problematic for empiricism. If abstracta lack causal powers and spatial location, how do we know about them? It is hard to say how they can affect our sensory experiences, and yet we seem to agree on a wide range of claims about them.
Some, such as Ernst Mally, Edward Zalta and arguably, Plato in his Theory of Forms, have held that abstract objects constitute the defining subject matter of metaphysics or philosophical inquiry more broadly. To the extent that philosophy is independent of empirical research, and to the extent that empirical questions do not inform questions about abstracta, philosophy would seem especially suited to answering these latter questions.
In modern philosophy, the distinction between abstract and concrete was explored by Immanuel Kant and G. W. F. Hegel.
Gottlob Frege said that abstract objects, such as propositions, were members of a third realm, different from the external world or from internal consciousness. (See Popper's three worlds.)
Abstract objects and causality
Another popular proposal for drawing the abstract–concrete distinction contends that an object is abstract if it lacks causal power. A causal power is the ability to affect something causally. Thus, the empty set is abstract because it cannot act on other objects. One problem with this view is that it is not clear exactly what it is to have causal power. For a more detailed exploration of the abstract–concrete distinction, see the relevant Stanford Encyclopedia of Philosophy article.
Quasi-abstract entities
Recently, there has been some philosophical interest in the development of a third category of objects known as the quasi-abstract. Quasi-abstract objects have drawn particular attention in the area of social ontology and documentality. Some argue that the over-adherence to the platonist duality of the concrete and the abstract has led to a large category of social objects having been overlooked or rejected as nonexistent because they exhibit characteristics that the traditional duality between concrete and abstract regards as incompatible: specifically, the ability to have temporal location but not spatial location, and to have causal agency (if only by acting through representatives). These characteristics are exhibited by a number of social objects, including states of the international legal system.
Concrete and abstract thought in psychology
Jean Piaget uses the terms "concrete" and "formal" to describe two different types of learning. Concrete thinking involves facts and descriptions about everyday, tangible objects, while abstract (formal operational) thinking involves a mental process.
See also
References
Sources
External links
Nominalism, Realism, Conceptualism, from The Catholic Encyclopedia
Abstract vs. Concrete in Writing, from Writing for Results
Abstract object theory
Abstraction
Cognition
Metaphysical properties
Consciousness
Metaphysical theories
Metaphysics of mind
Theory of mind
Syntax–semantics interface | 0.775928 | 0.991004 | 0.768948 |
Criteria of truth | In epistemology, criteria of truth (or tests of truth) are standards and rules used to judge the accuracy of statements and claims. They are tools of verification, and as in the problem of the criterion, the reliability of these tools is disputed. Understanding a philosophy's criteria of truth is fundamental to a clear evaluation of that philosophy. This necessity is driven by the varying, and conflicting, claims of different philosophies. The rules of logic have no ability to distinguish truth on their own. An individual must determine what standards distinguish truth from falsehood. Not all criteria are equally valid. Some standards are sufficient, while others are questionable.
The criteria listed represent those most commonly used by scholars and the general public.
Authority
The opinions of those with significant experience, highly trained or possessing an advanced degree are often considered a form of proof. Their knowledge and familiarity within a given field or area of knowledge command respect and allow their statements to be criteria of truth. A person may not simply declare themselves an authority, but rather must be properly qualified. Despite the wide respect given to expert testimony, it is not an infallible criterion. For example, multiple authorities may conflict in their claims and conclusions.
Coherence
Coherence refers to a consistent and overarching explanation for all facts. To be coherent, all pertinent facts must be arranged in a consistent and cohesive fashion as an integrated whole. The theory that most effectively reconciles all facts in this fashion may be considered most likely to be true. Coherence is the most potentially effective test of truth because it most adequately addresses all elements. The main limitation lies not in the standard, but in the human inability to acquire all facts of an experience. Only an omniscient mind could be aware of all of the relevant information. A scholar must accept this limitation and accept as true the most coherent explanation for the available facts. Coherence is difficult to dispute as a criterion of truth, since arguing against coherence is validating incoherence, which is inherently illogical.
Consensus gentium
Some view opinions held by all people to be valid criteria of truth. According to consensus gentium, the universal consent of all mankind (all humans holding a distinct belief) proves it is true. There is some value in the criterion if it means innate truth, such as the laws of logic and mathematics. If it merely means agreement, as in a unanimous vote, its value is questionable. For example, general assent once held that the sun revolved about the flat earth.
Consistency (mere)
Mere consistency is when correct statements do not contradict, but are not necessarily related. Accordingly, an individual is consistent if he does not contradict himself. It is inadequate as a criterion because it treats facts in an isolated fashion without true cohesion and integration; nevertheless it remains a necessary condition for the truth of any argument, owing to the law of noncontradiction. The value of a proof largely lies in its ability to reconcile individual facts into a coherent whole.
Consistency (strict)
Strict consistency is when claims are connected in such a fashion that one statement follows from another. Formal logic and mathematical rules are examples of rigorous consistency. An example would be: if all As are Bs and all Bs are Cs, then all As are Cs. While this standard is of high value, it is limited. For example, the premises are a priori (or self-apparent), requiring another test of truth to employ this criterion. Additionally, strict consistency may produce results lacking coherence and completeness. While a philosophical system may demonstrate rigorous consistency with the facts it considers, all facts must be taken into consideration for an adequate criterion of truth, regardless of their detriment to any given system.
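The example syllogism can itself be rendered as a formally checked derivation, making explicit the sense in which "one statement follows from another". Below is a minimal sketch in Lean 4 (the theorem name and predicate variables are illustrative): the conclusion is obtained purely by chaining the two hypotheses.

    -- Strict consistency as formal derivation: if all As are Bs
    -- and all Bs are Cs, then all As are Cs. The proof composes
    -- the two implications for an arbitrary x.
    theorem syllogism {α : Type} (A B C : α → Prop)
        (hAB : ∀ x, A x → B x) (hBC : ∀ x, B x → C x) :
        ∀ x, A x → C x :=
      fun x hA => hBC x (hAB x hA)

Note that the derivation establishes only that the conclusion follows from the premises; as observed above, the truth of the premises themselves must be secured by some other criterion.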
Correspondence
Correspondence is quite simply when a claim corresponds with its object. For example, the claim that the White House is in Washington, D.C. is true, if the White House is actually located in Washington. Correspondence is held by many philosophers to be the most valid of the criteria of truth. An idea that corresponds to its object is indeed true, but determining if the correspondence is perfect requires additional tests of truth. This indicates that correspondence is a perfectly valid definition of truth, but is not of itself a valid criterion of truth. An additional test beyond this "definition" is required to determine the precise degree of similarity between what is posited and what exists in objective reality. Establishing correspondence between what is posited and what exists is fraught with its own difficulties, see Map–territory relation.
Custom
Most people consciously or unknowingly employ custom as a criterion of truth, based on the assumption that doing what is customary will prevent error. It is particularly applied in the determination of moral truth and reflected in the statement "when in Rome, do as the Romans do". People stick closely to the principle of custom when they use common vernacular, wear common fashions and so forth; essentially, when they do what is popular. Custom is not considered a serious, or valid, test of truth. For example, public opinion polls do not determine truth.
Emotions
Many people allow feelings to determine judgment, often in the face of contrary evidence or without even attempting to collect evidence and facts. They are implicitly accepting emotions as a criterion of truth. Most people will admit that feelings are not an adequate test for truth. For example, a seasoned businessman will put aside his emotions and search for the best available facts when making an investment. Similarly, scholars are trained to put aside such subjective judgments when evaluating knowledge. Emotions are real, however, and thus must be considered within any social scientific system of coherence.
Instinct
The existence of distinct instincts has long been debated. Proponents of instinct argue that we eat because of hunger, drink because of thirst, and so forth. Some have even argued for the existence of God based on this criterion, arguing that the object of every instinct has a referent in reality. The counterpoint of hunger is food; for thirst it is liquid; for the sex drive it is a mate. Instincts are not accepted as a reliable test because they are most often indistinct, variant and difficult to define. Additionally, universal instincts are so few that they offer little to the greater body of philosophy as a criterion.
Intuition
Intuition is an assumed truth with an unknown, or possibly unexamined, source. It is a judgment that is not dependent on a rational examination of the facts. It is usually experienced as a sudden sensation and/or rush of thoughts that feel "right". Many persons experience intuitive epiphanies which later prove to be true. Scholars have sometimes come upon valid theories and proofs while daydreaming or otherwise mentally occupied with something bearing no apparent relationship to the truth they seek to reveal. Intuition is at best a source for truths, rather than a criterion with which to evaluate them. Intuitive knowledge requires testing by means of other criteria of truth in order to confirm its accuracy.
Majority rule
Majority rule is a statistical method of accepting assertions and proposals. In democratic systems, majority rule is used to determine group decisions, particularly those relating to personal morality and social behavior. Some systems divided into several oppositional factions may depend on mere plurality. While majority rule may make for a good democratic system, it is a poor determinant of truth, subject to the criticisms of the broad version of consensus gentium.
Naïve realism
Naïve realism posits that only that which is directly observable by the human senses is true. First-hand observation determines the truth or falsity of a given statement. Naïve realism is an insufficient criterion of truth. A host of natural phenomena are demonstrably true, but not observable by the unaided senses. For example, naïve realism would deny the existence of sounds beyond the range of human hearing and the existence of x-rays. Similarly, there are a number of sense experiments which show a disconnect between the perceived sensation and the reality of its cause.
Pragmatic
If an idea works then it must be true, to the Pragmatist. The consequences of applying a concept reveal its truth value upon examination of the results. The full meaning of an idea is self-apparent in its application. For example, the therapeutic value and effect of penicillin in relation to infections is proven in its administration. Although pragmatism is considered a valuable criterion, it must be used with caution and reservation, due to its potential for false positives. For example, a doctor may prescribe a patient medication for an illness, but it could later turn out that a placebo is equally effective. Thus, untrue concepts could appear to be working contrary to the purpose of the pragmatic test. However, it has validity as a test, particularly in the form William Ernest Hocking called "negative pragmatism". In essence, it states that ideas that do not work cannot possibly be true, though ideas which do work may or may not be true.
Revelation
The principal distinction between intuition and revelation is that revelation has an assumed source: God (or another higher power). Revelation may be defined as truth emanating from God. Many religions fundamentally rely on revelation as a test of truth. This criterion is subject to the same criticisms as intuition. It may be a valid reference of truth for an individual, but it is inadequate for providing a coherent proof of the knowledge to others.
Time
Time is a criterion commonly appealed to in debate, often referred to as "the test of time". This criterion posits that over time erroneous beliefs and logical errors will be revealed, while if the belief is true, the mere passage of time cannot adversely affect its validity. Time is an inadequate test for truth, since it is subject to similar flaws as custom and tradition (which are simply specific variations of the time factor). Many demonstrably false beliefs have endured for centuries and even millennia (e.g. vitalism). It is commonly rejected as a valid criterion. For example, most people will not convert to another faith simply because the other religion is centuries (or even millennia) older than their current beliefs.
Tradition
Tradition, closely related to custom, is the standard stating that what has been held for generations is true. Those accepting tradition argue that ideas gaining the loyalty of multiple generations possess a measure of credibility. Tradition possesses many of the same failings as custom. It is possible for falsehoods to be passed down from generation to generation, since tradition generally emphasizes repetition over critical evaluation.
See also
Footnotes
Concepts in epistemology
Philosophical logic
Theories of truth | 0.785915 | 0.97833 | 0.768884 |
Reason | Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information, with the aim of seeking the truth. It is associated with such characteristically human activities as philosophy, religion, science, language, mathematics, and art, and is normally considered to be a distinguishing ability possessed by humans. Reason is sometimes referred to as rationality.
Reasoning involves using more-or-less rational processes of thinking and cognition to extrapolate from one's existing knowledge to generate new knowledge, and involves the use of one's intellect. The field of logic studies the ways in which humans can use formal reasoning to produce logically valid arguments and true conclusions. Reasoning may be subdivided into forms of logical reasoning, such as deductive reasoning, inductive reasoning, and abductive reasoning.
Aristotle drew a distinction between logical discursive reasoning (reason proper), and intuitive reasoning, in which the reasoning process through intuition—however valid—may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics, intuition is often necessary for the creative processes involved with arriving at a formal proof, arguably the most difficult of formal reasoning tasks.
Reasoning, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand the significance of sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect, truth and falsehood, or good and evil. Reasoning, as a part of executive decision making, is also closely identified with the ability to self-consciously change, in terms of goals, beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self-determination.
Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason.
Etymology and related words
In the English language and other modern European languages, "reason", and related words, represent words which have always been used to translate Latin and classical Greek terms in their philosophical sense.
The original Greek term was logos, the root of the modern English word "logic" but also a word that could mean for example "speech" or "explanation" or an "account" (of money handled).
As a philosophical term, logos was translated in its non-linguistic senses in Latin as ratio. This was originally not just a translation used for philosophy, but was also commonly a translation for logos in the sense of an account of money.
French raison is derived directly from Latin ratio, and this is the direct source of the English word "reason".
The earliest major philosophers to publish in English, such as Francis Bacon, Thomas Hobbes, and John Locke also routinely wrote in Latin and French, and compared their terms to Greek, treating the words "logos", "ratio", "raison" and "reason" as interchangeable. The meaning of the word "reason" in senses such as "human reason" also overlaps to a large extent with "rationality" and the adjective of "reason" in philosophical contexts is normally "rational", rather than "reasoned" or "reasonable". Some philosophers, Hobbes for example, also used the word ratiocination as a synonym for "reasoning".
In contrast to the use of "reason" as an abstract noun, a reason is a consideration that either explains or justifies events, phenomena, or behavior. Reasons justify decisions, reasons support explanations of natural phenomena, and reasons can be given to explain the actions (conduct) of individuals.
The words are connected in this way: using reason, or reasoning, means providing good reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one's conduct by reason—that is, doing what there are the best reasons for doing—while giving equal [and impartial] weight to the interests of all those affected by what one does."
Philosophical history
The proposal that reason gives humanity a special position in nature has been argued to be a defining characteristic of western philosophy and later western science, starting with classical Greece. Philosophy can be described as a way of life based upon reason, while reason has been among the major subjects of philosophical discussion since ancient times. Reason is often said to be reflexive, or "self-correcting", and the critique of reason has been a persistent theme in philosophy.
Classical philosophy
For many classical philosophers, nature was understood teleologically, meaning that every type of thing had a definitive purpose that fit within a natural order that was itself understood to have aims. Perhaps starting with Pythagoras or Heraclitus, the cosmos was even said to have reason. Reason, by this account, is not just a characteristic that people happen to have. Reason was considered of higher stature than other characteristics of human nature, because it is something people share with nature itself, linking an apparently immortal part of the human mind with the divine order of the cosmos. Within the human mind or soul, reason was described by Plato as being the natural monarch which should rule over the other parts, such as spiritedness and the passions. Aristotle, Plato's student, defined human beings as rational animals, emphasizing reason as a characteristic of human nature. He described the highest human happiness or well being as a life which is lived consistently, excellently, and completely in accordance with reason.
The conclusions to be drawn from the discussions of Aristotle and Plato on this matter are amongst the most debated in the history of philosophy. But teleological accounts such as Aristotle's were highly influential for those who attempt to explain reason in a way that is consistent with monotheism and the immortality and divinity of the human soul. For example, in the neoplatonist account of Plotinus, the cosmos has one soul, which is the seat of all reason, and the souls of all people are part of this soul. Reason is for Plotinus both the provider of form to material things, and the light which brings people's souls back into line with their source.
Christian and Islamic philosophy
The classical view of reason, like many important Neoplatonic and Stoic ideas, was readily adopted by the early Church as the Church Fathers saw Greek Philosophy as an indispensable instrument given to mankind so that we may understand revelation. For example, the greatest among the early Church Fathers and Doctors of the Church such as Augustine of Hippo, Basil of Caesarea, and Gregory of Nyssa were as much Neoplatonic philosophers as they were Christian theologians, and they adopted the Neoplatonic view of human reason and its implications for our relationship to creation, to ourselves, and to God.
The Neoplatonic conception of the rational aspect of the human soul was widely adopted by medieval Islamic philosophers and continues to hold significance in Iranian philosophy. As European intellectual life reemerged from the Dark Ages, the Christian Patristic tradition and the influence of esteemed Islamic scholars like Averroes and Avicenna contributed to the development of the Scholastic view of reason, which laid the foundation for our modern understanding of this concept.
Among the Scholastics who relied on the classical concept of reason for the development of their doctrines, none were more influential than Saint Thomas Aquinas, who put this concept at the heart of his Natural Law. In this doctrine, Thomas concludes that because humans have reason and because reason is a spark of the divine, every single human life is invaluable, all humans are equal, and every human is born with an intrinsic and permanent set of basic rights. On this foundation, the idea of human rights would later be constructed by Spanish theologians at the School of Salamanca.
Other Scholastics, such as Roger Bacon and Albertus Magnus, following the example of Islamic scholars such as Alhazen, emphasised reason as an intrinsic human ability to decode the created order and the structures that underlie our experienced physical reality. This interpretation of reason was instrumental in the development of the scientific method in the early universities of the high Middle Ages.
Subject-centred reason in early modern philosophy
The early modern era was marked by a number of significant changes in the understanding of reason, starting in Europe. One of the most important of these changes involved a change in the metaphysical understanding of human beings. Scientists and philosophers began to question the teleological understanding of the world. Nature was no longer assumed to be human-like, with its own aims or reason, and human nature was no longer assumed to work according to anything other than the same "laws of nature" which affect inanimate things. This new understanding eventually displaced the previous world view that derived from a spiritual understanding of the universe.
Accordingly, in the 17th century, René Descartes explicitly rejected the traditional notion of humans as "rational animals", suggesting instead that they are nothing more than "thinking things" along the lines of other "things" in nature. Any grounds of knowledge outside that understanding was, therefore, subject to doubt.
In his search for a foundation of all possible knowledge, Descartes decided to throw into doubt all knowledge—except that of the mind itself in the process of thinking:
At this time I admit nothing that is not necessarily true. I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason—words of whose meanings I was previously ignorant.
This eventually became known as epistemological or "subject-centred" reason, because it is based on the knowing subject, who perceives the rest of the world and itself as a set of objects to be studied, and successfully mastered, by applying the knowledge accumulated through such study. Breaking with tradition and with many thinkers after him, Descartes explicitly did not divide the incorporeal soul into parts, such as reason and intellect, describing them instead as one indivisible incorporeal entity.
A contemporary of Descartes, Thomas Hobbes described reason as a broader version of "addition and subtraction" which is not limited to numbers. This understanding of reason is sometimes termed "calculative" reason. Similar to Descartes, Hobbes asserted that "No discourse whatsoever, can end in absolute knowledge of fact, past, or to come" but that "sense and memory" is absolute knowledge.
In the late 17th century through the 18th century, John Locke and David Hume developed Descartes's line of thought still further. Hume took it in an especially skeptical direction, proposing that there could be no possibility of deducing relationships of cause and effect, and therefore no knowledge is based on reasoning alone, even if it seems otherwise.
Hume famously remarked that, "We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." Hume also took his definition of reason to unorthodox extremes by arguing, unlike his predecessors, that human reason is not qualitatively different from either simply conceiving individual ideas, or from judgments associating two ideas, and that "reason is nothing but a wonderful and unintelligible instinct in our souls, which carries us along a certain train of ideas, and endows them with particular qualities, according to their particular situations and relations." It followed from this that animals have reason, only much less complex than human reason.
In the 18th century, Immanuel Kant attempted to show that Hume was wrong by demonstrating that a "transcendental" self, or "I", was a necessary condition of all experience. Therefore, suggested Kant, on the basis of such a self, it is in fact possible to reason both about the conditions and limits of human knowledge. And so long as these limits are respected, reason can be the vehicle of morality, justice, aesthetics, theories of knowledge (epistemology), and understanding.
Substantive and formal reason
In the formulation of Kant, who wrote some of the most influential modern treatises on the subject, the great achievement of reason is that it is able to exercise a kind of universal law-making. Kant was able therefore to reformulate the basis of moral-practical, theoretical, and aesthetic reasoning on "universal" laws.
Here, practical reasoning is the self-legislating or self-governing formulation of universal norms, and theoretical reasoning is the way humans posit universal laws of nature.
Under practical reason, the moral autonomy or freedom of people depends on their ability, by the proper exercise of that reason, to behave according to laws that are given to them. This contrasted with earlier forms of morality, which depended on religious understanding and interpretation, or on nature, for their substance.
According to Kant, in a free society each individual must be able to pursue their goals however they see fit, as long as their actions conform to principles given by reason. He formulated such a principle, called the "categorical imperative", which would justify an action only if it could be universalized:
Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.
In contrast to Hume, Kant insisted that reason itself (German Vernunft) could be used to find solutions to metaphysical problems, especially the discovery of the foundations of morality. Kant claimed that these solutions could be found with his "transcendental logic", which unlike normal logic is not just an instrument that can be used indifferently, as it was for Aristotle, but a theoretical science in its own right and the basis of all the others.
According to Jürgen Habermas, the "substantive unity" of reason has dissolved in modern times, such that it can no longer answer the question "How should I live?" Instead, the unity of reason has to be strictly formal, or "procedural". He thus described reason as a group of three autonomous spheres (on the model of Kant's three critiques):
Cognitive–instrumental reason: the kind of reason employed by the sciences; used to observe events, to predict and control outcomes, and to intervene in the world on the basis of its hypotheses
Moral–practical reason: what we use to deliberate and discuss issues in the moral and political realm, according to universalizable procedures (similar to Kant's categorical imperative)
Aesthetic reason: typically found in works of art and literature, encompassing the novel ways of seeing the world and interpreting things that those practices embody
For Habermas, these three spheres are the domain of experts, and therefore need to be mediated with the "lifeworld" by philosophers. In drawing such a picture of reason, Habermas hoped to demonstrate that the substantive unity of reason, which in pre-modern societies had been able to answer questions about the good life, could be made up for by the unity of reason's formalizable procedures.
The critique of reason
Hamann, Herder, Kant, Hegel, Kierkegaard, Nietzsche, Heidegger, Foucault, Rorty, and many other philosophers have contributed to a debate about what reason means, or ought to mean. Some, like Kierkegaard, Nietzsche, and Rorty, are skeptical about subject-centred, universal, or instrumental reason, and even skeptical toward reason as a whole. Others, including Hegel, believe that it has obscured the importance of intersubjectivity, or "spirit" in human life, and they attempt to reconstruct a model of what reason should be.
Some thinkers, e.g. Foucault, believe there are other forms of reason, neglected but essential to modern life, and to our understanding of what it means to live a life according to reason. Others suggest that there is not just one reason or rationality, but multiple possible systems of reason or rationality which may conflict (in which case there is no super-rational system one can appeal to in order to resolve the conflict).
In the last several decades, a number of proposals have been made to "re-orient" this critique of reason, or to recognize the "other voices" or "new departments" of reason:
For example, in opposition to subject-centred reason, Habermas has proposed a model of communicative reason that sees it as an essentially cooperative activity, based on the fact of linguistic intersubjectivity.
Nikolas Kompridis proposed a widely encompassing view of reason as "that ensemble of practices that contributes to the opening and preserving of openness" in human affairs, and a focus on reason's possibilities for social change.
The philosopher Charles Taylor, influenced by the 20th century German philosopher Martin Heidegger, proposed that reason ought to include the faculty of disclosure, which is tied to the way we make sense of things in everyday life, as a new "department" of reason.
In the essay "What is Enlightenment?", Michel Foucault proposed a critique based on Kant's distinction between "private" and "public" uses of reason:
Private reason: the reason that is used when an individual is "a cog in a machine" or when one "has a role to play in society and jobs to do: to be a soldier, to have taxes to pay, to be in charge of a parish, to be a civil servant"
Public reason: the reason used "when one is reasoning as a reasonable being (and not as a cog in a machine), when one is reasoning as a member of reasonable humanity"; in these circumstances, "the use of reason must be free and public"
Reason compared to related concepts
Reason compared to logic
The terms logic or logical are sometimes used as if they were identical with reason or rational, or sometimes logic is seen as the most pure or the defining form of reason: "Logic is about reasoning—about going from premises to a conclusion. ... When you do logic, you try to clarify reasoning and separate good from bad reasoning." In modern economics, rational choice is assumed to equate to logically consistent choice.
However, reason and logic can be thought of as distinct—although logic is one important aspect of reason. Author Douglas Hofstadter, in Gödel, Escher, Bach, characterizes the distinction in this way: Logic is done inside a system while reason is done outside the system by such methods as skipping steps, working backward, drawing diagrams, looking at examples, or seeing what happens if you change the rules of the system. Psychologists Mark H. Bickard and Robert L. Campbell argue that "rationality cannot be simply assimilated to logicality"; they note that "human knowledge of logic and logical systems has developed" over time through reasoning, and logical systems "can't construct new logical systems more powerful than themselves", so reasoning and rationality must involve more than a system of logic. Psychologist David Moshman, citing Bickhard and Campbell, argues for a "metacognitive conception of rationality" in which a person's development of reason "involves increasing consciousness and control of logical and other inferences".
Reason is a type of thought, and logic involves the attempt to describe a system of formal rules or norms of appropriate reasoning. The oldest surviving writings to explicitly consider the rules by which reason operates are the works of the Greek philosopher Aristotle, especially Prior Analytics and Posterior Analytics. Although the Ancient Greeks had no separate word for logic as distinct from language and reason, Aristotle's newly coined word "syllogism" identified logic clearly for the first time as a distinct field of study. When Aristotle referred to "the logical", he was referring more broadly to rational thought.
Reason compared to cause-and-effect thinking, and symbolic thinking
As pointed out by philosophers such as Hobbes, Locke, and Hume, some animals are also clearly capable of a type of "associative thinking", even to the extent of associating causes and effects. A dog once kicked can learn how to recognize the warning signs and avoid being kicked in the future, but this does not mean the dog has reason in any strict sense of the word. It also does not mean that humans acting on the basis of experience or habit are using their reason.
Human reason requires more than being able to associate two ideas—even if those two ideas might be described by a reasoning human as a cause and an effect—perceptions of smoke, for example, and memories of fire. For reason to be involved, the association of smoke and the fire would have to be thought through in a way that can be explained, for example as cause and effect. In the explanation of Locke, for example, reason requires the mental use of a third idea in order to make this comparison by use of syllogism.
More generally, according to Charles Sanders Peirce, reason in the strict sense requires the ability to create and manipulate a system of symbols, as well as indices and icons, the symbols having only a nominal, though habitual, connection to either (for example) smoke or fire. One example of such a system of symbols and signs is language.
The connection of reason to symbolic thinking has been expressed in different ways by philosophers. Thomas Hobbes described the creation of "Markes, or Notes of remembrance" as speech. He used the word speech as an English version of the Greek word logos, so that speech did not need to be communicated. When communicated, such speech becomes language, and the marks or notes of remembrance are called "Signes" by Hobbes. Going further back, although Aristotle is a source of the idea that only humans have reason, he does mention that animals with imagination, for whom sense perceptions can persist, come closest to having something like reasoning and nous, and even uses the word "logos" in one place to describe the distinctions which animals can perceive in such cases.
Reason, imagination, mimesis, and memory
Reason and imagination rely on similar mental processes. Imagination is not only found in humans. Aristotle asserted that phantasia (imagination: that which can hold images or phantasmata) and phronein (a type of thinking that can judge and understand in some sense) also exist in some animals. According to him, both are related to the primary perceptive ability of animals, which gathers the perceptions of different senses and defines the order of the things that are perceived without distinguishing universals, and without deliberation or logos. But this is not yet reason, because human imagination is different.
Terrence Deacon and Merlin Donald, writing about the origin of language, connect reason not only to language, but also mimesis. They describe the ability to create language as part of an internal modeling of reality, and specific to humankind. Other results are consciousness, and imagination or fantasy. In contrast, modern proponents of a genetic predisposition to language itself include Noam Chomsky and Steven Pinker.
If reason is symbolic thinking, and peculiarly human, then this implies that humans have a special ability to maintain a clear consciousness of the distinctness of "icons" or images and the real things they represent. Merlin Donald writes:
A dog might perceive the "meaning" of a fight that was realistically play-acted by humans, but it could not reconstruct the message or distinguish the representation from its referent (a real fight).... Trained apes are able to make this distinction; young children make this distinction early—hence, their effortless distinction between play-acting an event and the event itself
In classical descriptions, an equivalent description of this mental faculty is eikasia, in the philosophy of Plato. This is the ability to perceive whether a perception is an image of something else, related somehow but not the same, and therefore allows humans to perceive that a dream or memory or a reflection in a mirror is not reality as such. What Klein refers to as dianoetic eikasia is the eikasia concerned specifically with thinking and mental images, such as those mental symbols, icons, signes, and marks discussed above as definitive of reason. Explaining reason from this direction: human thinking is special in that we often understand visible things as if they were themselves images of our intelligible "objects of thought" as "foundations" (hypotheseis in Ancient Greek). This thinking is "...an activity which consists in making the vast and diffuse jungle of the visible world depend on a plurality of more 'precise' noēta".
Both Merlin Donald and the Socratic authors such as Plato and Aristotle emphasize the importance of mimesis, often translated as imitation or representation. Donald writes:
Imitation is found especially in monkeys and apes [...but...] Mimesis is fundamentally different from imitation and mimicry in that it involves the invention of intentional representations.... Mimesis is not absolutely tied to external communication.
Mimesis is a concept, now popular again in academic discussion, that was particularly prevalent in Plato's works. In Aristotle, it is discussed mainly in the Poetics. In Michael Davis's account of the theory of man in that work:
It is the distinctive feature of human action, that whenever we choose what we do, we imagine an action for ourselves as though we were inspecting it from the outside. Intentions are nothing more than imagined actions, internalizings of the external. All action is therefore imitation of action; it is poetic...
Donald, like Plato (and Aristotle, especially in On Memory and Recollection), emphasizes the peculiarity in humans of voluntary initiation of a search through one's mental world. The ancient Greek anamnēsis, normally translated as "recollection", was opposed to mnēmē, or "memory". Memory, shared with some animals, requires a consciousness not only of what happened in the past, but also that something happened in the past, which is in other words a kind of eikasia, "...but nothing except man is able to recollect." Recollection is a deliberate effort to search for and recapture something once known. Klein writes that, "To become aware of our having forgotten something means to begin recollecting." Donald calls the same thing autocueing, which he explains as follows: "Mimetic acts are reproducible on the basis of internal, self-generated cues. This permits voluntary recall of mimetic representations, without the aid of external cues—probably the earliest form of representational thinking."
In his celebrated essay "On Fairy-Stories", the fantasy author and philologist J.R.R. Tolkien wrote that the terms "fantasy" and "enchantment" are connected to not only "the satisfaction of certain primordial human desires" but also "the origin of language and of the mind".
Logical reasoning methods and argumentation
Logic is both a subdivision of philosophy and a variety of reasoning. The traditional main division made in philosophy is between deductive reasoning and inductive reasoning. Formal logic has been described as the science of deduction. The study of inductive reasoning is generally carried out within the field known as informal logic or critical thinking.
Deductive reasoning
Deduction is a form of reasoning in which a conclusion follows necessarily from the stated premises. A deduction is also the name for the conclusion reached by a deductive reasoning process. A classic example of deductive reasoning is evident in syllogisms like the following:

All humans are mortal.
Socrates is a human.
Therefore, Socrates is mortal.

The reasoning in this argument is deductively valid because there is no way in which both premises could be true and the conclusion be false.
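This definition of validity can be made concrete with a brute-force semantic check: an argument is valid when no interpretation makes every premise true and the conclusion false. The following is a minimal sketch in Python, with the domain size, predicate names, and representation chosen purely for illustration; checking a small finite domain demonstrates the definition rather than proving first-order validity in general.

```python
# A minimal sketch of deductive validity: enumerate every interpretation of
# a tiny two-element domain and look for a counterexample in which both
# premises hold but the conclusion fails. (Illustrative only: a finite
# domain demonstrates the definition, it does not prove validity in general.)
from itertools import product

DOMAIN = range(2)  # two unnamed individuals

def is_valid(premises, conclusion):
    for man in product([False, True], repeat=len(DOMAIN)):        # who is a man?
        for mortal in product([False, True], repeat=len(DOMAIN)): # who is mortal?
            for socrates in DOMAIN:                               # who is Socrates?
                world = {"man": man, "mortal": mortal, "socrates": socrates}
                if all(p(world) for p in premises) and not conclusion(world):
                    return False  # premises true, conclusion false: invalid
    return True

premise_1 = lambda w: all(t for m, t in zip(w["man"], w["mortal"]) if m)  # all men are mortal
premise_2 = lambda w: w["man"][w["socrates"]]                             # Socrates is a man
conclusion = lambda w: w["mortal"][w["socrates"]]                         # Socrates is mortal

print(is_valid([premise_1, premise_2], conclusion))  # True: no counterexample exists
```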
Inductive reasoning
Induction is a form of inference that produces properties or relations about unobserved objects or types based on previous observations or experiences, or that formulates general statements or laws based on limited observations of recurring phenomenal patterns.
Inductive reasoning contrasts with deductive reasoning in that, even in the strongest cases of inductive reasoning, the truth of the premises does not guarantee the truth of the conclusion. Instead, the conclusion of an inductive argument follows with some degree of probability. For this reason also, the conclusion of an inductive argument contains more information than is already contained in the premises. Thus, this method of reasoning is ampliative.
A classic example of inductive reasoning comes from the empiricist David Hume: the sun has risen in the east every morning up until now; therefore, the sun will also rise in the east tomorrow.
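One classical way to put a number on such an inductive step is Laplace's rule of succession, which estimates the probability of another success after n successes in n trials as (n + 1)/(n + 2). The sketch below only illustrates the ampliative, probabilistic character of induction described above; the observation count is an invented figure.

```python
# Laplace's rule of succession: after s successes in n trials, estimate the
# probability of another success as (s + 1) / (n + 2). However many sunrises
# are observed, the estimate approaches 1 but never reaches it.
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

observed_sunrises = 10_000  # an invented count of past observations
p = rule_of_succession(observed_sunrises, observed_sunrises)
print(f"P(the sun rises tomorrow) ~ {p:.6f}")  # high, but short of certainty
```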
Analogical reasoning
Analogical reasoning is a form of inductive reasoning from a particular to a particular. It is often used in case-based reasoning, especially legal reasoning. An example follows: in an earlier case, a court found a driver who ran a red light negligent; the driver in the present case also ran a red light; therefore, the present driver is likely to be found negligent as well.
Analogical reasoning is a weaker form of inductive reasoning from a single example, because inductive reasoning typically uses a large number of examples to reason from the particular to the general. Analogical reasoning often leads to wrong conclusions. For example: dogs and cats are mammals that give birth to live young; the platypus is also a mammal; therefore, one might wrongly conclude that the platypus gives birth to live young, when in fact it lays eggs.
Abductive reasoning
Abductive reasoning, or argument to the best explanation, is a form of reasoning that does not fit in either the deductive or inductive categories, since it starts with an incomplete set of observations and proceeds to likely possible explanations. The conclusion in an abductive argument does not follow with certainty from its premises and concerns something unobserved. What distinguishes abduction from the other forms of reasoning is the attempt to favour one conclusion above others, by subjective judgement or by attempting to falsify alternative explanations or by demonstrating the likelihood of the favoured conclusion, given a set of more or less disputable assumptions. For example, when a patient displays certain symptoms, there might be various possible causes, but one of these is preferred above others as being more probable.
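The idea of favouring one conclusion by demonstrating its likelihood can be sketched with Bayes' rule: candidate explanations are ranked by prior plausibility weighted by how well each predicts the observation. The priors and likelihoods below are invented for illustration, not clinical values.

```python
# Abduction as inference to the best explanation, operationalized with
# Bayes' rule: posterior(h) is proportional to prior(h) * P(symptom | h).
prior = {"flu": 0.10, "cold": 0.30, "allergy": 0.20}       # invented plausibilities
likelihood = {"flu": 0.90, "cold": 0.70, "allergy": 0.40}  # invented P(symptom | cause)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 3) for h, p in unnormalized.items()}

print(posterior)  # {'flu': 0.237, 'cold': 0.553, 'allergy': 0.211}
print("best explanation:", max(posterior, key=posterior.get))  # 'cold'
```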
Fallacious reasoning
Flawed reasoning in arguments is known as fallacious reasoning. Bad reasoning within arguments can result from either a formal fallacy or an informal fallacy.
Formal fallacies occur when there is a problem with the form, or structure, of the argument. The word "formal" refers to this link to the form of the argument. An argument that contains a formal fallacy will always be invalid.
An informal fallacy is an error in reasoning that occurs due to a problem with the content, rather than the form or structure, of the argument.
Unreasonable decisions and actions
In law relating to the actions of an employer or a public body, a decision or action which falls outside the range of decisions or actions available when acting in good faith can be described as "unreasonable". Use of the term is considered in the English law cases of Short v Poole Corporation (1926), Associated Provincial Picture Houses Ltd v Wednesbury Corporation (1947) and Braganza v BP Shipping Limited (2015).
Traditional problems raised concerning reason
Philosophy is often characterized as a pursuit of rational understanding, entailing a more rigorous and dedicated application of human reasoning than is commonly employed. Philosophers have long debated two fundamental questions regarding reason, essentially examining reasoning itself as a human endeavor, or philosophizing about philosophizing. The first question is whether we can place our trust in reason's ability to attain knowledge and truth more effectively than alternative methods. The second is whether a life that aims to be guided by reason can be expected to lead to greater happiness than other approaches to life.
Reason versus truth, and "first principles"
Since classical antiquity a question has remained constant in philosophical debate (sometimes seen as a conflict between Platonism and Aristotelianism) concerning the role of reason in confirming truth. People use logic, deduction, and induction to reach conclusions they think are true. Conclusions reached in this way are considered, according to Aristotle, more certain than sense perceptions on their own. On the other hand, if such reasoned conclusions are only built originally upon a foundation of sense perceptions, then our most logical conclusions can never be said to be certain because they are built upon the very same fallible perceptions they seek to better.
This leads to the question of what types of first principles, or starting points of reasoning, are available for someone seeking to come to true conclusions. In Greek, "first principles" are archai, "starting points", and the faculty used to perceive them is sometimes referred to in Aristotle and Plato as nous, which was close in meaning to "awareness" or "consciousness".
Empiricism (sometimes associated with Aristotle but more correctly associated with British philosophers such as John Locke and David Hume, as well as their ancient equivalents such as Democritus) asserts that sensory impressions are the only available starting points for reasoning and attempting to attain truth. This approach always leads to the controversial conclusion that absolute knowledge is not attainable. Idealism (associated with Plato and his school) claims that there is a "higher" reality, within which certain people can directly discover truth without needing to rely only upon the senses, and that this higher reality is therefore the primary source of truth.
Philosophers such as Plato, Aristotle, Al-Farabi, Avicenna, Averroes, Maimonides, Aquinas, and Hegel are sometimes said to have argued that reason must be fixed and discoverable—perhaps by dialectic, analysis, or study. In the vision of these thinkers, reason is divine or at least has divine attributes. Such an approach allowed religious philosophers such as Thomas Aquinas and Étienne Gilson to try to show that reason and revelation are compatible. According to Hegel, "...the only thought which Philosophy brings with it to the contemplation of History, is the simple conception of reason; that reason is the Sovereign of the World; that the history of the world, therefore, presents us with a rational process."
Since the 17th century rationalists, reason has often been taken to be a subjective faculty, or rather the unaided ability (pure reason) to form concepts. For Descartes, Spinoza, and Leibniz, this was associated with mathematics. Kant attempted to show that pure reason could form concepts (time and space) that are the conditions of experience. Kant made his argument in opposition to Hume, who denied that reason had any role to play in experience.
Reason versus emotion or passion
After Plato and Aristotle, western literature often treated reason as being the faculty that trained the passions and appetites. Stoic philosophy, by contrast, claimed most emotions were merely false judgements. According to the Stoics the only good is virtue, and the only evil is vice, therefore emotions that judged things other than vice to be bad (such as fear or distress), or things other than virtue to be good (such as greed) were simply false judgements and should be discarded (though positive emotions based on true judgements, such as kindness, were acceptable). After the critiques of reason in the early Enlightenment the appetites were rarely discussed or were conflated with the passions. Some Enlightenment camps took after the Stoics to say reason should oppose passion rather than order it, while others like the Romantics believed that passion displaces reason, as in the maxim "follow your heart".
Reason has been seen as cold, an "enemy of mystery and ambiguity", a slave, or judge, of the passions, notably in the work of David Hume, and more recently of Freud.
Reasoning that claims the object of a desire is demanded by logic alone is called rationalization.
Rousseau first proposed, in his second Discourse, that reason and political life are not natural and are possibly harmful to mankind. He asked what really can be said about what is natural to mankind. What, other than reason and civil society, "best suits his constitution"? Rousseau saw "two principles prior to reason" in human nature. First, we hold an intense interest in our own well-being. Second, we object to the suffering or death of any sentient being, especially one like ourselves. These two passions lead us to desire more than we could achieve. We become dependent upon each other, and on relationships of authority and obedience. This effectively puts the human race into slavery. Rousseau says that he almost dares to assert that nature does not destine men to be healthy. According to Richard Velkley, "Rousseau outlines certain programs of rational self-correction, most notably the political legislation of the Contrat Social and the moral education in Émile. All the same, Rousseau understands such corrections to be only ameliorations of an essentially unsatisfactory condition, that of socially and intellectually corrupted humanity."
This quandary presented by Rousseau led to Kant's new way of justifying reason as freedom to create good and evil. These, therefore, are not to be blamed on nature or God. In various ways, German Idealism after Kant, and major later figures such as Nietzsche, Bergson, Husserl, Scheler, and Heidegger, remain preoccupied with problems coming from the metaphysical demands or urges of reason. Rousseau and these later writers also exerted a large influence on art and politics. Many writers (such as Nikos Kazantzakis) extol passion and disparage reason. In politics modern nationalism comes from Rousseau's argument that rationalist cosmopolitanism brings man ever further from his natural state.
In Descartes' Error, Antonio Damasio presents the "Somatic Marker Hypothesis" which states that emotions guide behavior and decision-making. Damasio argues that these somatic markers (known collectively as "gut feelings") are "intuitive signals" that direct our decision making processes in a certain way that cannot be solved with rationality alone. Damasio further argues that rationality requires emotional input in order to function.
Reason versus faith or tradition
There are many religious traditions, some of which are explicitly fideist and others of which claim varying degrees of rationalism. Secular critics sometimes accuse all religious adherents of irrationality; they claim such adherents are guilty of ignoring, suppressing, or forbidding some kinds of reasoning concerning some subjects (such as religious dogmas, moral taboos, etc.). Though theologies and religions such as classical monotheism typically do not admit to being irrational, there is often a perceived conflict or tension between faith and tradition on the one hand, and reason on the other, as potentially competing sources of wisdom, law, and truth.
Religious adherents sometimes respond by arguing that faith and reason can be reconciled, or have different non-overlapping domains, or that critics engage in a similar kind of irrationalism:
Reconciliation: Philosopher Alvin Plantinga argues that there is no real conflict between reason and classical theism because classical theism explains (among other things) why the universe is intelligible and why reason can successfully grasp it.
Non-overlapping magisteria: Evolutionary biologist Stephen Jay Gould argues that there need not be conflict between reason and religious belief because they are each authoritative in their own domain (or "magisterium"). If so, reason can work on those problems over which it has authority while other sources of knowledge or opinion can have authority on the big questions.
Philosophers Alasdair MacIntyre and Charles Taylor argue that those critics of traditional religion who are adherents of secular liberalism are also sometimes guilty of ignoring, suppressing, and forbidding some kinds of reasoning about subjects. Similarly, philosophers of science such as Paul Feyerabend argue that scientists sometimes ignore or suppress evidence contrary to the dominant paradigm.
Unification: Theologian Joseph Ratzinger, later Benedict XVI, asserted that "Christianity has understood itself as the religion of the Logos, as the religion according to reason", referring to the opening of the Gospel of John, usually translated as "In the beginning was the Word (Logos)." Thus, he said that the Christian faith is "open to all that is truly rational", and that the rationality of the Western Enlightenment "is of Christian origin".
Some commentators have claimed that Western civilization can be almost defined by its serious testing of the limits of tension between "unaided" reason and faith in "revealed" truths—figuratively summarized as Athens and Jerusalem, respectively. Leo Strauss spoke of a "Greater West" that included all areas under the influence of the tension between Greek rationalism and Abrahamic revelation, including the Muslim lands. He was particularly influenced by the Muslim philosopher Al-Farabi. To consider to what extent Eastern philosophy might have partaken of these important tensions, Strauss thought it best to consider whether dharma or tao may be equivalent to Nature (physis in Greek). According to Strauss, the beginning of philosophy involved the "discovery or invention of nature", and the "pre-philosophical equivalent of nature" was supplied by "such notions as 'custom' or 'ways'", which appear to be really universal in all times and places. The philosophical concept of nature or natures as a way of understanding (first principles of knowledge) brought about a peculiar tension between reasoning on the one hand, and tradition or faith on the other.
Reason in particular fields of study
Psychology and cognitive science
Scientific research into reasoning is carried out within the fields of psychology and cognitive science. Psychologists attempt to determine whether or not people are capable of rational thought in a number of different circumstances.
Assessing how well someone engages in reasoning is the project of determining the extent to which the person is rational or acts rationally. It is a key research question in the psychology of reasoning and cognitive science of reasoning. Rationality is often divided into its respective theoretical and practical counterparts.
Behavioral experiments on human reasoning
Experimental cognitive psychologists carry out research on reasoning behaviour. Such research may focus, for example, on how people perform on tests of reasoning such as intelligence or IQ tests, or on how well people's reasoning matches ideals set by logic (see, for example, the Wason test). Experiments examine how people make inferences from conditionals like if A then B and how they make inferences about alternatives like A or else B. They test whether people can make valid deductions about spatial and temporal relations like A is to the left of B or A happens after B, and about quantified assertions like all the A are B. Experiments investigate how people make inferences about factual situations, hypothetical possibilities, probabilities, and counterfactual situations.
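The logic behind the Wason selection task mentioned above can be made explicit in a few lines: to test the rule "if a card shows a vowel, its other side shows an even number", only the faces that could falsify the rule need to be turned over. The sketch below uses the classic four-card layout; the function name is an invention for the example.

```python
# Which visible faces must be flipped to test "if vowel, then even number"?
# Only potential falsifiers matter: vowels (might hide an odd number) and
# odd numbers (might hide a vowel). Most participants wrongly pick the even card.
def must_flip(face: str) -> bool:
    if face.isalpha():
        return face.lower() in "aeiou"   # a vowel could conceal an odd number
    return int(face) % 2 == 1            # an odd number could conceal a vowel

cards = ["E", "K", "4", "7"]
print([card for card in cards if must_flip(card)])  # ['E', '7']
```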
Developmental studies of children's reasoning
Developmental psychologists investigate the development of reasoning from birth to adulthood. Piaget's theory of cognitive development was the first complete theory of reasoning development. Subsequently, several alternative theories were proposed, including the neo-Piagetian theories of cognitive development.
Neuroscience of reasoning
The biological functioning of the brain is studied by neurophysiologists, cognitive neuroscientists, and neuropsychologists. This includes research into the structure and function of normally functioning brains, and of damaged or otherwise unusual brains. In addition to carrying out research into reasoning, some psychologists—for example clinical psychologists and psychotherapists—work to alter people's reasoning habits when those habits are unhelpful.
Computer science
Automated reasoning
In artificial intelligence and computer science, scientists study and use automated reasoning for diverse applications, including automated theorem proving, the formal semantics of programming languages, and formal specification in software engineering.
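A very small instance of automated reasoning is a forward-chaining rule engine: starting from known facts, it applies if-then rules until no new conclusion follows. The rules and facts below are invented toy examples, not drawn from any particular theorem prover.

```python
# Forward chaining: repeatedly fire rules whose conditions are all known
# facts, adding their conclusions, until a fixed point is reached.
rules = [
    ({"human"}, "mortal"),                      # if human, then mortal
    ({"mortal", "reflective"}, "philosopher"),  # a toy rule for illustration
]
facts = {"human", "reflective"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'human', 'reflective', 'mortal', 'philosopher'}
```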
Meta-reasoning
Meta-reasoning is reasoning about reasoning. In computer science, a system performs meta-reasoning when it is reasoning about its own operation. This requires a programming language capable of reflection, the ability to observe and modify its own structure and behaviour.
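In Python, the reflection described above might look like the following sketch: a program inspecting its own source and recording its own runtime behaviour, so that later decisions could be based on that self-observation. The decorator and function names are invented for the example.

```python
# A toy of computational meta-reasoning: code that observes its own
# structure (source text) and behaviour (recorded call times).
import functools
import inspect
import time

def self_monitoring(fn):
    """Wrap fn so the program keeps a log of its own runtime behaviour."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.call_log.append(time.perf_counter() - start)
        return result
    wrapper.call_log = []
    return wrapper

@self_monitoring
def work(n: int) -> int:
    return sum(i * i for i in range(n))

work(100_000)
print(inspect.getsource(work.__wrapped__))  # the program reading its own code
print("observed runtimes:", work.call_log)  # ...and data about its own behaviour
```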
Evolution of reason
A species could benefit greatly from better abilities to reason about, predict, and understand the world. French social and cognitive scientists Dan Sperber and Hugo Mercier argue that, aside from these benefits, there could have been other forces driving the evolution of reason. They point out that reasoning is very difficult for humans to do effectively, and that it is hard for individuals to doubt their own beliefs (confirmation bias). Reasoning is most effective when it is done as a collective, as demonstrated by the success of projects like science. They suggest that there are not just individual, but group selection pressures at play. Any group that managed to find ways of reasoning effectively would reap benefits for all its members, increasing their fitness. This could also help explain why humans, according to Sperber, are not optimized to reason effectively alone. Sperber and Mercier's argumentative theory of reasoning claims that reason may have more to do with winning arguments than with the search for the truth.
Reason in political philosophy and ethics
Aristotle famously described reason (with language) as a part of human nature, because of which it is best for humans to live "politically", meaning in communities of about the size and type of a small city-state (polis in Greek). For example:
If human nature is fixed in this way, we can define what type of community is always best for people. This argument has remained a central argument in all political, ethical, and moral thinking since then, and has become especially controversial since, firstly, Rousseau's Second Discourse and, secondly, the theory of evolution. Already in Aristotle there was an awareness that the polis had not always existed and had to be invented or developed by humans themselves. The household came first, and the first villages and cities were just extensions of that, with the first cities being run as if they were still families, with kings acting like fathers.
Friendship seems to prevail in man and woman according to nature; for people are by nature pairing more than political, inasmuch as the household is prior and more necessary than the polis, and making children is more common with the animals. In the other animals, community goes no further than this, but people live together not only for the sake of making children, but also for the things for life; for from the start the functions are divided, and are different for man and woman. Thus they supply each other, putting their own into the common. It is for these reasons that both utility and pleasure seem to be found in this kind of friendship.
Rousseau in his Second Discourse finally took the shocking step of claiming that this traditional account has things in reverse: with reason, language, and rationally organized communities all having developed over a long period of time merely as a result of the fact that some habits of cooperation were found to solve certain types of problems, and that once such cooperation became more important, it forced people to develop increasingly complex cooperation—often only to defend themselves from each other.
In other words, according to Rousseau, reason, language, and rational community did not arise because of any conscious decision or plan by humans or gods, nor because of any pre-existing human nature. As a result, he claimed, living together in rationally organized communities like modern humans is a development with many negative aspects compared to the original state of man as an ape. If anything is specifically human in this theory, it is the flexibility and adaptability of humans. This view of the animal origins of distinctive human characteristics later received support from Charles Darwin's Theory of Evolution.
The two competing theories concerning the origins of reason are relevant to political and ethical thought because, according to the Aristotelian theory, a best way of living together exists independently of historical circumstances. According to Rousseau, we should even doubt that reason, language, and politics are a good thing, as opposed to being simply the best option given the particular course of events that led to today. Rousseau's theory, that human nature is malleable rather than fixed, is often taken to imply (for example by Karl Marx) a wider range of possible ways of living together than traditionally known.
However, while Rousseau's initial impact encouraged bloody revolutions against traditional politics, including both the French Revolution and the Russian Revolution, his own conclusions about the best forms of community seem to have been remarkably classical, in favor of city-states such as Geneva, and rural living.
See also
Outline of thought – Topic tree that identifies many types of thoughts/thinking, types of reasoning, aspects of thought, related fields, and more
Outline of human intelligence – Topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more
References
Further reading
Beer, Francis A., "Words of Reason", Political Communication 11 (Summer, 1994): 185–201.
Animacy

Animacy (antonym: inanimacy) is a grammatical and semantic feature, existing in some languages, expressing how sentient or alive the referent of a noun is. Widely expressed, animacy is one of the most elementary principles in languages around the globe and is a distinction acquired as early as six months of age.
Concepts of animacy constantly vary beyond a simple animate and inanimate binary; many languages operate on a hierarchical general animacy scale that ranks animacy as a "matter of gradience". Typically (with some variation of order and of where the cutoff for animacy occurs), the scale ranks humans above animals, then plants, natural forces, concrete objects, and abstract objects, in that order. In referring to humans, this scale contains a hierarchy of persons, ranking the first- and second-person pronouns above the third person, partly a product of empathy, involving the speaker and interlocutor.
Examples
The distinction between he, she, and other personal pronouns, on one hand, and it, on the other, is a distinction in animacy in English and in many Indo-European languages. The same can be said about the distinction between who and what. Some languages, such as Turkish, Georgian, spoken Finnish and Italian, do not distinguish between s/he and it. In Finnish, there is a distinction in animacy between hän, "he/she", and se, "it", but in spoken Finnish se can also mean "he/she". English shows a similar lack of distinction between they animate and they inanimate in the plural but, as shown above, it has traditionally had such a distinction in the singular, although this is being eroded by the growing use of they as an ungendered singular pronoun.
Animacy also plays a role in English: the higher animacy a referent has, the less preferable it is to use the preposition of for possession (which can also be interpreted in terms of alienable or inalienable possession):
"My face" is correct, while "the face of mine" would sound strange.
"The man's face" and "the face of the man" are both correct, but the former is preferred.
"The clock's face" and "the face of the clock" are both correct.
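One rough way to formalize this gradient is to rank possessors on the animacy scale and prefer the clitic 's genitive above a chosen cutoff. The sketch below is my own formalization for illustration, not a rule from the linguistic literature; the ranks and cutoff are invented.

```python
# A sketch of the preference described above: the more animate the
# possessor, the stronger the preference for "X's Y" over "the Y of X".
# Ranks and the cutoff are illustrative choices, not attested values.
ANIMACY_RANK = {"human": 3, "animal": 2, "object": 1, "abstract": 0}

def preferred_possessive(possessor: str, category: str, possessed: str) -> str:
    if ANIMACY_RANK[category] >= 2:                # humans and animals
        return f"{possessor}'s {possessed}"
    return f"the {possessed} of {possessor}"       # low animacy tolerates "of"

print(preferred_possessive("the man", "human", "face"))     # the man's face
print(preferred_possessive("the clock", "object", "face"))  # the face of the clock
```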
Examples of languages in which an animacy hierarchy is important include the Totonac language in Mexico and the Southern Athabaskan languages (such as Western Apache and Navajo) whose animacy hierarchy has been the subject of intense study. The Tamil language has a noun classification based on animacy.
Proto-Indo-European language
Because of the similarities in morphology of feminine and masculine grammatical gender inflections in Indo-European languages, there is a theory that in an early stage, the Proto-Indo-European language had only two grammatical genders: "animate" and "inanimate/neuter"; the most obvious difference being that inanimate/neuter nouns used the same form for the nominative, vocative, and accusative noun cases. The distinction was preserved in Anatolian languages like Hittite, all of which are now extinct.
The animate gender would then later, after the separation of the Anatolian languages, have developed into the feminine and masculine genders. The plural of neuter/inanimate nouns is believed to have had the same ending as collective nouns in the singular, and some words with the collective noun ending in singular were later to become words with the feminine gender. Traces can be found in Ancient Greek in which the singular form of verbs was used when they referred to neuter words in plural. In many Indo-European languages, such as Latin and the Slavic languages, the plural ending of many neuter words in the merged nominative–accusative–vocative corresponds to the feminine singular nominative form.
Navajo (Diné)
Like most other Athabaskan languages, Southern Athabaskan languages show various levels of animacy in their grammar, with certain nouns taking specific verb forms according to their rank in this animacy hierarchy. For instance, Navajo (Diné) nouns can be ranked by animacy on a continuum from most animate (a human) to least animate (an abstraction) (Young & Morgan 1987: 65–66):
Adult human/lightning > infant/big animal > medium-sized animal > small animal > natural force > abstraction
Generally, the most animate noun in a sentence must occur first while the noun with lesser animacy occurs second. If both nouns are equal in animacy, either noun can occur in the first position. Both sentences (1) and (2) are correct. The prefix yi- on the verb indicates that the first noun is the subject, and bi- indicates that the second noun is the subject.
Sentence (3), however, sounds wrong to most Navajo speakers because the less animate noun occurs before the more animate noun:
In order to express that idea, the more animate noun must occur first, as in sentence (4):
There is evidence suggesting that the word order itself is not the important factor. Instead, the verb construction usually interpreted as the passive voice (e.g. "the girl was pecked by the bird") instead indicates that the more animate noun allowed the less animate noun to perform the action (e.g. "the girl let herself be pecked by the bird"). The idea is that things ranked higher in animacy are presumed to be in control of the situation, and that the less-animate thing can only act if the more-animate thing permits it.
Japanese
Although nouns in Japanese are not marked for animacy, it has two existential/possessive verbs: one for implicitly animate nouns (usually humans and animals) and one for implicitly inanimate nouns (often non-living objects and plants). The verb iru (いる, also written 居る) is used to show the existence or possession of an animate noun. The verb aru (ある, sometimes written 在る when existential or 有る when possessive) is used to show the existence or possession of an inanimate noun.
An animate noun, here "cat", is marked as the subject of the verb with the subject particle ga, but no topic or location is marked. That implies the noun is indefinite and merely exists: 猫がいる (neko ga iru, "There is a cat").
In the second example, a topic is introduced, in this case "I", with the topic particle wa. The animate noun is again marked with the subject particle, and no location is denoted. That implies that the topic owns or is holding onto the noun: 私は猫がいる (watashi wa neko ga iru, "I have a cat").
In the third example, the noun is marked as the topic (and by default functions as the subject of the verb) while a location, here the top of a chair, is marked with the location particle ni. That implies that the noun is a definite noun and is at the specified location: 猫は椅子の上にいる (neko wa isu no ue ni iru, "The cat is on the chair").
In all these cases, if the noun is not animate, such as a stone instead of a cat, the verb iru must be replaced with the verb aru (有る [possessive] / 在る [existential, locative]).
In some cases in which "natural" animacy is ambiguous, whether a noun is animate or not is the decision of the speaker, as in the case of a robot, which could be correlated with the animate verb (to signify sentience or anthropomorphism) or with the inanimate verb (to emphasise that is a non-living thing).
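The verb-selection rule described in this section can be summarized in a short function. Because the animacy of edge cases such as robots is the speaker's decision, it is passed in as a parameter rather than looked up; the function name is invented and the example sentences are standard textbook ones.

```python
# Choosing the Japanese existential verb by animacy: iru for animate
# subjects, aru for inanimate ones. The subject particle ga follows the noun.
def existence_sentence(noun: str, animate: bool) -> str:
    verb = "いる" if animate else "ある"
    return f"{noun}が{verb}"

print(existence_sentence("猫", animate=True))        # 猫がいる  "there is a cat"
print(existence_sentence("石", animate=False))       # 石がある  "there is a stone"
print(existence_sentence("ロボット", animate=True))  # the speaker's choice for a robot
```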
Ryukyuan languages
The Ryukyuan languages, spoken in the Ryukyu Islands, agree in animacy in their case systems.
Slavic languages
Overview
Slavic languages that have case (all of them except Bulgarian and Macedonian) have a somewhat complex hierarchy of animacy in which syntactically animate nouns may include both animate and inanimate objects (like mushrooms and dances). Overall, the border between animate and inanimate places humans and animals in the former and plants, etc., in the latter, thus basing itself more on sentience than on life.
Animacy functions as a subgender through which noun cases intersect in a phenomenon called syncretism, which here can be either nominative-accusative or genitive-accusative. Inanimate nouns have accusative forms that take on the same forms as their nominative, with animate nouns marked by having their accusative forms resemble the genitive.
For example, syncretism in Polish conditioned by referential animacy results in forms like the following:
NOM stół 'table' -> ACC stół (like the nominative; GEN stołu): exhibiting nom-acc syncretism;
NOM kot 'cat' -> ACC kota (like the genitive; GEN kota): exhibiting gen-acc syncretism.
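Computationally, the rule amounts to a single branch: an animate noun copies its genitive into the accusative, an inanimate noun copies its nominative. The sketch below encodes only the two Polish nouns cited above; the lexicon structure is an invention for the example.

```python
# Gen-acc vs. nom-acc syncretism for Polish masculine nouns: the accusative
# form is borrowed from the genitive when the noun is animate, otherwise
# from the nominative.
LEXICON = {
    "stół": {"nom": "stół", "gen": "stołu", "animate": False},  # 'table'
    "kot":  {"nom": "kot",  "gen": "kota",  "animate": True},   # 'cat'
}

def accusative(noun: str) -> str:
    entry = LEXICON[noun]
    return entry["gen"] if entry["animate"] else entry["nom"]

print(accusative("stół"))  # stół  (nominative-accusative syncretism)
print(accusative("kot"))   # kota  (genitive-accusative syncretism)
```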
That syncretism also occurs when restricted by declension class, resulting in syncretism in multiple pronominal forms, such as the Russian reflexive pronoun sebja, personal pronouns, and the indefinite, interrogative, and relative pronoun kto.
In their plural forms, nouns of all genders may distinguish the categories of animate vs. inanimate by that syncretism, but only masculine nouns of the first declension (and their modifiers) show it in the singular (Frarie 1992:12), and other declensions and genders of nouns "restrict (morphological) expression of animacy to the plural" (Frarie 1992:47).
Masc nouns that show acc-gen (sg & plural) syncretism: муж [muʂ] 'husband', сын [sɨn] 'son', лев [lʲef] 'lion', конь [konʲ] 'horse'.
Fem animate nouns that show acc-gen (plural) syncretism: женщина [ˈʐɛnʲɕːɪnə] 'woman', лошадь [ˈɫoʂətʲ] 'horse'.
Neut animate nouns that show acc-nom (sg) and acc-gen (plural) syncretism: животное 'animal', насекомое 'insect'.
Elsewhere, animacy is displayed syntactically, such as in endings of modifiers for masc nouns of the second declension.
Animacy as a "subgender"
While animacy is viewed as primarily semantic when approached diachronically, a synchronic view suggests animacy as a sublevel of gender. Syntactic gender is defined through patterns in agreement, not necessarily semantic value. For example, Russian has "common gender" nouns that refer to traditionally masculine roles but act as syntactically feminine.
Animacy occurs as a subgender of nouns and modifiers (and pronouns only when adjectival) and is primarily reflected in modifier-head agreement (as opposed to subject-predicate agreement).
Controversy
Some consider the system to be based on marking inanimacy in which case the gen-acc distinguishes a "non-inanimate" subgender of nouns and modifiers, and others claim that ultimately it is indeed animacy that is marked.
Sinhala
In spoken Sinhala, there are two existential/possessive verbs: hiţinawā / innawā are used only for animate nouns (humans and animals), and තියෙනවා tiyenawā for inanimate nouns (like non-living objects, plants, and things).
Spanish
Nouns
In Spanish, the preposition a (meaning "to" or "at") has gained a second role as a marker of concrete animate direct objects:
Veo esa catedral. "I can see that cathedral." (inanimate direct object)
Veo a esa persona. "I can see that person." (animate direct object)
Vengo a España. "I come to Spain." (a used in its literal sense)
The usage is standard and is found around the Spanish-speaking world.
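As a sketch, the rule can be reduced to inserting the marker before animate direct objects; verb conjugation and the finer conditions on definiteness are simplified away, and the function name is invented.

```python
# The Spanish "personal a": mark a direct object with "a" when its
# referent is animate (simplified; real usage also tracks definiteness).
def with_direct_object(verb: str, obj: str, animate: bool) -> str:
    return f"{verb} a {obj}" if animate else f"{verb} {obj}"

print(with_direct_object("Veo", "esa catedral", animate=False))  # Veo esa catedral
print(with_direct_object("Veo", "esa persona", animate=True))    # Veo a esa persona
```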
Pronouns
Spanish personal pronouns are generally omitted if the subject of the sentence is obvious, but when they are explicitly stated, they are used only with people or humanized animals or things. The inanimate subject pronoun in Spanish is ello, like it in English (except ello can only be used to refer to verbs and clauses, not objects, as all nouns are either masculine or feminine and are referred to with the appropriate pronouns).
Spanish direct-object pronouns do not differentiate between animate and inanimate entities, and only the third persons have a gender distinction. Thus, for example, the third-person singular feminine pronoun, la, could refer to a woman, an animal (like mariposa, "butterfly"), or an object (like casa, "house"), if their genders are feminine.
In certain dialects, there is a tendency to use le (which is usually an indirect object pronoun, meaning "to him/her") as a direct-object pronoun, at the expense of the direct-object pronouns lo and la, if the referent is animate. That tendency is especially strong if (a) the pronoun is being used as a special second-person pronoun of respect, (b) the referent is male, (c) certain verbs are used, (d) the subject of the verb happens to be inanimate.
Arabic
In Classical and Modern Standard Arabic and some other varieties of Arabic, animacy has a limited application in the agreement of plural and dual nouns with verbs and adjectives. Verbs follow nouns in plural agreement only when the verb comes after the subject. When a verb comes before an explicit subject, the verb is always singular. Also, only animate plural and dual nouns take plural agreement; inanimate plural nouns are always analyzed as singular feminine or plural feminine for the purpose of agreement. Thus, Arabic Al-muhandisūn yaṭīrūn ’ilā ’Almāniyā ("The engineers fly to Germany") shows masculine plural agreement, but Al-ṭā’irāt taṭīr ’ilā ’Almāniyā ("The planes fly to Germany") shows feminine singular agreement. Compare Taṭīr al-muhandisāt ’ilā ’Almāniyā and Al-muhandisāt yaṭirna ’ilā ’Almāniyā for "The [female] engineers fly to Germany."
In general, Arabic divides animates between ʿāqil (thinking, or rational) and ghayr ʿāqil (unthinking, or irrational). Animals fall in the latter category, but their status may change depending on the usage, especially with personification. Different writers might use Al-ġurbān yaṭīrūn ’ilā ’Almāniyā or Al-ġurbān taṭīr ’ilā ’Almāniyā for "The ravens fly to Germany."
Animacy hierarchy and morphosyntactic alignment
Split ergativity
Animacy can also condition the nature of the morphologies of split-ergative languages. In such languages, more animate participants are more likely to be the agent of the verb and are therefore marked in an accusative pattern: unmarked in the agent role and marked in the patient or oblique role.
Likewise, less animate participants are inherently more patient-like and take ergative marking: unmarked in the patient role and marked in the agent role. The animacy hierarchy is generally, but not always, ordered as follows:
{|
| 1st person
| > || 2nd person
| > || 3rd person
| > || proper names
| > || humans
| > || animals
| > || inanimates
|}
The location of the split (the line which divides the inherently agentive participants from the inherently patientive participants) varies from language to language, and, in many cases, the two classes overlap, with a class of nouns near the middle of the hierarchy being marked for both the agent and patient roles.
Hierarchical alignment
In a direct–inverse language, clauses with transitive verbs can be expressed with either a direct or an inverse construction. The direct construction is used when the subject of the transitive clause outranks the object in salience or animacy. The inverse construction is used when the "notional object" outranks the "notional subject".
Thematic roles
A noun essentially requires the traits of animacy to receive the roles of Actor and Experiencer. Additionally, the Agent role is generally assigned to the NP ranked highest in the animacy hierarchy; ultimately, only animate beings can function as true agents. Similarly, languages universally tend to place animate nouns earlier in the sentence than inanimate nouns.
Animacy is a key component of agency, combined with other factors such as "awareness of action". Agency and animacy are intrinsically linked, each being a "conceptual property" of the other.
See also
Grammatical gender
Noun class
Classifier (linguistics)
References
Sources
Crespo Cantalapiedra, I. (2024). La diversidad en las lenguas: la animacidad. Online book (in Spanish).
Frishberg, Nancy (1972). Navajo object markers and the great chain of being. In J. Kimball (ed.), Syntax and semantics, vol. 1, pp. 259–266. New York: Seminar Press.
Hale, Kenneth L. (1973). A note on subject–object inversion in Navajo. In B. B. Kachru, R. B. Lees, Y. Malkiel, A. Pietrangeli, & S. Saporta (eds.), Issues in linguistics: Papers in honor of Henry and Renée Kahane, pp. 300–309. Urbana: University of Illinois Press.
Payne, Thomas E. (1997). Describing morphosyntax: A guide for field linguists. Cambridge: Cambridge University Press.
Young, Robert W., & Morgan, William, Sr. (1987). The Navajo language: A grammar and colloquial dictionary (rev. ed.). Albuquerque: University of New Mexico Press.
Nouns by type
Grammatical gender
Linguistic morphology
Syntax–semantics interface | 0.778419 | 0.987703 | 0.768847 |
Feminist epistemology | Feminist epistemology is an examination of epistemology from a feminist standpoint.
Overview
Feminist epistemology claims that ethical and political values are important in shaping epistemic practices and interpretations of evidence, and it has existed as a distinct field for several decades. Feminist epistemology studies how gender influences our understanding of knowledge, justification, and the theory of knowledge; it describes how knowledge and justification disadvantage women. The term combines feminism and epistemology: feminism is concerned with the abolition of gender and sex inequalities, particularly as suffered by women, while epistemology is the inquiry into the nature of knowledge. Scholars of feminist epistemology claim that some theories of knowledge discriminate against women by barring them from inquiry, unfairly criticizing their cognitive styles, and producing theories of women and social phenomena that reinforce gender hierarchies and represent women as inferior. The easy and uncontroversial point is that much of what has been recognised as knowledge and passed on in academic and industrial circles has been produced by men; consequently, their experiences and concerns have served to determine its direction. According to feminist epistemologists, these failures in dominant knowledge result from faulty scientific methodologies and conceptions of knowledge. Feminist epistemologists therefore attempt to propagate theories that aid liberation and egalitarian causes and to defend these endeavors as advances in knowledge.
The central idea of feminist epistemology is that knowledge reflects the particular perspectives of the theorist. The main interest of feminist philosophers is how gender stereotypes situate knowing subjects. They approach this interest from three different perspectives: feminist standpoint theory, feminist postmodernism, and feminist empiricism. Standpoint theory defines a specific social perspective as epistemically privileged. Feminist postmodernism emphasizes the instability of the social identity of inquirers and therefore of their representations. Empiricism focuses on combining the main ideas of feminism with empirical observation in order to support feminist theories through evidence.
Elizabeth Anderson argues that the concept of situated knowledge is central to feminist epistemology. Donna Haraway asserts that knowledge (in particular academic knowledge) is always situated and "produced by positioned actors working in/between all kinds of locations, working up/on/through all kinds of research relation(ships)", and thus what is known and the ways in which this knowledge can be known is subject to the position—the situation and perspective—of the knower.
The English feminist philosopher Miranda Fricker has argued that in addition to social or political injustices, there can be epistemic injustices in two forms: testimonial injustice and hermeneutical injustice. Testimonial injustice consists in prejudices that cause one to "give a deflated level of credibility to a speaker's word": Fricker gives the example of a woman who due to her gender is not believed in a business meeting. She may make a good case, but prejudice causes the listeners to believe her arguments to be less competent or sincere and thus less believable. In this kind of case, Fricker argues that as well as there being an injustice caused by possible outcomes (such as the speaker missing a promotion at work), there is a testimonial injustice: "a kind of injustice in which someone is wronged specifically in her capacity as a knower". Such an awareness allows a hearer to account for the likely impact of the identity power relation that mediates between himself and the speaker on his spontaneous perception, essentially correcting for the problems that can result in transactions of testimonial injustice.
In the case of hermeneutical injustice, "speakers' knowledge claims fall into lacunae in the available conceptual resources, thus blocking their capacity to interpret, and thence to understand or claim a hearing for their experiences." For example, when the language of 'sexual harassment' or 'homophobia' were not generally available, those who experienced these wrongs lacked the resources to make a claim to being wronged in morally relevant ways.
The philosopher Susan Haack is a notable critic of feminist epistemology.
Sandra Harding organized feminist epistemology into three categories: feminist empiricism, standpoint epistemology, and postmodern epistemology. While these are potentially a limited set of categories, postmodern feminism was a transitional ideology that denounced absolute objectivity and asserted the death of the meta-narrative. While these three categories of feminist epistemology have their place in history (see feminist empiricism, standpoint feminism, postmodern feminism), as ideological frameworks they still hold epistemic insights for contemporary feminist method. Feminist theorist Nina Lykke has expanded upon these three categories to include "postmodern feminist (anti-)epistemology...[and]...postconstructionist feminist epistemology".
Feminist empiricism
Feminist empiricism emerged from a feminist critique that drew attention to male bias in positivistic practices of science. Second-wave feminist researchers identified how quantification and objectivity, as facets of positivism, have been held up as the "gold standard" for social and political science research. Quantification, and its political relationship to notions of objectivity, maintains methodological dominance and preference primarily in the United States, a dominance perpetuated by funding authorities' tendency to prioritize quantitative research with positivist frameworks.
Feminist empiricists believe in the concept of positivism: that all knowledge can be understood objectively and can be accessed through empirical research. They assert that pre-feminist positivism was actually not objective at all, since traditional positivism's 'androcentric bias' led to only partial or 'subjective' knowledge of the world. In essence, empirical inquiry had been skewed by the value judgments and biased interpretations of evidence of male-dominated authorities. For instance, it was not until surveys in the 1970s retrieved statistical data on the prevalence of women experiencing (what is now known as) 'sexual harassment' in the workplace that sexual harassment became identified by political authorities as a commonality. Without this feminist intervention in an empirical field, the phenomenon might never have been identified as an issue, since men had no reason to pursue it. Londa Schiebinger further asserts that empirical research "embodies many core feminist values", in that feminist empiricists actively seek out and eliminate exploitative research while resisting strategic, oppressive explanations of data.
Feminist empiricism is critiqued for its belief that "objectivity" is best achieved through quantification, whether or not viewed through a feminist lens or utilized for feminist ideals. The division between quantitative and qualitative data has historically reinforced gendered dichotomies of "hard/soft, emotional/rational, worthy/worthless". Many assert that 'objective truth' is a false concept, and thus feminist empiricists may overestimate the extent to which they can increase objectivity. Furthermore, positivism and quantitative research has been critiqued as a "detached" philosophical framework that inherently objectifies its research subjects.
Feminist empiricists respond to the problem of value-neutrality by extending Quine's argument that theory is underdetermined by evidence: an observation counts as evidence for a particular hypothesis only in conjunction with certain background assumptions, since the same observation might support different hypotheses. In practice, the selection of background assumptions is constrained chiefly by cognitive values like simplicity and conservatism (in the epistemic sense of preserving existing belief, not the political philosophy of retaining traditional social establishments). Feminist empiricists state that no logical or methodological principle categorically prohibits scientists from choosing background assumptions that reflect their political and social values or other interests. Therefore, feminist scientists may select their background assumptions in light of feminist values.
Two paradoxes
There are two central paradoxes with feminist empiricism—the paradox of bias and the paradox of social construction.
Paradox of bias
Many feminist empiricists advocate for exposing the androcentric and sexist biases in scientific research, namely the bias of researchers toward assumptions about gender difference and sexuality. However, while feminist empiricists claim that feminist inquiry helps the development of science, their own perspective adopts certain biases about gender and science.
The paradox of bias emerges from arguments that revise or reject conceptions of impartiality and objectivity in research. The paradox lies between feminist empiricism's two main commitments. First, feminist empiricism is committed to the feminist project: feminists are determined to expose, subvert, and overcome all forms of oppression. In the context of feminist epistemology, the consequence is that feminists constantly attack impartiality as a disguise for the subjective interests of the powerful in society. The second commitment is to empiricism, where feminists pledge allegiance to the methods and tools of analytic philosophy. While the feminist commitment involves an acknowledged partiality, empiricism requires its adherents to endorse impartiality. Therefore, a paradox of bias confronts both empiricism and epistemological views that attempt to balance subjectivism and objectivism in the acquisition of knowledge.
Simply stated, the paradox of bias is the tension between feminists who criticize male bias for lacking impartiality and feminists who reject the ideal of impartiality itself. The latter claim that objectivity and neutrality are unattainable, which becomes problematic when they claim objectivity for their own viewpoints. According to Louise Antony, who formulated the paradox, all epistemological views are biased, and it is difficult to distinguish between various subjective principles without biased or partial evaluation standards; it therefore becomes difficult to conceptualize and evaluate bias while rejecting impartiality. Antony makes several claims in formulating the paradox of bias. First, she asserts that impartiality is not a tenable ideal of epistemic practice. Second, the untenability of impartiality means that all epistemic practices have an inherent bias. Third, it is impossible to develop impartial criteria for assessing the epistemic value of biases if all practices are biased. Lastly, all biases are then equal, since there are no unbiased criteria for evaluating practices. These claims suggest that one must either endorse impartiality or stop distinguishing between good and bad biases.
Paradox of social construction
Many feminist critics argue that science is generally influenced by political and social factors: scientists advance sexist and androcentric theories under the influence of sexist values in society. This implies the existence of social biases in science, which, on one view, may be eliminated by adopting an individualist epistemology. Nonetheless, many feminists hold that scientific practices are open to diverse social influences, resulting in the paradox of social construction.
Criticism of empiricism theory
Feminist empiricism is the most criticized of the three approaches, for its assumption that a transhistorical subject of knowledge exists outside of social determination (Harding 1990). Feminist empiricism also holds that science will, by itself, correct the biases and errors in theories about women and other groups. According to Harding, this criticism stems from the perception that it is sufficient to eliminate sexist bias without otherwise altering traditional scientific methods. Feminist empiricism has also been criticized for ignoring the role of feminist political activity as a vital source of evidence and hypotheses with which to challenge androcentric and sexist theories. This criticism applies especially to the development of oppositional consciousness as an element of feminist political activity.
Standpoint epistemology
At a basic level, standpoint epistemology asserts that marginalized groups such as women are bestowed with an "epistemic privilege": the potential for less distorted understandings of the world than those available to dominant groups, such as men. This methodology builds on the feminist empiricist observation that androcentric dominance and bias yield an incomplete understanding of the world. A "standpoint" is not so much a subject's biased perspective as the 'realities' that structure social relationships of power.
Standpoint theories portray the world from a concrete, socially situated perspective. Every standpoint theory must specify the social location that confers the privileged perspective, the scope of that privilege, the social role or identity that generates the knowledge, and the justification of the privilege. Feminist standpoint theory locates such a privilege in gender relations, and various feminist standpoint theories are grounded in claims about the epistemic privilege of particular feminist situations. Feminist standpoint theory is a type of critical theory: its main intention is to improve the situation of those it studies. To achieve this critical aim, social theories must represent an understanding of the problems women face and try to improve their condition. Critical theory is theory of, by, and for the subjects of study. Through these methods, standpoint epistemology addresses the paradox of bias on which feminist empiricism founders. It presents an elaborate method for maximizing "strong objectivity" in natural and social science, yet does not focus on encouraging positivistic scientific practices, as is central to feminist empiricism.
Although standpoint epistemology has been critiqued for focusing too closely on a distinctive women's perspective which may render invisible concepts of historically and sociologically variable knowledge, Harding strongly asserts that standpoint epistemology does not essentialize any particular marginalized identity. Harding further argues that the methodology does not subscribe to notions of "maximizing neutrality" between groups in an effort to maximize objectivity, but instead recognizes that the power relations between groups are what complicate these relationships. This is in some ways contrary to Doucet's assertion that the controversy of how power influenced knowledge production is a post-standpoint, more contemporary debate. Standpoint epistemology also poses a necessity to ask critical questions about the lives and social institutions created by dominant groups; where the field becomes a sociology for women and not solely about women.
In practicality, standpoint theory has widespread use as "a philosophy of knowledge, a philosophy of science, a sociology of knowledge, a moral/political advocacy of the expansion of democratic rights". Although it has been asserted that "epistemic privilege" is inherent to marginalized groups, Harding poses standpoint theory as an explanatory means for both marginalized and dominant group individuals to be able to achieve liberatory perspectives. In building her standpoint epistemology, Sandra Harding used and built on her interpretation of the work of philosophers of science Thomas Kuhn and Willard Quine. Harding's standpoint theory is also grounded in Marxism, although she largely rejected classical Marxism for its portrayal of women in merely class terms.
In The Structure of Scientific Revolutions, Kuhn argued that scientific progress does not occur through gradual accumulation of correct ideas. Rather, he believed that there were occasionally large revolutions that completely overturned the previous scientific theories. When a crisis occurs within the prevailing theory of a time, revolutionary scientists will challenge it and build new scientific theories. For example, in his view, the transition from the geocentrism of Ptolemy to the heliocentric theory of Copernicus did not occur through a gradual series of challenges and improvements to the previous model. Rather, it was a sudden and complete revolution because it is impossible to conceptualize the theory of heliocentrism within the dominant geocentric theory. Kuhn argued that together, the ideas of Newton, Galileo, and Kepler completed the revolution that Copernicus started. However, most students of science do not learn of the many failed and alternative scientific paradigms. They are taught a version of the history of science where progress is guaranteed and linear. In Harding's view, Kuhn's theories showed that all science was situated within its historical context, and that any theory could remain accepted if its believers held power.
Criticism of standpoint theory: Philosopher Helen Longino rejects standpoint theory because, she claims, it cannot determine which standpoints carry the most privilege. Bar On (1993) argued that if a feminine ethics of care provides a privileged perspective on morality, then our moral knowledge is conditioned merely by the existence of gender relations. Bar On also claims that the Marxist account of the structural relationship between dominant and subordinate classes, which underwrites claims of epistemic privilege, cannot simply be applied to women; Marx claimed that class conflict gives rise to other conflicts such as racism, sexism, and national and religious conflicts.
Feminist epistemology is criticized by different philosophers. Feminist postmodernists fault feminist empiricists for assuming the existence of an individual knower and for admitting an uncritical concept of experience. The Quinean naturalized epistemology of some feminist empiricists perceives knowers as socially situated; Hundleby, a standpoint theorist, nevertheless criticizes feminist empiricism for disregarding the key role of women in political activities.
Standpoint theory is often criticized for the lack of evidence available to support it and the ideas underlying it, such as the lack of justification for the underdetermination theory Harding uses. Pinnick, to illustrate her point about Harding's poor evidence, points to standpoint theory's claim that science is more objective if it is politically motivated, which Pinnick claims runs contrary to what has happened in the past when scientists deliberately injected politics into their theories (she cites eugenics and intelligence test designs as examples of politicized science). She also criticizes Harding for claiming that marginalized groups produce better, less biased scientific results because, according to Pinnick, Harding fails to provide any empirical evidence for this idea.
Postmodernism
Postmodernism is inspired by postmodernist and poststructuralist theorists such as Lyotard and Foucault, who question universality and objectivity as ways to transcend situatedness. In other words, postmodernism focuses on the partiality, locality, and contestability of worldviews. By delegitimizing dominant ideas, postmodernism allows for imagination that was previously obscured. Post-modern thought marks a feminist group shift away from dominant, positivistic ideals of objectivity and universal understanding. Instead, it acknowledges a diversity of unique human perspectives, none of which can claim absolute knowledge authority. Post-modern feminism has thus been critiqued for having a relativist stance, in which ongoing power relations between key identities are often neglected. It is possible to see this political stance as in direct opposition to the "emancipatory aspirations" of women. However, Saba Mahmood would argue this critique is in some ways oppositional to global understandings of female desire, where the idea of 'freedom' is an essential, conditionally oppressive component of western feminism, which may wrongly assume that women of eastern countries dominated by male power are victims needing to be liberated. As such, feminist postmodernism opposes traditional theories that justify sexist practices. Such theories perpetuate the ideas that the differences between men and women are natural, or that women have innate characteristics that justify their inferior position in society. For instance, while essentialism claims that gender identity is universal, feminist postmodernism suggests that such theories exclude marginalized groups such as lesbians and women of color. Such exclusions reproduce power relations, as heterosexual white middle-class women are assumed to represent all women.
Donna Haraway, a postmodern feminist, claims that postmodern feminism recognizes positivism as an inherently oppressive ideology, in which science's rhetoric of truth has been used to undermine marginalized people's agency and delegitimize 'embodied' accounts of truth. Furthermore, postmodern feminists argue that 'objectivity' is an external, disembodied point of view available only to the privileged (unmarked bodies), because the marginalized (marked bodies) cannot have perspectives dissociated from 'who they are'. Despite criticism of post-modern relativism, the theory resists relativism by firmly recognizing power relations, in that objectivity is a privilege of unmarked bodies. Haraway's theory of "situated knowledges" holds true to post-modern ideology, in which knowledge should be placed in context; this creates a more limited range of knowledge than theoretical "objectivity", but is richer in allowing for the exchange of understanding between individual experiences. Positivism inherently gives way to authoritarian positions of knowledge, which hinder discussion and render limited understanding of the world. Both positivist science and relativism have been recognized as contrary to post-modern feminist thought, since both minimize the significance of context (geographic, demographic, power) for knowledge claims.
Haraway in Postmodern Bodies: Haraway introduced biopolitics, a concept connecting politics to life, as a primary category of the postmodern period. In one of her most famous essays, "The Biopolitics of Postmodern Bodies: Determinations of Self in Immune System Discourse," she regards the human body as a subject composed of independent systems that interact with one another in a political or strategic sense. According to Haraway, these bodily functions coexist while operating as separate strategic entities.
Criticism of postmodernism: Feminist postmodernism has been criticized for its rejection of woman as a category of study and for its fragmentation of perspectives. Critics claim that although women experience sexism differently, it is still a characteristic common to them (MacKinnon 2000), and that while differences exist between different classes of women, the different modes of sexism may be accommodated through an intersectional approach. On this criticism, postmodernism dissolves all groups and supports the idea that knowledge from any source is better than no knowledge at all (Bordo 1990).
Theory in the flesh
Post-modern feminism's assertion of "situated knowledges" plays well into Cherríe Moraga's piece "Theory in the Flesh", where the 'physical realities' of indigenous peoples' lives are said to be the means of creating a decolonial politics against oppressive, inaccessible, Eurowestern academic methods of knowledge production. In her piece, Moraga highlights the various forms of oppression that stem from various forms of discrimination. Though women of color are disproportionately stigmatized, all women, in general, suffer from societal repression. Moraga asserts that internalized racism and classism determine the disparity of treatment between blacks and whites.
This epistemological framework has been utilized by feminists like bell hooks, who claims that theorizing is often tied to a process of self-recovery and collective liberation; it is thus not limited to those in the western academic realm, nor does it require 'scientific' research. Hooks asserts that theory and the practical application of emancipatory politics can, and often do, exist simultaneously and reciprocally. Post-modern feminism has given way to the question of whether there should be any particular feminist ways of knowing at all. A 'theory in the flesh' seems to suggest that prioritizing or normalizing any specific feminist epistemology would in itself be, and has been, oppressive. According to Moraga, feminism needs to function as a united, all-inclusive body that promotes gender equality across all spectrums. Racism integrated within feminism needs to be dismantled to achieve true equity, and internalized oppression needs to be avoided at all costs because it exacerbates systematic racist and classist discrimination.
Feminist epistemic virtue theory
This theory focuses on how power and gender relations behave in terms of value theory and epistemology. Bordo (1990) and Lloyd (1984) examined how "maleness" and "femaleness" are used in philosophical theories and in discussions of paired notions such as reason/unreason, reason/emotion, and objectivity/subjectivity. Lorraine Code (1987, 1991, 1995, 1996), with other feminist co-workers, examined the ways in which political and social practice shapes our identities and perspectives on the world, especially gender, and how this leads to an understanding of epistemic responsibility. Code's work has also been influential as a version of naturalized epistemology, one on which even simple and uncontroversial empirical beliefs, such as "I know that I am seeing a bird", cannot be detached from our nature as social, epistemic animals. Feminist epistemic virtue theorists reject almost all of the traditional assumptions; skeptical problems find no purchase within the theory and are ignored or treated as pseudo-problems.
Feminist science criticism and feminist science
Feminist science criticism: bias as error
Feminist science criticism comprises five main kinds of research about gender and science, addressing five identified biases. These are studies of how:
Exclusion or marginalization of women scientists impairs scientific progress.
Applications of science and technology disadvantage women and other vulnerable groups and treat their interests as less important.
Science has ignored women and gender, and how turning attention to these issues may require revisions of accepted theories.
Biases toward working with "masculine" cognitive styles (and in some cases even the words related to them) may, through a limiting, partial, or incomplete perspective, lead to errors of omission or unjustified conclusions.
Research into sex differences that reinforces sex stereotypes and sexist practices fails to live up to the standards of good science.
Feminist science: bias as resource
Research bias is partial or limiting, but not wrong, if it has some empirical success and avoids error. Such bias may be considered acceptable and suitable to serve as a basis for epistemic inquiry. Biases of this kind help in gaining more understanding of the world by producing new hypotheses, methods, and concepts, thus serving as epistemic resources. According to feminist philosophers, research should not be dominated by a few limiting biases that exclude other generative standpoints; including those standpoints results in wider conceptions of research subjects.
Proponents of feminist science claim that scientific studies informed by feminist values are founded on sound biases that are generative rather than merely limiting. This paints a pluralistic picture of science, which appears disunified because of the presence of diverse structures not encompassed by any single theory. In other words, allowing communities to freely explore their interests reveals multiple structures and patterns. Opposing this view, some scientists claim that feminist science should follow specific methodologies and ontologies. That view has in turn been opposed by supporters of pluralism, who argue that there are no methods unique to feminist science, and that sticking to specific methods tends to favor certain types of representation, which may also reinforce sexism.
See also
Epistemic advantage
Feminist philosophy of science
References
External links
'Positionality/Situated Knowledge', by Ian Cook (PDF)
Feminism
Social epistemology
Feminist philosophy
Postmodernism
Postmodern theory | 0.785224 | 0.97905 | 0.768773 |
Analytic philosophy | Analytic philosophy is an analysis focused, broad, contemporary movement or tradition within Western philosophy, especially anglophone philosophy. Analytic philosophy is characterized by a clarity of prose; rigor in arguments; and making use of formal logic and mathematics, and, to a lesser degree, the natural sciences. It is further characterized by an interest in language and meaning known as the linguistic turn. It has developed several new branches of philosophy and logic, notably philosophy of language, philosophy of mathematics, philosophy of science, modern predicate logic and mathematical logic.
The proliferation of analysis in philosophy began around the turn of the 20th century and has been dominant since the latter half of the 20th century. Central figures in its historical development are Gottlob Frege, Bertrand Russell, G. E. Moore, and Ludwig Wittgenstein. Other important figures in its history include Franz Brentano, the logical positivists (particularly Rudolf Carnap), the ordinary language philosophers, W. V. O. Quine, and Karl Popper. After the decline of logical positivism, Saul Kripke, David Lewis, and others led a revival in metaphysics.
Analytic philosophy is often contrasted with continental philosophy, which was coined as a catch-all term for other methods that were prominent in continental Europe, most notably existentialism, phenomenology, and Hegelianism. There is widespread influence and debate between the analytic and continental traditions; some philosophers see the differences between the two traditions as being based on institutions, relationships, and ideology, rather than anything of significant philosophical substance. The distinction has also been drawn between "analytic" being academic or technical philosophy and "continental" being literary philosophy.
History of analytic philosophy
Austrian realism
Analytic philosophy was deeply influenced by what is called Austrian realism in the former state of Austria-Hungary, so much so that Michael Dummett has remarked that analytic philosophy is better characterized as Anglo-Austrian rather than the usual Anglo-American.
Brentano
University of Vienna philosopher and psychologist Franz Brentano—in Psychology from an Empirical Standpoint (1874) and through the subsequent influence of the School of Brentano and its members, such as Edmund Husserl and Alexius Meinong—gave to analytic philosophy the problem of intentionality or of aboutness. For Brentano, all mental events have a real, non-mental intentional object, which the thinking is directed at or "about".
Meinong
Meinong is known for his unique ontology of real nonexistent objects as a solution to the problem of empty names. The Graz School followed Meinong.
Lwów–Warsaw
The Polish Lwów–Warsaw school, founded by Kazimierz Twardowski in 1895, grew as an offshoot of the Graz School. It was closely associated with the Warsaw School of Mathematics.
Frege
Gottlob Frege (1848–1925) was a German geometry professor at the University of Jena who is understood as the father of analytic philosophy. Frege proved influential as a philosopher of mathematics in Germany at the beginning of the 20th century. He advocated logicism, the project of reducing arithmetic to pure logic.
Logic
As a result of his logicist project, Frege developed predicate logic in his book Begriffsschrift (English: Concept-script, 1879), which allowed for a much greater range of sentences to be parsed into logical form than was possible using the ancient Aristotelian logic. An example of this is the problem of multiple generality.
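A sentence exhibiting multiple generality, such as "every boy loves some girl", resists Aristotelian analysis but is straightforward in Frege's predicate logic; the two readings are distinguished by quantifier order (the predicate letters below are illustrative):

\[
\forall x\,\bigl(\mathit{Boy}(x) \to \exists y\,(\mathit{Girl}(y) \land \mathit{Loves}(x,y))\bigr)
\]
\[
\exists y\,\bigl(\mathit{Girl}(y) \land \forall x\,(\mathit{Boy}(x) \to \mathit{Loves}(x,y))\bigr)
\]

The first says each boy loves some girl or other; the second, that one particular girl is loved by every boy.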
Number
Neo-Kantianism dominated the late 19th century in German philosophy. Edmund Husserl's 1891 book Philosophie der Arithmetik argued that the concept of the cardinal number derived from psychical acts of grouping objects and counting them.
In contrast to this "psychologism", Frege in The Foundations of Arithmetic (1884) and The Basic Laws of Arithmetic (, 1893–1903), argued similarly to Plato or Bolzano that mathematics and logic have their own public objects, independent of the private judgments or mental states of individual mathematicians and logicians. Following Frege, the logicists tended to advocate a kind of mathematical Platonism.
Language
Frege also proved influential in the philosophy of language and analytic philosophy's interest in meaning. Michael Dummett traces the linguistic turn to Frege's Foundations of Arithmetic and his context principle.
Frege's paper "On Sense and Reference" (1892) is seminal, containing Frege's puzzles and providing a mediated reference theory. His paper "The Thought: A Logical Inquiry" (1918) reflects both his anti-idealism or anti-psychologism and his interest in language. In the paper, he argues for a Platonist account of propositions or thoughts.
Russell
British philosophy in the 19th century had seen a revival of logic started by Richard Whately, in reaction to the anti-logical tradition of British empiricism. The major figure of this period is English mathematician George Boole. Other figures include William Hamilton, Augustus de Morgan, William Stanley Jevons, Alice's Adventures in Wonderland author Lewis Carroll, Hugh MacColl, and American pragmatist Charles Sanders Peirce.
British philosophy in the late 19th century was dominated by British idealism, a neo-Hegelian movement, as taught by philosophers such as F. H. Bradley (1846–1924) and T. H. Green (1836–1882).
Analytic philosophy in the narrower sense of 20th and 21st century anglophone philosophy is usually thought to begin with Cambridge philosophers Bertrand Russell and G. E. Moore's rejection of Hegelianism for being obscure; or the "revolt against idealism"—see for example Moore's "A Defence of Common Sense". Russell summed up Moore's influence:
Paradox
Bertrand Russell, during his early career, was much influenced by Frege. Russell famously discovered the paradox in Basic Law V which undermined Frege's logicist project. However, like Frege, Russell argued that mathematics is reducible to logical fundamentals, in The Principles of Mathematics (1903). He also argued for Meinongianism.
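The paradox Russell found can be sketched briefly: Frege's Basic Law V licenses the class of all classes that are not members of themselves, and asking whether that class is a member of itself yields a contradiction:

\[
R = \{x : x \notin x\} \qquad\text{whence}\qquad R \in R \iff R \notin R.
\]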
On Denoting
Russell sought to resolve various philosophical problems by applying Frege's new logical apparatus, most famously in his theory of definite descriptions in "On Denoting", published in Mind in 1905. Russell here argues against Meinongianism. He argues all names (aside from demonstratives like "this" or "that") are disguised definite descriptions, using this to solve ascriptions of nonexistence. This position came to be called descriptivism.
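On Russell's analysis, a sentence such as "the present King of France is bald" asserts the conjunction of existence, uniqueness, and predication; since the existence conjunct is false, the sentence comes out false rather than meaningless, and no nonexistent object is required. A standard rendering, with illustrative predicate letters (K for "is a present King of France", B for "is bald"):

\[
\exists x\,\bigl(K(x) \land \forall y\,(K(y) \to y = x) \land B(x)\bigr)
\]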
Principia Mathematica
Later, his book written with Alfred North Whitehead, Principia Mathematica (1910–1913), the seminal text of classical logic and of the logicist project, encouraged many philosophers to renew their interest in the development of symbolic logic. It used a notation from Italian logician Giuseppe Peano, and it uses a theory of types to avoid the pitfalls of Russell's paradox. Whitehead developed process metaphysics in Process and Reality.
Ideal language
Additionally, Russell adopted Frege's predicate logic as his primary philosophical method, a method Russell thought could expose the underlying structure of philosophical problems. Logical form would be made clear by syntax. For example, the English word "is" has three distinct meanings, which predicate logic can express as follows:
For the sentence 'the cat is asleep', the is of predication means that "x is P" (denoted as P(x)).
For the sentence 'there is a cat', the is of existence means that "there is an x" (∃x).
For the sentence 'three is half of six', the is of identity means that "x is the same as y" (x=y).
From about 1910 to 1930, analytic philosophers like Frege, Russell, Moore, and Russell's student Ludwig Wittgenstein emphasized creating an ideal language for philosophical analysis, which would be free from the ambiguities of ordinary language that, in their opinion, often made philosophy invalid. During this phase, they sought to understand language (and hence philosophical problems) by using logic to formalize how philosophical statements are made.
Logical atomism
An important aspect of Hegelianism and British idealism was logical holism—the opinion that there are aspects of the world that can be known only by knowing the whole world. This is closely related to the doctrine of internal relations, the opinion that relations between items are internal relations, that is, essential properties of the nature of those items.
Russell and Moore in response promulgated logical atomism and the doctrine of external relations—the belief that the world consists of independent facts. Inspired by developments in modern formal logic, the early Russell claimed that the problems of philosophy can be solved by showing the simple constituents of complex notions.
Early Wittgenstein
Wittgenstein developed a comprehensive system of logical atomism with a picture theory of meaning in his Tractatus Logico-Philosophicus (, 1921) sometimes known as simply the Tractatus. He claimed the universe is the totality of actual states of affairs and that these states of affairs can be expressed and mirrored by the language of first-order predicate logic. Thus a picture of the universe can be constructed by expressing facts in the form of atomic propositions and linking them using logical operators.
Wittgenstein thought he had solved all the problems of philosophy with the Tractatus. The work further ultimately concludes that all of its propositions are meaningless, illustrated with a ladder one must toss away after climbing up it.
Logical positivism
During the late 1920s to 1940s, a group of philosophers known as the Vienna Circle, and another known as the Berlin Circle, developed Russell and Wittgenstein's philosophy into a doctrine known as "logical positivism" (or logical empiricism). The Vienna Circle was led by Moritz Schlick and included Rudolf Carnap and Otto Neurath. The Berlin Circle was led by Hans Reichenbach and included Carl Hempel and the mathematician Richard von Mises.
Logical positivists used formal logical methods to develop an empiricist account of knowledge. They adopted the verification principle, according to which every meaningful statement is either analytic or synthetic. The truths of logic and mathematics were tautologies, and those of science were verifiable empirical claims. These two constituted the entire universe of meaningful judgments; anything else was nonsense.
This led the logical positivists to reject many traditional problems of philosophy, especially those of metaphysics, as meaningless. It had the additional effect of making (ethical and aesthetic) value judgments (as well as religious statements and beliefs) meaningless.
Logical positivists therefore typically considered philosophy as having a minimal function. For them, philosophy concerned the clarification of thoughts, rather than having a distinct subject matter of its own.
Several logical positivists were Jewish, such as Neurath, Hans Hahn, Philipp Frank, Friedrich Waismann, and Reichenbach. Others, like Carnap, were gentiles but socialists or pacifists. With the coming to power of Adolf Hitler and Nazism in 1933, many members of the Vienna and Berlin Circles fled to Britain and the United States, which helped to reinforce the dominance of logical positivism and analytic philosophy in anglophone countries.
In 1936, Schlick was murdered in Vienna by his former student Hans Nelböck. The same year, A. J. Ayer's work Language, Truth and Logic introduced the English-speaking world to logical positivism.
The logical positivists saw their rejection of metaphysics in some ways as a recapitulation of a quote by David Hume:
If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.
Ordinary language
After World War II, from the late 1940s to the 1950s, analytic philosophy became involved with ordinary-language analysis. This resulted in two main trends.
Later Wittgenstein
One strain of language analysis continued Wittgenstein's later philosophy, from the Philosophical Investigations (1953), which differed dramatically from his early work of the Tractatus. The criticisms of Frank P. Ramsey on color and logical form in the Tractatus led to some of Wittgenstein's first doubts about his early philosophy. Philosophers refer to them as two different philosophers: "early Wittgenstein" and "later Wittgenstein". In his later philosophy, Wittgenstein develops the concept of a "language-game" and, rather than his prior picture theory of meaning, advocates a theory of meaning as use. It also contains the private language argument and the notion of family resemblance.
Oxford philosophy
The other trend was known as "Oxford philosophy", in contrast to earlier analytic Cambridge philosophers (including the early Wittgenstein) who thought philosophers should avoid the deceptive trappings of natural language by constructing ideal languages. Influenced by Moore's Common Sense and what they perceived as the later Wittgenstein's quietism, the Oxford philosophers claimed that ordinary language already represents many subtle distinctions not recognized in the formulation of traditional philosophical theories or problems.
While schools such as logical positivism emphasize logical terms, which are supposed to be universal and separate from contingent factors (such as culture, language, historical conditions), ordinary-language philosophy emphasizes the use of language by ordinary people. The most prominent ordinary-language philosophers during the 1950s were P. F. Strawson, J. L. Austin, and Gilbert Ryle.
Ordinary-language philosophers often sought to resolve philosophical problems by showing them to be the result of misunderstanding ordinary language. Ryle, in The Concept of Mind (1949), criticized Cartesian dualism, arguing in favor of disposing of "Descartes' myth" via recognizing "category errors".
Strawson first became well known with his article "On Referring" (1950), a criticism of Russell's theory of descriptions explained in the latter's famous "On Denoting" article. In his book Individuals (1959), Strawson examines our conceptions of basic particulars. Austin, in the posthumously published How to Do Things with Words (1962), emphasized the theory of speech acts and the ability of words to do things (e.g. "I promise") and not just say things. This influenced several fields to undertake what is called a performative turn. In Sense and Sensibilia (1962), Austin criticized sense-data theories.
Spread of analytic philosophy
Australia and New Zealand
The school known as Australian realism began when John Anderson accepted the Challis Chair of Philosophy at the University of Sydney in 1927. His elder brother was William Anderson, Professor of Philosophy at Auckland University College from 1921 to his death in 1955, who was described as "the most dominant figure in New Zealand philosophy." J. N. Findlay was a student of Ernst Mally of the Austrian realists and taught at the University of Otago.
Finland
The Finnish Georg Henrik von Wright succeeded Wittgenstein at Cambridge in 1948.
Contemporary analytic philosophy
Metaphysics
One striking difference with respect to early analytic philosophy was the revival of metaphysical theorizing during the second half of the 20th century, and metaphysics remains a fertile topic of research. Although many discussions are continuations of old ones from previous decades and centuries, the debates remain active.
Decline of logical positivism
The rise of metaphysics mirrored the decline of logical positivism, first challenged by the later Wittgenstein.
Sellars
Wilfrid Sellars's criticism of the "Myth of the Given", in Empiricism and the Philosophy of Mind (1956), challenged logical positivism by arguing against sense-data theories. In his "Philosophy and the Scientific Image of Man" (1962), Sellars distinguishes between the "manifest image" and the "scientific image" of the world. Sellars's goal of a synoptic philosophy that unites the everyday and scientific views of reality is the foundation and archetype of what is sometimes called the Pittsburgh School, whose members include Robert Brandom, John McDowell, and John Haugeland.
Quine
Also among the developments that resulted in the decline of logical positivism and the revival of metaphysical theorizing was Harvard philosopher W. V. O. Quine's attack on the analytic–synthetic distinction in "Two Dogmas of Empiricism", published in 1951 in The Philosophical Review and republished in Quine's book From A Logical Point of View (1953), a paper "sometimes regarded as the most important in all of twentieth-century philosophy".
From a Logical Point of View also contains Quine's essay "On What There Is" (1948), which elucidates Russell's theory of descriptions and contains Quine's famous dictum of ontological commitment, "To be is to be the value of a variable". He also dubbed the problem of nonexistence Plato's beard.
Quine sought to naturalize philosophy and saw philosophy as continuous with science, but in place of logical positivism he advocated a kind of semantic holism and ontological relativity, which held that every term in any statement has its meaning contingent on a vast network of knowledge and belief, the speaker's conception of the entire world. In his magnum opus Word and Object (1960), Quine introduces the idea of radical translation to motivate his thesis of the indeterminacy of translation and, specifically, the inscrutability of reference.
Kripke
Important also for the revival of metaphysics was the further development of modal logic, first introduced by pragmatist C. I. Lewis, especially the work of Saul Kripke and his Naming and Necessity (1980).
According to one author, Naming and Necessity "played a large role in the implicit, but widespread, rejection of the view—so popular among ordinary language philosophers—that philosophy is nothing more than the analysis of language."
Kripke was influential in arguing that flaws in common theories of descriptions and proper names are indicative of larger misunderstandings of the metaphysics of necessity and possibility. Kripke also argued that necessity is a metaphysical notion distinct from the epistemic notion of a priori, and that there are necessary truths that are known a posteriori, such as that water is H2O.
Kripke is widely regarded as having revived theories of essence and identity as respectable topics of philosophical discussion. Kripke and Hilary Putnam argued for realism about natural kinds. Kripke holds that it is essential that water is H2O, or for gold to be atomic number 79. Putnam's Twin Earth thought experiment can be used to illustrate the same point with water.
David Lewis
American philosopher David Lewis defended a number of elaborate metaphysical theories. In works such as On the Plurality of Worlds (1986) and Counterfactuals (1973) he argued for modal realism and counterpart theory, the belief in real, concrete possible worlds. According to Lewis, "actual" is merely an indexical label we give a world when we are in it. Lewis also defended what he called Humean supervenience, a counterfactual theory of causation, and contributed to abstract object theory. He became closely associated with Australia, whose philosophical community he visited almost annually for more than 30 years.
Universals
In response to the problem of universals, Australian David Malet Armstrong defended a kind of moderate realism. Quine and Lewis defended nominalism.
Mereology
Polish philosopher Stanisław Leśniewski coined the term mereology, the formal study of parts and wholes, a subject that arguably goes back to the time of the pre-Socratics. David Lewis believed in perdurantism and introduced the term 'gunk'. Peter van Inwagen believes in mereological nihilism, except for living beings, a view called organicism.
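The ground axioms of classical mereology treat parthood P as a partial order, with proper parthood defined from it; the following is a minimal sketch in first-order notation:

\[
\begin{aligned}
&\forall x\; P(x,x) &&\text{(reflexivity)}\\
&\forall x \forall y\;\bigl(P(x,y) \land P(y,x) \to x = y\bigr) &&\text{(antisymmetry)}\\
&\forall x \forall y \forall z\;\bigl(P(x,y) \land P(y,z) \to P(x,z)\bigr) &&\text{(transitivity)}\\
&PP(x,y) \;\leftrightarrow\; P(x,y) \land x \neq y &&\text{(proper parthood, defined)}
\end{aligned}
\]

Stronger systems add composition principles; in these terms, a "gunk" object is one whose every part itself has proper parts, and the mereological nihilist denies that composite objects exist at all.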
Free will and determinism
Peter van Inwagen's 1983 monograph An Essay on Free Will played an important role in rehabilitating libertarianism with respect to free will in mainstream analytic philosophy. In the book, he introduces the consequence argument and the term incompatibilism about free will and determinism, to stand in contrast to compatibilism, the view that free will is compatible with determinism. C. D. Broad had previously made similar arguments.
Personal identity
Since John Locke, philosophers have been concerned with the problem of personal identity. Derek Parfit in Reasons and Persons (1984) defends a kind of bundle theory, while David Lewis again defends perdurantism. Bernard Williams in The Self and the Future (1970) argues that personal identity is bodily identity rather than mental continuity.
Principle of sufficient reason
Since Leibniz philosophers have discussed the principle of sufficient reason or PSR. Van Inwagen criticizes the PSR. Alexander Pruss defends it.
Philosophy of time
Analytic philosophy of time traces its roots to the British idealist J. M. E. McTaggart's article "The Unreality of Time" (1908). In it, McTaggart distinguishes between the dynamic, A-, or tensed, theory of time (past, present, future), in which time flows, and the static, B-, or tenseless, theory of time (earlier than, simultaneous with, later than). Eternalism holds that past, present, and future are equally real. In contrast, presentism holds that only present entities exist.
The theory of special relativity seems to advocate a B-theory of time. David Lewis's perdurantism, or four-dimensionalism, requires a B-theory of time. A. N. Prior, who invented tense logic, advocated the A-theory of time.
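Prior's tense logic captures the A-theorist's tensed facts with sentential operators; in the standard notation:

\[
P\varphi\ (\text{it was the case that } \varphi), \qquad F\varphi\ (\text{it will be the case that } \varphi),
\]
\[
H\varphi \equiv \lnot P\lnot\varphi\ (\text{it has always been that } \varphi), \qquad G\varphi \equiv \lnot F\lnot\varphi\ (\text{it will always be that } \varphi).
\]

The B-theorist, by contrast, treats tense as eliminable in favor of tenseless relations among dates or events.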
Logical pluralism
Many-valued and non-classical logics have been popular since the Polish logician Jan Łukasiewicz. Graham Priest is a dialetheist, seeing dialetheism as the most natural solution to problems such as the liar paradox. JC Beall, together with Greg Restall, is a pioneer of a widely discussed version of logical pluralism.
Epistemology
Justification
Gettier
Owing largely to Edmund Gettier's 1963 paper "Is Justified True Belief Knowledge?", and the so-called Gettier problem, epistemology has enjoyed a resurgence as a topic of analytic philosophy during the last 50 years. A large portion of current epistemological research is intended to resolve the problems that Gettier's examples presented to the traditional "justified true belief" model of knowledge, found as early as Plato's dialogue Theaetetus. In a typical Gettier case, a subject holds a belief that is both true and justified yet intuitively fails to count as knowledge because its truth is a matter of luck, such as a justified belief that "the man who will get the job has ten coins in his pocket" that turns out true only because the believer himself, unknowingly, gets the job and happens to have ten coins in his own pocket. Responses include developing theories of justification to deal with Gettier's examples, or giving alternatives to the justified-true-belief model.
Theories
Chisholm defended foundationalism. Quine defended coherentism, a "web of belief", and proposed naturalized epistemology.
Internalism and externalism
The debate between internalism and externalism still exists in analytic philosophy. Alvin Goldman is an externalist known for developing a popular form of externalism called reliabilism. Most externalists reject the KK thesis, which has been disputed since the introduction of the epistemic logic by Jaakko Hintikka in 1962.
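The KK thesis at issue is the schema of epistemic logic stating that knowing implies knowing that one knows:

\[
K\varphi \to KK\varphi
\]

Externalists typically reject it because, on their view, a subject can satisfy the external conditions for knowing without being in a position to know that she satisfies them.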
Problem of the criterion
While a problem since antiquity, American philosopher Roderick Chisholm, in his Theory of Knowledge, details the problem of the criterion with two sets of questions:
What do we know? or What is the extent of our knowledge?
How do we know? or What is the criterion for deciding whether we have knowledge in any particular case?
An answer to either set of questions will allow us to devise a means of answering the other. Answering the former question-set first is called particularism, whereas answering the latter set first is called methodism. A third solution is skepticism, or doubting there is such a thing as knowledge.
Truth
Frege questioned standard theories of truth, and sometimes advocated a redundancy theory of truth. Frank Ramsey also advocated a redundancy theory. Alfred Tarski put forward a semantic theory of truth.
In Truth-Makers (1984), Kevin Mulligan, Peter Simons, and Barry Smith introduced the truth-maker idea as a contribution to the correspondence theory of truth. A truth-maker is contrasted with a truth-bearer.
Closure
Epistemic closure is the claim that knowledge is closed under entailment; in other words, it is the principle that if a subject S knows p, and knows that p entails q, then S can thereby come to know q. Most epistemological theories involve a closure principle, and many skeptical arguments assume a closure principle. In "Proof of an External World", G. E. Moore uses closure in his famous anti-skeptical "here is one hand" argument. Shortly before his death, Wittgenstein wrote On Certainty in response to Moore.
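The closure principle itself is usually rendered as follows (a common textbook formulation; authors differ over whether the consequent should read "knows" or "is in a position to know"):

```latex
% Closure under known entailment:
\bigl(K p \wedge K(p \rightarrow q)\bigr) \rightarrow K q
% Skeptical arguments typically run the principle in contrapositive form:
% if S does not know q (e.g. that S is not a brain in a vat), and S knows
% that p entails q, then S does not know p (e.g. that S has hands).
```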
While the principle of epistemic closure is generally regarded as intuitive, philosophers, such as Fred Dretske with relevant alternatives theory and Robert Nozick in Philosophical Explanations, have argued against it.
Induction
In his book Fact, Fiction, and Forecast, Nelson Goodman introduced the "new riddle of induction", so-called by analogy with Hume's classical problem of induction. Goodman's famous example was to introduce the predicates grue and bleen. "Grue" applies to things examined before a certain time t just in case they are green, and to other things just in case they are blue; "bleen" applies to things examined before t just in case they are blue, and to other things just in case they are green.
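Goodman's definitions can be put schematically (here E_t(x) abbreviates "x is examined before time t"; the notation is ours, not Goodman's):

```latex
\mathrm{Grue}(x) \iff \bigl(E_t(x) \wedge \mathrm{Green}(x)\bigr) \vee \bigl(\neg E_t(x) \wedge \mathrm{Blue}(x)\bigr) \\
\mathrm{Bleen}(x) \iff \bigl(E_t(x) \wedge \mathrm{Blue}(x)\bigr) \vee \bigl(\neg E_t(x) \wedge \mathrm{Green}(x)\bigr)
% Every emerald examined so far is both green and grue, so the evidence
% equally supports "all emeralds are green" and "all emeralds are grue";
% the riddle is to say why only the first is projectible.
```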
Other topics
Other, related topics of contemporary research include debates over basic knowledge, the nature of evidence, the value of knowledge, epistemic luck, virtue epistemology, the role of intuitions in justification, and treating knowledge as a primitive concept.
Ethics
Due to the commitments to empiricism and symbolic logic in the early analytic period, early analytic philosophers often thought that inquiry in the ethical domain could not be made rigorous enough to merit any attention. It was only with the emergence of ordinary-language philosophers that ethics started to become an acceptable area of inquiry for analytic philosophers. Philosophers working within the analytic tradition have gradually come to distinguish three major types of moral philosophy.
Meta-ethics, which investigates moral terms and concepts;
Normative ethics, which examines and produces normative ethical judgments;
Applied ethics, which investigates how existing normative principles should be applied to difficult or borderline cases, often cases created by new technology or new scientific knowledge.
Meta-ethics
As well as Hume's famous is/ought distinction, twentieth-century meta-ethics has two original strains.
Principia Ethica
The first is G. E. Moore's investigation into the nature of ethical terms (e.g., good) in his Principia Ethica (1903), which advances a kind of moral realism called ethical non-naturalism and is known for the open question argument and identifying the naturalistic fallacy, a major topic of investigation for analytical philosophers. According to Moore, "Goodness is a simple, undefinable, non-natural property."
Contemporary philosophers, such as Russ Shafer-Landau in Moral Realism: A Defence, defend ethical non-naturalism.
Emotivism
The second is founded on logical positivism and its attitude that unverifiable statements are meaningless. As a result, the logical positivists avoided normative ethics and instead began meta-ethical investigations into the nature of moral terms, statements, and judgments.
The logical positivists opined that statements about value—including all ethical and aesthetic judgments—are non-cognitive; that is, they cannot be objectively verified or falsified. Instead, the logical positivists adopted an emotivist theory, according to which value judgments express the attitude of the speaker. It is also known as the boo/hurrah theory. For example, in this view, saying, "Murder is wrong", is equivalent to saying, "Boo to murder", or saying the word "murder" with a particular tone of disapproval.
While analytic philosophers generally accepted non-cognitivism, emotivism had many deficiencies. It evolved into more sophisticated non-cognitivist theories, such as the expressivism of Charles Stevenson, and the universal prescriptivism of R. M. Hare, which was based on J. L. Austin's philosophy of speech acts.
Critics
As non-cognitivism, the is/ought distinction, and the naturalistic fallacy were questioned, analytic philosophers showed a renewed interest in the traditional questions of moral philosophy.
Philippa Foot defended naturalist moral realism and contributed several essays attacking other theories. Foot introduced the famous "trolley problem" into the ethical discourse.
Perhaps the most influential critic was Elizabeth Anscombe, whose monograph Intention was called by Donald Davidson "the most important treatment of action since Aristotle". A favorite student and friend of Ludwig Wittgenstein, Anscombe declared the "is-ought" impasse unproductive in her 1958 article "Modern Moral Philosophy". J. O. Urmson's article "On Grading" also called the is/ought distinction into question.
Australian J. L. Mackie, in Ethics: Inventing Right and Wrong, defended anti-realist error theory. Bernard Williams also influenced ethics by advocating a kind of moral relativism and rejecting all other theories.
Normative ethics
The first half of the 20th century was marked by skepticism toward, and neglect of, normative ethics. However, contemporary normative ethics is dominated by three schools: consequentialism, virtue ethics, and deontology.
Consequentialism, or Utilitarianism
During the early 20th century, utilitarianism was the only non-skeptical type of ethics to remain popular among analytic philosophers. However, as the influence of logical positivism declined mid-century, analytic philosophers had a renewed interest in ethics. Utilitarianism: For and Against was written with J. J. C. Smart arguing for and Bernard Williams arguing against.
Virtue ethics
Anscombe, Foot, and Alasdair MacIntyre's After Virtue sparked a revival of Aristotle's virtue-ethical approach. This increased interest in virtue ethics has been dubbed the "aretaic turn", mimicking the linguistic turn.
Deontology
John Rawls's 1971 A Theory of Justice restored interest in Kantian ethical philosophy.
Applied ethics
Since around 1970, a significant feature of analytic philosophy has been the emergence of applied ethics—an interest in the application of moral principles to specific practical issues. The philosophers following this orientation view ethics as involving humanistic values that have practical implications and applications in the way people interact and lead their lives socially.
Topics of special interest for applied ethics include environmental ethics, animal rights, and the many challenges created by advancing medical science. In education, applied ethics addressed themes such as punishment in schools, equality of educational opportunity, and education for democracy.
Political philosophy
Liberalism
Isaiah Berlin had a lasting influence on both analytic political philosophy and liberalism with his lecture "Two Concepts of Liberty". Berlin defined 'negative liberty' as the absence of coercion or interference in private actions. 'Positive liberty', Berlin maintained, could be thought of as self-mastery, which asks not what we are free from, but what we are free to do.
Current analytic political philosophy owes much to John Rawls, who in a series of papers from the 1950s onward (most notably "Two Concepts of Rules" and "Justice as Fairness") and his 1971 book A Theory of Justice, produced a sophisticated defense of a generally liberal egalitarian account of distributive justice. Rawls introduced the term the veil of ignorance.
This was followed soon by Rawls's colleague Robert Nozick's book Anarchy, State, and Utopia, a defense of free-market libertarianism. Consequentialist libertarianism also derives from the analytic tradition.
During recent decades there have also been several critics of liberalism, including the feminist critiques by Catharine MacKinnon and Andrea Dworkin, the multiculturalist critiques by Amy Gutmann and Charles Taylor, and the communitarian critiques by Michael Sandel and Alasdair MacIntyre (although neither of them endorses the term).
Analytical Marxism
Another development of political philosophy was the emergence of the school of analytical Marxism. Members of this school seek to apply techniques of analytic philosophy and modern social science to clarify the theories of Karl Marx and his successors. The best-known member of this school is G. A. Cohen, whose 1978 book, Karl Marx's Theory of History: A Defence, is generally considered to represent the genesis of this school. In that book, Cohen used logical and linguistic analysis to clarify and defend Marx's materialist conception of history. Other prominent analytical Marxists include the economist John Roemer, the social scientist Jon Elster, and the sociologist Erik Olin Wright. The work of these later philosophers has furthered Cohen's work by bringing to bear modern social science methods, such as rational choice theory, to supplement Cohen's use of analytic philosophical techniques in the interpretation of Marxian theory.
Cohen himself would later engage directly with Rawlsian political philosophy to advance a socialist theory of justice that contrasts with both traditional Marxism and the theories advanced by Rawls and Nozick. In particular, he appeals to Marx's principle of "from each according to his ability, to each according to his need".
Although not an analytic philosopher, Jürgen Habermas is another influential—if controversial—author in contemporary analytic political philosophy, whose social theory is a blend of social science, Marxism, neo-Kantianism, and American pragmatism.
Communitarianism
Communitarians such as Alasdair MacIntyre, Charles Taylor, Michael Walzer, and Michael Sandel advance a critique of liberalism that uses analytic techniques to isolate the main assumptions of liberal individualists, such as Rawls, and then challenges these assumptions. In particular, communitarians challenge the liberal assumption that the individual can be considered as fully autonomous from the community in which he is brought up and lives. Instead, they argue for a conception of the individual that emphasizes the role that the community plays in forming his or her values, thought processes, and opinions. Although communitarianism belongs to the analytic tradition, its major exponents often also engage at length with figures generally considered continental, notably G. W. F. Hegel and Friedrich Nietzsche.
Aesthetics
As a result of logical positivism, as well as what seemed like rejections of the traditional aesthetic notions of beauty and sublimity from post-modern thinkers, analytic philosophers were slow to consider art and aesthetic judgment. Susanne Langer and Nelson Goodman addressed these problems in an analytic style during the 1950s and 1960s. Since Goodman, aesthetics as a discipline for analytic philosophers has flourished.
Arthur Danto argued for an "institutional definition of art" in the 1964 essay "The Artworld", in which Danto coined the term "artworld" (as opposed to the existing "art world", though they mean the same), by which he meant cultural context or "an atmosphere of art theory".
Rigorous efforts to pursue analyses of traditional aesthetic concepts were performed by Guy Sircello in the 1970s and 1980s, resulting in new analytic theories of love, sublimity, and beauty. In the opinion of Władysław Tatarkiewicz, there are six conditions for the presentation of art: beauty, form, representation, reproduction of reality, artistic expression, and innovation. However, one may not be able to pin down these qualities in a work of art.
George Dickie was an influential philosopher of art. Dickie's student Noël Carroll is a leading philosopher of art.
Philosophy of language
Given the linguistic turn, it can be hard to separate logic, metaphysics, and the philosophy of language in analytic philosophy. Philosophy of language is a topic that has decreased in activity during the last four decades, as evidenced by the fact that few major philosophers today treat it as a primary research topic. While the debate remains fierce, it is still strongly influenced by those authors from the first half of the century, e.g. Frege, Russell, Wittgenstein, Austin, Tarski, and Quine.
Semantics
Saul Kripke provided a semantics for modal logic. In his book Naming and Necessity (1980), Kripke challenges the descriptivist theory with a causal theory of reference. In it he introduced the term rigid designator. According to one author, "In the philosophy of language, Naming and Necessity is among the most important works ever." Ruth Barcan Marcus also challenged descriptivism. So did Keith Donnellan.
Hilary Putnam used the Twin Earth thought experiment to argue for semantic externalism, the view that the meanings of words are not determined by psychological states alone. Donald Davidson uses the Swampman thought experiment to advocate for semantic externalism.
Kripke in Wittgenstein on Rules and Private Language provides a rule-following paradox that undermines the possibility of our ever following rules in our use of language and, so, calls into question the idea of meaning. Kripke writes that this paradox is "the most radical and original skeptical problem that philosophy has seen to date". The portmanteau "Kripkenstein" has been coined as a term for a fictional person who holds the views expressed by Kripke's reading of Wittgenstein.
Another influential philosopher, Pavel Tichý, initiated Transparent Intensional Logic, an original theory of the logical analysis of natural languages—the theory is devoted to the problem of saying exactly what it is that we learn, know, and can communicate when we come to understand what a sentence means.
Pragmatics
Paul Grice and his maxims and theory of implicature established the discipline of pragmatics.
Philosophy of mind and cognitive science
John Searle suggests that the obsession with the philosophy of language during the 20th century has been superseded by an emphasis on the philosophy of mind.
Physicalism
Motivated by the logical positivists' interest in verificationism, logical behaviorism was the most prominent theory of mind of analytic philosophy for the first half of the 20th century. Behaviorism later became much less popular, in favor of either type physicalism or functionalism. During this period, topics of the philosophy of mind were often related strongly to topics of cognitive science, such as modularity or innateness.
Behaviorism
Behaviorists such as B. F. Skinner tended to opine either that statements about the mind were equivalent to statements about behavior and dispositions to behave in particular ways or that mental states were directly equivalent to behavior and dispositions to behave.
Hilary Putnam criticized behaviorism by arguing that it confuses the symptoms of mental states with the mental states themselves, positing "super Spartans" who never display signs of pain.
Type identity
Type physicalism or type identity theory identified mental states with brain states. J. J. C. Smart and Ullin Place, former students of Ryle who worked at the University of Adelaide, argued for type physicalism.
Functionalism
Functionalism remains the dominant theory. Type identity theory was criticized by appeal to multiple realizability: the same mental state can, in principle, be realized by many different physical states.
Searle's Chinese room argument criticized functionalism and holds that while a computer can manipulate syntax, it can never understand semantics.
Eliminativism
The view of eliminative materialism is most closely associated with Paul and Patricia Churchland, who deny the existence of propositional attitudes, and with Daniel Dennett, who is generally considered an eliminativist about qualia and phenomenal aspects of consciousness.
Dualism
Finally, analytic philosophy has featured a certain number of philosophers who were dualists, and recently forms of property dualism have had a resurgence; the most prominent representative is David Chalmers. Kripke also makes a notable argument for dualism.
Thomas Nagel's "What is it like to be a bat?" challenged the physicalist account of mind. So did Frank Jackson's knowledge argument, which argues for qualia.
Theories of consciousness
In recent years, a central focus of research in the philosophy of mind has been consciousness and the philosophy of perception. While there is a general consensus for the global neuronal workspace model of consciousness, there are many opinions as to the specifics. The best known theories are Searle's naive realism, Fred Dretske and Michael Tye's representationalism, Daniel Dennett's heterophenomenology, and the higher-order theories of either David M. Rosenthal—who advocates a higher-order thought (HOT) model—or David Armstrong and William Lycan—who advocate a higher-order perception (HOP) model. An alternative higher-order theory, the higher-order global states (HOGS) model, is offered by Robert van Gulick.
Philosophy of mathematics
Since the beginning, analytic philosophy has had an interest in the philosophy of mathematics. Kurt Gödel, a student of Hans Hahn of the Vienna Circle, produced his incompleteness theorems, showing that Russell and Whitehead's Principia Mathematica also failed to reduce arithmetic to logic. Gödel has been ranked as one of the four greatest logicians of all time, along with Aristotle, Frege, and Tarski. Ernst Zermelo and Abraham Fraenkel established Zermelo–Fraenkel set theory. Quine developed his own system, dubbed New Foundations.
Physicist Eugene Wigner's seminal paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" poses the question of why a formal pursuit like mathematics can have real utility. José Benardete argued for the reality of infinity.
Akin to the medieval debate on universals between realists, conceptualists, and nominalists, the philosophy of mathematics has the debate between logicists or platonists, conceptualists or intuitionists, and formalists.
Platonism
Gödel was a platonist who postulated a special kind of mathematical intuition that lets us perceive mathematical objects directly. Quine and Putnam argued for platonism with the indispensability argument. Crispin Wright, along with Bob Hale, led a Neo-Fregean revival with his work Frege's Conception of Numbers as Objects.
Critics
Structuralist Paul Benacerraf has an epistemological objection to mathematical platonism.
Intuitionism
The intuitionists, led by L. E. J. Brouwer, are a constructivist school of mathematics that argues that mathematics is a cognitive construct rather than a type of objective truth.
Formalism
The formalists, best exemplified by David Hilbert, considered mathematics to be merely the investigation of formal axiom systems. Hartry Field defended mathematical fictionalism.
Philosophy of religion
As with the study of ethics, early analytic philosophy tended to avoid the study of religion, largely dismissing (as per the logical positivists) the subject as a part of metaphysics and therefore meaningless. The demise of logical positivism led to a renewed interest in the philosophy of religion, prompting philosophers not only to introduce new problems, but to re-study classical topics such as the existence of God, the nature of miracles, the problem of evil, the rationality of belief in God, concepts of the nature of God, and several others. The Society of Christian Philosophers was established in 1978.
Reformed epistemology
Analytic philosophy formed the basis for some sophisticated Christian arguments, such as those of the reformed epistemologists such as Alvin Plantinga, William Alston, and Nicholas Wolterstorff.
Plantinga was awarded the Templeton Prize in 2017 and was once described by Time magazine as "America's leading orthodox Protestant philosopher of God". His seminal work God and Other Minds (1967) argues that belief in God is a properly basic belief akin to the belief in other minds. Plantinga also developed a modal ontological argument in The Nature of Necessity (1974).
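The modal core of that argument is often summarized in the system S5 as follows (a simplified skeleton; Plantinga's own version is framed in terms of maximal greatness across possible worlds):

```latex
% G abbreviates "a maximally great being exists".
\begin{align*}
&\text{1. } \Diamond \Box G && \text{(possibly, it is necessary that } G\text{)} \\
&\text{2. } \Diamond \Box G \rightarrow \Box G && \text{(a theorem of S5)} \\
&\text{3. } \Box G && \text{(from 1 and 2)} \\
&\text{4. } G && \text{(from 3, since } \Box G \rightarrow G\text{)}
\end{align*}
```

Critics generally grant the S5 reasoning and contest premise 1; Plantinga himself presents the argument as establishing the rational acceptability of theism rather than a proof.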
Plantinga, J. L. Mackie, and Antony Flew debated the use of the free will defense as a way to solve the problem of evil. Plantinga's evolutionary argument against naturalism contends that there is a problem in asserting both evolution and naturalism. Plantinga further wrote a trilogy on epistemology, and especially warrant: Warrant: The Current Debate, Warrant and Proper Function, and Warranted Christian Belief.
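The probabilistic core of Plantinga's evolutionary argument against naturalism can be stated in one line (the notation follows Plantinga's usual presentation):

```latex
% R = "our cognitive faculties are reliable",
% N = naturalism, E = "our faculties arose by evolution".
% Plantinga contends that
P(R \mid N \wedge E) \text{ is low or inscrutable,}
% which, he argues, gives the naturalist a defeater for R, and thereby
% for every belief those faculties produce, including N itself.
```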
Alston defended divine command theory and applied the analytic philosophy of language to religious language. Robert Merrihew Adams also defended divine command theory, and worked on the relationship between faith and morality. William Lane Craig defends the Kalam cosmological argument in the book of the same name.
Analytic Thomism
Catholic philosophers in the analytic tradition—such as Elizabeth Anscombe, Peter Geach, Anthony Kenny, Alasdair MacIntyre, John Haldane, Eleonore Stump, and others—developed an analytic approach to Thomism.
Orthodox
Richard Swinburne wrote a trilogy of books arguing for the existence of God, consisting of The Coherence of Theism, The Existence of God, and Faith and Reason.
Wittgenstein and religion
The analytic philosophy of religion has been preoccupied with Wittgenstein, as well as his interpretation of Søren Kierkegaard's philosophy of religion. Wittgenstein fought in the Austrian army during the First World War and came upon a copy of Leo Tolstoy's Gospel in Brief. At that time, he underwent some kind of religious conversion.
Drawing on first-hand remarks (later published in Philosophical Investigations, Culture and Value, and other works), philosophers such as Peter Winch and Norman Malcolm developed what has come to be known as "contemplative philosophy", a Wittgensteinian school of thought rooted in the "Swansea school", whose members include Rush Rhees, D. Z. Phillips, and others.
The name "contemplative philosophy" was coined by D. Z. Phillips in Philosophy's Cool Place, which rests on an interpretation of a passage from Wittgenstein's Culture and Value. This interpretation was first labeled "Wittgensteinian Fideism" by Kai Nielsen, but those who consider themselves members of the Swansea school have relentlessly and repeatedly rejected this construal as a caricature of Wittgenstein's position; this is especially true of Phillips. Responding to this interpretation, Nielsen and Phillips became two of the most prominent interpreters of Wittgenstein's philosophy of religion.
Philosophy of science
Science and the philosophy of science have also had increasingly significant roles in analytic metaphysics. The theory of special relativity has had a profound effect on the philosophy of time, and quantum physics is routinely discussed in the free will debate. The weight given to scientific evidence is largely due to commitments of philosophers to scientific realism and naturalism. Others regard such reliance on science within philosophy as scientism.
Confirmation theory
Carl Hempel was a pioneer of confirmation theory, a precursor of Bayesian epistemology. He introduced the famous raven paradox.
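The paradox arises from two seemingly innocent principles, instance confirmation and logical equivalence, as the schematic form shows:

```latex
% "All ravens are black" and its contrapositive are logically equivalent:
\forall x\,\bigl(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\bigr)
\;\equiv\;
\forall x\,\bigl(\neg\mathrm{Black}(x) \rightarrow \neg\mathrm{Raven}(x)\bigr)
% If instances confirm a generalization, a non-black non-raven, such as
% a green apple, confirms the second form, and hence, counterintuitively,
% the equivalent hypothesis that all ravens are black.
```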
Falsification
In reaction to what he considered excesses of logical positivism, Karl Popper, in The Logic of Scientific Discovery, insisted on the role of falsification in the philosophy of science, using it to solve the demarcation problem.
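Logically, falsification is an application of modus tollens (a textbook rendering rather than Popper's own notation):

```latex
% A hypothesis H entails an observable prediction O;
% the failure of the prediction refutes the hypothesis:
H \rightarrow O, \quad \neg O \;\vdash\; \neg H
% For Popper, no accumulation of confirming observations verifies H,
% but a single genuine counter-instance falsifies it.
```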
Confirmation holism
The Duhem–Quine thesis, or problem of underdetermination, posits that no scientific hypothesis can be tested in isolation, a viewpoint called confirmation holism.
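The holist point can be made by amending the modus tollens schema given above for falsification (again a schematic rendering):

```latex
% Predictions follow only from a hypothesis H together with
% auxiliary assumptions A, so a failed prediction is ambiguous:
(H \wedge A) \rightarrow O, \quad \neg O \;\vdash\; \neg(H \wedge A) \;\equiv\; (\neg H \vee \neg A)
% The recalcitrant observation alone does not settle whether H or
% some auxiliary assumption in A should be given up.
```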
Constructivism
In reaction to both the logical positivists and Popper, discussions of the philosophy of science during the last 40 years have been dominated by social constructivist and cognitive relativist theories of science. Following Quine and Duhem, subsequent theories emphasized theory-ladenness. Thomas Samuel Kuhn, with his formulation of paradigm shifts, and Paul Feyerabend, with his epistemological anarchism, are significant for these discussions.
Biology
The philosophy of biology has also undergone considerable growth, particularly due to the considerable debate in recent years over the nature of evolution, particularly natural selection. Daniel Dennett and his 1995 book Darwin's Dangerous Idea, which defends Neo-Darwinism, stand at the forefront of this debate. Jerry Fodor criticizes natural selection.
Notes
References
Further reading
The London Philosophy Study Guide offers many suggestions on what to read, depending on the student's familiarity with the subject: Frege, Russell, and Wittgenstein
Hirschberger, Johannes. A Short History of Western Philosophy, ed. Clare Hay.
Hylton, Peter. Russell, Idealism, and the Emergence of Analytic Philosophy. Oxford: Oxford University Press, 1990.
Passmore, John. A Hundred Years of Philosophy, revised ed. New York: Basic Books, 1966.
Weitz, Morris, ed. Twentieth Century Philosophy: The Analytic Tradition. New York: Free Press, 1966.
External links
20th century in philosophy
21st century in philosophy
Contemporary philosophy
History of logic
History of mathematics
Intellectual history
Philosophical schools and traditions
Western culture | 0.769674 | 0.998829 | 0.768773 |
Worldview | A worldview, world-view, or Weltanschauung is the fundamental cognitive orientation of an individual or society encompassing the whole of the individual's or society's knowledge, culture, and point of view. A worldview can include natural philosophy; fundamental, existential, and normative postulates; or themes, values, emotions, and ethics.
Etymology
The term worldview is a calque of the German word Weltanschauung, composed of Welt ('world') and Anschauung ('perception' or 'view'). The German word is also used in English. It is a concept fundamental to German philosophy, especially epistemology, and refers to a wide world perception. Additionally, it refers to the framework of ideas and beliefs forming a global description through which an individual, group or culture watches and interprets the world and interacts with it as a social reality.
Weltanschauung and cognitive philosophy
Within cognitive philosophy and the cognitive sciences is the German concept of Weltanschauung. This expression is used to refer to the "wide worldview" or "wide world perception" of a people, family, or person. The Weltanschauung of a people originates from the unique world experience of that people, accumulated over several millennia. The language of a people reflects the Weltanschauung of that people in the form of its syntactic structures and untranslatable connotations and denotations.
The term is often wrongly attributed to Wilhelm von Humboldt, the founder of German ethnolinguistics. However, Humboldt's key concept was Weltansicht. Weltansicht was used by Humboldt to refer to the overarching conceptual and sensorial apprehension of reality shared by a linguistic community (Nation). On the other hand, Weltanschauung, first used by Immanuel Kant and later popularized by Hegel, was always used in German, and later in English, to refer more to philosophies, ideologies and cultural or religious perspectives than to linguistic communities and their mode of apprehending reality.
In 1911, the German philosopher Wilhelm Dilthey published an essay entitled "The Types of Weltanschauung and their Development in Metaphysics" that became quite influential. Dilthey characterized worldviews as providing a perspective on life that encompasses the cognitive, evaluative, and volitional aspects of human experience. Although worldviews have always been expressed in literature and religion, philosophers have attempted to give them conceptual definition in their metaphysical systems. On that basis, Dilthey found it possible to distinguish three general recurring types of worldview. The first of these he called naturalism, because it gives priority to the perceptual and experimental determination of what is and allows contingency to influence how we evaluate and respond to reality. Naturalism can be found in Democritus, Hobbes, Hume and many other modern philosophers. The second type of worldview is called the idealism of freedom and is represented by Plato, Descartes, Kant, and Bergson among others. It is dualistic and gives primacy to the freedom of the will. The organizational order of our world is structured by our mind and the will to know. The third type is called objective idealism, and Dilthey sees it in Heraclitus, Parmenides, Spinoza, Leibniz and Hegel. In objective idealism the ideal does not hover above what is actual but inheres in it. This third type of worldview is ultimately monistic and seeks to discern the inner coherence and harmony among all things. Dilthey thought it impossible to come up with a universally valid metaphysical or systematic formulation of any of these worldviews, but regarded them as useful schema for his own more reflective kind of life philosophy. See Makkreel and Rodi, Wilhelm Dilthey, Selected Works, volume 6, 2019.
Anthropologically, worldviews can be expressed as the "fundamental cognitive, affective, and evaluative presuppositions a group of people make about the nature of things, and which they use to order their lives."
If it were possible to draw a map of the world on the basis of Weltanschauung, it would probably be seen to cross political borders—Weltanschauung is the product of political borders and common experiences of a people from a geographical region, environmental-climatic conditions, the economic resources available, socio-cultural systems, and the language family. (The work of the population geneticist Luigi Luca Cavalli-Sforza aims to show the gene-linguistic co-evolution of people.)
According to James W. Underhill, the term worldview can be used very differently by certain linguists and sociologists. It is for this reason that Underhill, and those who influenced him, attempted to wed the study of metaphor in, for example, the sociology of religion with discourse analysis. Underhill also proposed five subcategories for the study of worldview: world-perceiving, world-conceiving, cultural mindset, personal world, and perspective.
Comparison of worldviews
One can think of a worldview as comprising a number of basic beliefs which are philosophically equivalent to the axioms of the worldview considered as a logical or consistent theory. These basic beliefs cannot, by definition, be proven (in the logical sense) within the worldview – precisely because they are axioms, and are typically argued from rather than argued for. However their coherence can be explored philosophically and logically.
If two different worldviews have sufficient common beliefs it may be possible to have a constructive dialogue between them.
On the other hand, if different worldviews are held to be basically incommensurate and irreconcilable, then the situation is one of cultural relativism and would therefore incur the standard criticisms from philosophical realists.
Additionally, religious believers might not wish to see their beliefs relativized into something that is only "true for them".
Subjective logic is a belief-reasoning formalism in which beliefs are explicitly held by individual, subjective agents but in which a consensus between different worldviews can be achieved.
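A minimal sketch of the formalism, following Audun Jøsang's standard presentation (the exact fusion operators vary across publications):

```latex
% An agent's opinion about a proposition x is a tuple of belief,
% disbelief, uncertainty, and a prior base rate:
\omega_x = (b, d, u, a), \qquad b + d + u = 1, \qquad b, d, u, a \in [0, 1]
% The projected (expected) probability blends belief with the base rate:
\mathrm{E}(\omega_x) = b + a \cdot u
% Fusion operators combine the opinions of different agents, which is the
% formal sense in which a consensus between worldviews can be computed.
```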
A third alternative sees the worldview approach as only a methodological relativism, as a suspension of judgment about the truth of various belief systems but not a declaration that there is no global truth. For instance, the religious philosopher Ninian Smart begins his Worldviews: Cross-cultural Explorations of Human Beliefs with "Exploring Religions and Analysing Worldviews" and argues for "the neutral, dispassionate study of different religious and secular systems—a process I call worldview analysis."
The comparison of religious, philosophical or scientific worldviews is a delicate endeavor, because such worldviews start from different presuppositions and cognitive values. Clément Vidal has proposed metaphilosophical criteria for the comparison of worldviews, classifying them in three broad categories:
objective: objective consistency, scientificity, scope
subjective: subjective consistency, personal utility, emotionality
intersubjective: intersubjective consistency, collective utility, narrativity
Characteristics
While Leo Apostel and his followers clearly hold that individuals can construct worldviews, other writers regard worldviews as operating at a community level, or in an unconscious way. For instance, if one's worldview is fixed by one's language, as according to a strong version of the Sapir–Whorf hypothesis, one would have to learn or invent a new language in order to construct a new worldview.
According to Apostel, a worldview is an ontology, or a descriptive model of the world. It should comprise these six elements:
An explanation of the world
A futurology, answering the question "Where are we heading?"
Values, answers to ethical questions: "What should we do?"
A praxeology, or methodology, or theory of action: "How should we attain our goals?"
An epistemology, or theory of knowledge: "What is true and false?"
An etiology. A constructed world-view should contain an account of its own "building blocks", its origins and construction.
Terror management theory
A worldview, according to terror management theory (TMT), serves as a buffer against death anxiety. It is theorized that living up to the ideals of one's worldview provides a sense of self-esteem which provides a sense of transcending the limits of human life (e.g. literally, as in religious belief in immortality; symbolically, as in art works or children to live on after one's death, or in contributions to one's culture). Evidence in support of terror management theory includes a series of experiments by Jeff Schimel and colleagues in which a group of Canadians found to score highly on a measure of patriotism were asked to read an essay attacking the dominant Canadian worldview.
Using a test of death-thought accessibility (DTA), involving an ambiguous word completion test (e.g. "COFF__" could be completed as "COFFEE", "COFFIN", or "COFFER"), participants who had read the essay attacking their worldview were found to have a significantly higher level of DTA than the control group, who read a similar essay attacking Australian cultural values. Mood was also measured following the worldview threat, to test whether the increase in death thoughts following worldview threat was due to other causes, for example, anger at the attack on one's cultural worldview. No significant changes on mood scales were found immediately following the worldview threat.
To test the generalisability of these findings to groups and worldviews other than those of nationalistic Canadians, Schimel et al. conducted a similar experiment on a group of religious individuals whose worldview included creationism. Participants were asked to read an essay which argued in support of the theory of evolution, following which the same measure of DTA was taken as for the Canadian group. Religious participants with a creationist worldview were found to have a significantly higher level of death-thought accessibility than those of the control group.
Goldenberg et al. found that highlighting the similarities between humans and other animals increases death-thought accessibility, as does attention to the physical rather than meaningful qualities of sex.
Religion
Nishida Kitaro wrote extensively on "the Religious Worldview" in exploring the philosophical significance of Eastern religions.
According to Neo-Calvinist David Naugle's World view: The History of a Concept, "Conceiving of Christianity as a worldview has been one of the most significant developments in the recent history of the church."
The Christian thinker James W. Sire defines a worldview as "a commitment, a fundamental orientation of the heart, that can be expressed as a story or in a set of presuppositions (assumptions which may be true, partially true, or entirely false) which we hold (consciously or subconsciously, consistently or inconsistently) about the basic construction of reality, and that provides the foundation on which we live and move and have our being." He suggests that "we should all think in terms of worldviews, that is, with a consciousness not only of our own way of thought but also that of other people, so that we can first understand and then genuinely communicate with others in our pluralistic society."
The commitment mentioned by James W. Sire can be extended further: a change in a person's view of the world can motivate that person to serve it. Tareq M. Zayed describes this serving attitude as the "emancipatory worldview" in his writing "History of emancipatory worldview of Muslim learners".
David Bell has also raised questions about religious worldviews for the designers of superintelligences – machines much smarter than humans.
See also
Life stance
References
External links
Wikibook:The scientific world view
Wiki Worldview Themes: A Structure for Characterizing and Analyzing Worldviews includes links to roughly 1000 Wikipedia articles
A 2002 essay on research in linguistic relativity (Lera Boroditsky).
inTERRAgation.com—A documentary project. Collecting and evaluating answers to "the meaning of life" from around the world.
The God Contention—Comparing various worldviews, faiths, and religions through the eyes of their advocates.
Cole, Graham A., Do Christians have a Worldview? A paper examining the concept of worldview as it relates to and has been used by Christianity. Contains a helpful annotated bibliography.
World View article on the Principia Cybernetica Project
Pogorskiy, E. (2015). Using personalisation to improve the effectiveness of global educational projects. E-Learning and Digital Media, 12(1), 57–67.
Worldviews – An Introduction from Project Worldview
"Studies on World Views Related to Science" (list of suggested books and resources) from the American Scientific Affiliation (a Christian perspective)
Eugene Webb, Worldview and Mind: Religious Thought and Psychological Development. Columbia, MO: University of Missouri Press, 2009.
Benjamin Gal-Or, Cosmology, Physics and Philosophy, Springer Verlag, 1981, 1983, 1987.
Conceptual modelling
Belief
Consensus reality
Psychological attitude
Psychological concepts
Concepts in epistemology
Epistemology of religion | 0.770677 | 0.997345 | 0.768631 |
Emic and etic | In anthropology, folkloristics, linguistics, and the social and behavioral sciences, emic and etic refer to two kinds of field research done and viewpoints obtained.
The "emic" approach is an insider's perspective, which looks at the beliefs, values, and practices of a particular culture from the perspective of the people who live within that culture. This approach aims to understand the cultural meaning and significance of a particular behavior or practice, as it is understood by the people who engage in it.
The "etic" approach, on the other hand, is an outsider's perspective, which looks at a culture from the perspective of an outside observer or researcher. This approach tends to focus on the observable behaviors and practices of a culture, and aims to understand them in terms of their functional or evolutionary significance. The etic approach often involves the use of standardized measures and frameworks to compare different cultures and may involve the use of concepts and theories from other disciplines, such as psychology or sociology.
The emic and etic approaches each have their own strengths and limitations, and each can be useful in understanding different aspects of culture and behavior. Some anthropologists argue that a combination of both approaches is necessary for a complete understanding of a culture, while others argue that one approach may be more appropriate depending on the specific research question being addressed.
Definitions
"The emic approach investigates how local people think...". How they perceive and categorize the world, their rules for behavior, what has meaning for them, and how they imagine and explain things. "The etic (scientist-oriented) approach shifts the focus from local observations, categories, explanations, and interpretations to those of the anthropologist. The etic approach realizes that members of a culture often are too involved in what they are doing... to interpret their cultures impartially. When using the etic approach, the ethnographer emphasizes what he or she considers important."
Although emics and etics are sometimes regarded as inherently in conflict and one can be preferred to the exclusion of the other, the complementarity of emic and etic approaches to anthropological research has been widely recognized, especially in the areas of interest concerning the characteristics of human nature as well as the form and function of human social systems.
Emic and etic approaches of understanding behavior and personality fall under the study of cultural anthropology. Cultural anthropology states that people are shaped by their cultures and their subcultures, and we must account for this in the study of personality. One way is looking at things through an emic approach. This approach "is culture specific because it focuses on a single culture and it is understood on its own terms." As explained below, the term "emic" originated from the specific linguistic term "phonemic", from phoneme, which is a language-specific way of abstracting speech sounds.
An 'emic' account is a description of behavior or a belief in terms meaningful (consciously or unconsciously) to the actor; that is, an emic account comes from a person within the culture. Almost anything from within a culture can provide an emic account.
An 'etic' account is a description of a behavior or belief by a social analyst or scientific observer (a student or scholar of anthropology or sociology, for example), in terms that can be applied across cultures; that is, an etic account attempts to be 'culturally neutral', limiting any ethnocentric, political or cultural bias or alienation by the observer.
When these two approaches are combined, the "richest" view of a culture or society can be understood. On its own, an emic approach would struggle with applying overarching values to a single culture. The etic approach is helpful in enabling researchers to see more than one aspect of one culture, and in applying observations to cultures around the world.
History
The terms were coined in 1954 by linguist Kenneth Pike, who argued that the tools developed for describing linguistic behaviors could be adapted to the description of any human social behavior. As Pike noted, social scientists have long debated whether their knowledge is objective or subjective. Pike's innovation was to turn away from an epistemological debate, and turn instead to a methodological solution. Emic and etic are derived from the linguistic terms phonemic and phonetic, respectively, where a phone is a distinct speech sound or gesture, regardless of whether the exact sound is critical to the meanings of words, whereas a phoneme is a speech sound in a given language that, if swapped with another phoneme, could change one word to another. The possibility of a truly objective description was discounted by Pike himself in his original work; he proposed the emic-etic dichotomy in anthropology as a way around philosophic issues about the very nature of objectivity.
The terms were also championed by anthropologists Ward Goodenough and Marvin Harris with slightly different connotations from those used by Pike. Goodenough was primarily interested in understanding the culturally specific meaning of specific beliefs and practices; Harris was primarily interested in explaining human behavior.
Pike, Harris, and others have argued that cultural "insiders" and "outsiders" are equally capable of producing emic and etic accounts of their culture. Some researchers use "etic" to refer to objective or outsider accounts, and "emic" to refer to subjective or insider accounts.
Margaret Mead was an anthropologist who studied the patterns of adolescence in Samoa. She discovered that the difficulties and the transitions that adolescents faced are culturally influenced. The hormones that are released during puberty can be defined using an etic framework, because adolescents globally have the same hormones being secreted. However, Mead concluded that how adolescents respond to these hormones is greatly influenced by their cultural norms. Through her studies, Mead found that simple classifications about behaviors and personality could not be used because people's cultures influenced their behaviors in such a radical way. Her studies helped create an emic approach to understanding behaviors and personality. Her research suggested that culture has a significant impact on the shaping of an individual's personality.
Carl Jung, a Swiss psychoanalyst, is a researcher who took an etic approach in his studies. Jung studied mythology, religion, ancient rituals, and dreams, leading him to believe that there are archetypes that can be identified and used to categorize people's behaviors. Archetypes are universal structures of the collective unconscious that refer to the inherent way people are predisposed to perceive and process information. The main archetypes that Jung studied were the persona (how people choose to present themselves to the world), the anima and animus (the part of the psyche through which people experience the opposite sex, and which guides how they select their romantic partner), and the shadow (the dark side of the personality, which exists because people have a concept of evil; well-adjusted people must integrate both the good and bad parts of themselves). Jung looked at the role of the mother and deduced that all people have mothers and see their mothers in a similar way: they offer nurture and comfort. His studies also suggest that "infants have evolved to suck milk from the breast, it is also the case that all children have inborn tendencies to react in certain ways." This way of looking at the mother is an etic way of applying a concept cross-culturally and universally.
Importance as regards personality
Emic and etic approaches are important to understanding personality because problems can arise "when concepts, measures, and methods are carelessly transferred to other cultures in attempts to make cross-cultural generalizations about personality." It is hard to apply certain generalizations of behavior to people who are so diverse and culturally different. One example of this is the F-scale (Macleod). The F-scale, which was created by Theodor Adorno, is used to measure authoritarian personality, which can, in turn, be used to predict prejudiced behaviors. This test, when applied to Americans, accurately predicts prejudice toward black individuals. However, when a study was conducted in South Africa using the F-scale (Pettigrew and Friedman), results did not predict any prejudice toward black individuals. This study used emic approaches of study by conducting interviews with the locals and etic approaches by giving participants generalized personality tests.
See also
Exonym and endonym
Other explorations of the differences between reality and humans' models of it:
Blind men and an elephant
Emic and etic units
Internalism and externalism
Map–territory relation
References
Further reading
External links
Emic and Etic Standpoints for the Description of Behavior, chapter 2 in Language in Relation to a Unified Theory of the Structure of Human Behavior, vol 2, by Kenneth Pike (published in 1954 by Summer Institute of Linguistics)
Anthropology
Dichotomies
Ethnography
Folklore
Metatheory | 0.772441 | 0.994972 | 0.768558 |
Innatism | In the philosophy of mind, innatism is the view that the mind is born with already-formed ideas, knowledge, and beliefs. The opposing doctrine, that the mind is a tabula rasa (blank slate) at birth and all knowledge is gained from experience and the senses, is called empiricism.
Difference from nativism
Innatism and nativism are generally synonymous terms referring to the notion of preexisting ideas in the mind. However, more specifically, innatism refers to the philosophy of Descartes, who assumed that God or a similar being or process placed innate ideas and principles in the human mind. The innatist principles in this regard may overlap with similar concepts such as natural order and state of nature, in philosophy.
Nativism represents an adaptation of this, grounded in the fields of genetics, cognitive psychology, and psycholinguistics. Nativists hold that innate beliefs are in some way genetically programmed in our mind—they are the phenotypes of certain genotypes that all humans share in common. Nativism is a modern view rooted in innatism. The advocates of nativism are mainly philosophers who also work in the field of cognitive psychology or psycholinguistics: most notably Noam Chomsky and Jerry Fodor (although the latter adopted a more critical attitude toward nativism in his later writings). The nativist's general objection against empiricism is still the same as was raised by the rationalists; the human mind of a newborn child is not a tabula rasa but is equipped with an inborn structure.
History
Although individual human beings vary in many ways (culturally, ethnically, linguistically, and so on), innate ideas are the same for everyone everywhere. For example, the philosopher René Descartes theorized that knowledge of God is innate in everybody. Philosophers such as Descartes and Plato were rationalists. Other philosophers, most notably the empiricists, were critical of innate ideas and denied they existed.
The debate over innate ideas is central to the conflict between rationalists (who believe certain ideas exist independently of experience) and empiricists (who believe knowledge is derived from experience).
Many believe the German philosopher Immanuel Kant synthesized these two early modern traditions in his philosophical thought.
Plato
Plato argues that if there are certain concepts that we know to be true but did not learn from experience, then it must be because we have an innate knowledge of it and that this knowledge must have been gained before birth. In Plato's Meno, he recalls a situation where his mentor Socrates questioned a slave boy about geometry. Though the slave boy had no previous experience with geometry, he was able to answer correctly. Plato reasoned that this was possible because Socrates' questions sparked the innate knowledge of math the boy had from birth.
Descartes
Descartes conveys the idea that innate knowledge or ideas are inborn in the way one might say that a certain disease is "innate", signifying that a person is at risk of contracting it. He suggests that something "innate" is effectively present from birth, and even if it does not reveal itself then, it is more than likely to present itself later in life. Descartes' comparison of innate knowledge to an innate disease, whose symptoms may show up only later in life, suggests that if something prevents a person from exhibiting an innate behaviour or piece of knowledge, this does not mean the knowledge did not exist at all, but rather that it was not expressed. In other words, innate beliefs, ideas and knowledge require experiences to be triggered, or they may never be expressed. Experiences are not the source of knowledge, as proposed by John Locke, but catalysts to the uncovering of knowledge.
Gottfried Wilhelm Leibniz
Gottfried Wilhelm Leibniz suggested that we are born with certain innate ideas, the most identifiable of these being mathematical truisms: ideas that are evident to us without the need for empirical evidence. Leibniz argues that empiricism can only show us that concepts are true in the present; the observation of one apple and then another in one instance, and in that instance only, leads to the conclusion that one and another equals two. However, the suggestion that one and another will always equal two requires an innate idea, as that would be a suggestion of things unwitnessed.
Leibniz called such concepts as mathematical truisms "necessary truths". Another example of such may be the phrase, "What is, is" or "It is impossible for the same thing to be and not to be". Leibniz argues that such truisms are universally assented to (acknowledged by all to be true); this being the case, it must be due to their status as innate ideas. Often some ideas are acknowledged as necessarily true but are not universally assented to. Leibniz would suggest that this is simply because the person in question has not become aware of the innate idea, not because they do not possess it. Leibniz argues that empirical evidence can serve to bring to the surface certain principles that are already innately embedded in our minds. This is similar to needing to hear only the first few notes to recall the rest of the melody.
John Locke
The main antagonist to the concept of innate ideas is John Locke, a contemporary of Leibniz. Locke argued that the mind is in fact devoid of all knowledge or ideas at birth; it is a blank sheet or tabula rasa. He argued that all our ideas are constructed in the mind via a process of constant composition and decomposition of the input that we receive through our senses.
Locke, in An Essay Concerning Human Understanding, suggests that the concept of universal assent in fact proves nothing, except perhaps that everyone is in agreement; in short universal assent proves that there is universal assent and nothing else. Moreover, Locke goes on to suggest that in fact there is no universal assent. Even a phrase such as "What is, is" is not universally assented to; infants and severely mentally disabled adults do not generally acknowledge this truism. Locke also attacks the idea that an innate idea can be imprinted on the mind without the owner realizing it. For Locke, such reasoning would allow one to conclude the absurd: "All the Truths a Man ever comes to know, will, by this account, be, every one of them, innate." To return to the musical analogy, we may not be able to recall the entire melody until we hear the first few notes, but we were aware of the fact that we knew the melody and that upon hearing the first few notes we would be able to recall the rest.
Locke ends his attack upon innate ideas by suggesting that the mind is a tabula rasa or "blank slate", and that all ideas come from experience; all our knowledge is founded in sensory experience.
Essentially, the same knowledge thought to be a priori by Leibniz is, according to Locke, the result of empirical knowledge whose origin has been forgotten by the inquirer. However, the inquirer is not cognizant of this fact; thus, he experiences what he believes to be a priori knowledge.
The theory of innate knowledge is excessive. Even innatists accept that most of our knowledge is learned through experience; if that account can be extended to cover all knowledge (we learn color through seeing it, for example), then there is no need for a theory of an innate understanding of color.
No ideas are universally held. Do we all possess the idea of God? Do we all believe in justice and beauty? Do we all understand the law of identity? If not, these ideas may have been acquired through impressions, experience, and social interaction rather than being innate.
Even if there are some universally agreed statements, it may be only the human brain's ability to organize learned ideas or words that is innate. An "ability to organize" is not the same as "possessing propositional knowledge" (e.g., a computer with no saved files has all of its operations programmed in but has an empty memory).
Contemporary approaches
Linguistics
In his Meno, Plato raises an important epistemological quandary: How is it that we have certain ideas that are not conclusively derivable from our environments? Noam Chomsky has taken this problem as a philosophical framework for the scientific inquiry into innatism. His linguistic theory, which derives from 18th century classical-liberal thinkers such as Wilhelm von Humboldt, attempts to explain in cognitive terms how we can develop knowledge of systems which are said, by supporters of innatism, to be too rich and complex to be derived from our environment. One such example is our linguistic faculty. Our linguistic systems contain a systemic complexity which supposedly could not be empirically derived: the environment seems too poor, variable and indeterminate, according to Chomsky, to explain the extraordinary ability to learn complex concepts possessed by very young children. Essentially, their accurate grammatical knowledge cannot have originated from their experiences as their experiences are not adequate. It follows that humans must be born with a universal innate grammar, which is determinate and has a highly organized directive component, and enables the language learner to ascertain and categorize language heard into a system. Chomsky states that the ability to learn how to properly construct sentences or know which sentences are grammatically incorrect is an ability gained from innate knowledge. As evidence for this theory, Chomsky cites what he sees as the apparent invariability of human languages at a fundamental level. In this way, linguistics may provide a window into the human mind, and establish scientific theories of innateness which otherwise would remain merely speculative.
One implication of Noam Chomsky's innatism, if correct, is that at least a part of human knowledge consists in cognitive predispositions, which are triggered and developed by the environment, but not determined by it. Chomsky suggests that we can look at how a belief is acquired as an input-output situation. He supports the doctrine of innatism as he states that human beliefs gathered from sensory experience are much richer and more complex than the experience itself. He asserts that the extra information gathered is from the mind itself as it cannot solely be from experiences. Humans derive an excess of information from their environment, so some of that information must be predetermined.
See also
Anamnesis
Bouba/kiki effect
Concept
Fitra
Idea
Instinct
Nature versus nurture
Platonism
Psychological nativism
Tabula rasa
Rationalism
Theory of Forms
Idealism
Qualia
Hard problem of consciousness
References
Citations
Classical texts
Descartes, René. Meditations on First Philosophy with Selections from the Objections and Replies, translated by John Cottingham (Cambridge: Cambridge University Press, 1986).
Locke, John. An Essay Concerning Human Understanding. 1690.
Leibniz, Gottfried. Discourse on Metaphysics and Related Writings, edited and translated by R. N. D. Martin and Stuart Brown (Manchester and New York:Manchester University Press, 1988).
Recent studies
Carruthers, Peter. Human Knowledge and Human Nature. A New Introduction to an Ancient Debate, New York : Oxford University Press, 1992.
Chomsky, Noam. Aspects of the Theory of Syntax. (Cambridge, Mass, 1965)
Kaldis, Byron. "Leibniz' Argument for Innate Ideas" in Just the Arguments: 100 of the Most Important Arguments in Western Philosophy edited by M Bruce & S Barbone (Blackwell, 2011).
Ridling, Zaine (2001). "Philosophy: Then and Now A look back at 26 centuries of thought." Types and Expressions of Rationalism, pp. 514–515. Access Foundation.
Unger, Wolfgang. "Nativism in the Light of Locke's Critique on Innate Principles." Term Paper in Phil 702, Locke's Essay. Department of Philosophy, University of Massachusetts, Amherst.
University of California Santa Barbara, Department of Philosophy: PowerPoint: Locke's attack on innatism.
External links
Essay: Nativism in the Light of Locke’s Critique on Innate Principles
The Rationalist Tradition
A priori
Epistemological theories
Rationalism | 0.775456 | 0.991086 | 0.768544 |
Transcendence (philosophy) | In philosophy, transcendence is the basic ground concept from the word's literal meaning (from Latin), of climbing or going beyond, albeit with varying connotations in its different historical and cultural stages. It includes philosophies, systems, and approaches that describe the fundamental structures of being, not as an ontology (theory of being), but as the framework of emergence and validation of knowledge of being. These definitions are generally grounded in reason and empirical observation and seek to provide a framework for understanding the world that is not reliant on religious beliefs or supernatural forces. "Transcendental" is a word derived from the scholastic, designating the extra-categorical attributes of beings.
Religious definition
In religion, transcendence refers to the aspect of God's nature and power which is wholly independent of the material universe, beyond all physical laws. This is contrasted with immanence, where a god is said to be fully present in the physical world and thus accessible to creatures in various ways. In religious experience, transcendence is a state of being that has overcome the limitations of physical existence and by some definitions has also become independent of it. This is typically manifested in prayer, séance, meditation, psychedelics and paranormal "visions".
It is affirmed in various religious traditions' concept of the divine, which contrasts with the notion of a god (or, the Absolute) that exists exclusively in the physical order (immanentism), or indistinguishable from it (pantheism). Transcendence can be attributed to the divine not only in its being, but also in its knowledge. Thus, God may transcend both the universe and knowledge (is beyond the grasp of the human mind). Although transcendence is defined as the opposite of immanence, the two are not necessarily mutually exclusive. Some theologians and metaphysicians of various religious traditions affirm that a god is both within and beyond the universe (panentheism); in it, but not of it; simultaneously pervading it and surpassing it.
Modern philosophy
The Ethics of Baruch Spinoza used the expression "transcendental terms" (in Latin: termini transcendentales) to indicate concepts like Being, Thing, Something, which are so general as not to be included in the definitions of species, genus and category. In modern philosophy, Immanuel Kant introduced a new term, transcendental, thus instituting a new, third meaning. In his theory of knowledge, this concept is concerned with the condition of possibility of knowledge itself. He also opposed the term transcendental to the term transcendent, the latter meaning "that which goes beyond" (transcends) any possible knowledge of a human being. For him transcendental meant knowledge about our cognitive faculty with regard to how objects are possible a priori. "I call all knowledge transcendental if it is occupied, not with objects, but with the way that we can possibly know objects even before we experience them." Therefore, metaphysics, as a fundamental and universal theory, turns out to be an epistemology. Transcendental philosophy, consequently, is not considered a traditional ontological form of metaphysics.
Kant equated transcendental with that which is "... in respect of the subject's faculty of cognition." Something is transcendental if it plays a role in the way in which the mind "constitutes" objects and makes it possible for us to experience them as objects in the first place. Ordinary knowledge is knowledge of objects; transcendental knowledge is knowledge of how it is possible for us to experience those objects as objects. This is based on Kant's acceptance of David Hume's argument that certain general features of objects (e.g. persistence, causal relationships) cannot be derived from the sense impressions we have of them. Kant argues that the mind must contribute those features and make it possible for us to experience objects as objects. In the central part of his Critique of Pure Reason, the "Transcendental Deduction of the Categories", Kant argues for a deep interconnection between the ability to have consciousness of self and the ability to experience a world of objects. Through a process of synthesis, the mind generates both the structure of objects and its own unity.
A metaphilosophical question discussed by many Kantian scholars is what transcendental reflection is and how transcendental reflection is itself possible. Valentin Balanovskiy shows that this is a special instrument inherent in our consciousness, something by which individuals can distinguish themselves from any other objects of reality. Stephen Palmquist argues that Kant's solution to this problem is an appeal to faith. For Kant, the "transcendent", as opposed to the "transcendental", is that which lies beyond what our faculty of knowledge can legitimately know. Hegel's counter-argument to Kant was that to know a boundary is also to be aware of what it bounds and as such what lies beyond it – in other words, to have already transcended it.
Contemporary philosophy
In phenomenology, the "transcendent" is that which transcends our own consciousness: that which is objective rather than only a phenomenon of consciousness. Jean-Paul Sartre also speaks of transcendence in his works. In Being and Nothingness, Sartre uses transcendence to describe the relation of the self to the object-oriented world, as well as our concrete relations with others. For Sartre, the for-itself is sometimes called a transcendence. Additionally, if the other is viewed strictly as an object, much like any other object, then the other is, for the for-itself, a transcendence-transcended. When the for-itself grasps the other in the other's world, and grasps the subjectivity that the other has, it is referred to as transcending-transcendence. Thus, Sartre defines relations with others in terms of transcendence.
Contemporary transcendental philosophy is developed by German philosopher Harald Holz with a holistic approach. Holz distanced transcendental philosophy from the convergence of neo-Kantianism. He critically discussed transcendental pragmatism and the relation between transcendental philosophy, neo-empiricism, and so-called postmodernism.
Comparison to religious definitions
Philosophical definitions of transcendence often emphasize the idea of going beyond or exceeding the limits of human experience, and may focus on concepts such as rationality, consciousness, or the nature of reality. These definitions are generally grounded in reason and empirical observation, and seek to provide a framework for understanding the world that is not reliant on religious beliefs or supernatural forces. Religious definitions of transcendence, on the other hand, often emphasize the idea of connecting with something beyond the self or the material world, and may focus on concepts such as God, the soul, or the afterlife. These definitions are often grounded in faith and revelation, and may be seen as offering a way to access a higher or divine reality that cannot be directly observed or explained through reason alone.
While there may be some overlap between these two definitions of transcendence, they are ultimately grounded in different epistemological frameworks and ways of understanding the world. Therefore, the scope derived from the philosophical definition of transcendence could contain the scope derived from the religious definition of transcendence, but not vice versa. This is because the philosophical definition of transcendence is broader and more abstract than the religious definition, which is more specific and focused on a particular faith or belief system.
Colloquial usage
In everyday language, "transcendence" means "going beyond", and "self-transcendence" means going beyond a prior form or state of oneself. Mystical experience is thought of as a particularly advanced state of self-transcendence, in which the sense of a separate self is abandoned. "Self-transcendence" is believed to be psychometrically measurable, and (at least partially) inherited, and has been incorporated as a personality dimension in the Temperament and Character Inventory. The discovery of this is described in the book The God Gene by Dean Hamer, although this has been criticized by commentators such as Carl Zimmer.
Comparison to Immanence
The doctrine or theory of immanence holds that the divine encompasses or is manifested in the material world. It is held by some philosophical and metaphysical theories of divine presence. Immanence is usually applied in monotheistic, pantheistic, pandeistic, or panentheistic faiths to suggest that the spiritual world permeates the mundane. It is often contrasted with theories of transcendence, in which the divine is seen to be outside the material world.
See also
God gene
Ignoramus et ignorabimus
Immanence
Maslow's hierarchy of needs
Materialism
Meta
Transcendental empiricism
Transcendental hermeneutic phenomenology
Transcendental humanism
Transcendental materialism
Transcendental Meditation
Transcendental naturalism
Transcendental nihilism
Transcendental nominalism
Transcendental realism
Transcendental Thomism
Transcendentalism
Transcendentals
Tzimtzum, the traditional kabbalistic understanding
References
Bibliography
External links
Aldous Huxley on Self-Transcendence - The Epilog of The Devils of Loudun
Stephen Palmquist, Kant's System of Perspectives (Lanham: University Press of America, 1993). See especially Part Two.
A priori
Aesthetics
The arts
Concepts in aesthetics
Concepts in epistemology
Metaphysical properties
Concepts in the philosophy of mind
Kantianism
Nonduality
Ontology
Panentheism
Pantheism
Perennial philosophy
Philosophy of religion
Religious philosophical concepts
Spirituality
Thought
Transcendentalism
Unitarian Universalism | 0.771499 | 0.995966 | 0.768387 |
Philosophical razor | In philosophy, a razor is a principle or rule of thumb that allows one to eliminate (shave off) unlikely explanations for a phenomenon, or avoid unnecessary actions.
Examples
Razors include:
Alder's razor (also known as Newton's flaming laser sword): If something cannot be settled by experiment or observation, then it is not worthy of debate.
Einstein's razor: "The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." It is often paraphrased as some variant of: "Make things as simple as possible, but no simpler."
Grice's razor (also known as Guillaume's razor): As a principle of parsimony, conversational implicatures are to be preferred over semantic context for linguistic explanations.
Hanlon's razor: Never attribute to malice that which can be adequately explained by stupidity.
Hitchens' razor: That which can be asserted without evidence can be dismissed without evidence.
Hume's guillotine: What ought to be cannot be deduced from what is; prescriptive claims cannot be derived solely from descriptive claims, and must depend on other prescriptions. "If the cause, assigned for any effect, be not sufficient to produce it, we must either reject that cause, or add to it such qualities as will give it a just proportion to the effect."
Occam's razor: Explanations which require fewer unjustified assumptions are more likely to be correct; avoid unnecessary or improbable assumptions.
Popper's falsifiability criterion: For a theory to be considered scientific, it must be falsifiable.
Sagan standard: Extraordinary claims require extraordinary evidence.
See also
Russell's teapot – Analogy formulated by Bertrand Russell to illustrate that the burden of proof lies upon a person making empirically unfalsifiable claims
References
Razors
Arguments
Philosophical analogies
Rhetorical techniques | 0.770175 | 0.997486 | 0.768239 |
Interpretation (philosophy) | A philosophical interpretation is the assignment of meanings to various concepts, symbols, or objects under consideration. Two broad types of interpretation can be distinguished: interpretations of physical objects, and interpretations of concepts (conceptual model).
Conceptual interpretations
Aesthetic interpretation
Interpretation is related to perception. An aesthetic interpretation is an explanation of the meaning of a work of art: it expresses an understanding of a work of art, a poem, a performance, or a piece of literature. Different people may interpret the same work of art differently, owing to their different perceptions or aims; all such interpretations are termed "aesthetic interpretations". Some people, instead of interpreting the work of art, prefer to interpret the artist himself. In essence, an aesthetic interpretation expresses what one believes about the subject.
Judicial interpretation
A judicial interpretation is a conceptual interpretation that explains how the judiciary should interpret the law, particularly constitutional documents and legislation (see statutory interpretation).
Logical interpretation
In logic, an interpretation is an assignment of meaning to the symbols of a language. The formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.
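As a minimal illustration of this idea (a toy sketch in Python; the tuple-based encoding of formulas and all names here are invented for the example, not drawn from any particular logic text), the same purely syntactic formula receives different truth values under different interpretations of its atomic symbols:

```python
# A toy propositional language: atomic symbols are strings, and
# compound formulas are tuples beginning with a connective name.
def evaluate(formula, interpretation):
    """Evaluate a formula under an interpretation, i.e. a dict
    assigning True/False to each atomic symbol."""
    if isinstance(formula, str):  # an atomic, uninterpreted symbol
        return interpretation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], interpretation)
    if op == "and":
        return all(evaluate(a, interpretation) for a in args)
    if op == "or":
        return any(evaluate(a, interpretation) for a in args)
    raise ValueError(f"unknown connective: {op}")

# The bare formula has no meaning by itself; interpretations supply it.
f = ("and", "p", ("not", "q"))
print(evaluate(f, {"p": True, "q": False}))  # True under this interpretation
print(evaluate(f, {"p": True, "q": True}))   # False under this one
```

The point mirrors the definition above: the symbols "p" and "q" are purely syntactic until an interpretation assigns them meanings, here in the simplest case of truth values.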
Religious interpretation
Religious interpretation and similarly religious self-interpretation define a section of religion-related studies (theology, comparative religion, reason) where attention is given to aspects of perception—where religious symbolism and the self-image of all those who hold religious views have important bearing on how others perceive their particular belief system and its adherents.
Scientific interpretation
Descriptive interpretation
An interpretation is a descriptive interpretation (also called a factual interpretation) if at least one of the undefined symbols of its formal system becomes, in the interpretation, the name of a physical object, or observable property. A descriptive interpretation is a type of interpretation used in science and logic to talk about empirical entities.
Scientific model
When scientists attempt to formalize the principles of the empirical sciences, they use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will serve as a conceptual model of reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.
See also
Philosophical theory
References
External links
Conceptual modelling | 0.782055 | 0.982334 | 0.768239 |
World | The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts.
In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind.
Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world.
In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship".
Etymology
The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of *weraz 'man' and *aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld.
The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos.
Conceptions
Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities.
Monism and pluralism
Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects.
Scientific cosmology
Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location.
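As a brief aside on the spacetime point above, the Minkowski metric of special relativity defines the interval between nearby events by combining temporal and spatial components in a single expression; this is a standard textbook formula (shown here in the common −+++ sign convention, which varies between presentations):

$$ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2$$

Because the interval is the same for all inertial observers, space and time behave as aspects of a single four-dimensional manifold rather than as separate dimensions.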
Theories of modality
The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won it. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way.
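In modal logic, this picture of possible worlds is commonly made precise by Kripke semantics. As a brief sketch of the standard truth clauses (notation varies between presentations), a proposition is necessary at a world if it holds in every world accessible from it, and possible if it holds in at least one:

$$w \Vdash \Box\varphi \iff \forall v\,(wRv \rightarrow v \Vdash \varphi) \qquad\quad w \Vdash \Diamond\varphi \iff \exists v\,(wRv \land v \Vdash \varphi)$$

Here $R$ is an accessibility relation between worlds; on the simplest readings, every possible world is accessible from every other.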
Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other. This is what makes them separate worlds.
It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents.
Phenomenology
Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns.
Philosophy of mind
In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas".
Theology
Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. Against pantheism, it holds that there is no outright identity between the two.
History of philosophy
In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century.
Plato
Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is due to the fact that physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things.
Wittgenstein
Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921.
Heidegger
Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world".
Eugen Fink
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things, it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just like the play is more than the imaginary realities appearing in it so the world is more than the actual things appearing in it.
Goodman
The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes.
Religion
Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar.
Hinduism
Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in this tradition in a broad sense, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it.
A different conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools. Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation is possible by overcoming this illusion through acquiring knowledge of Brahman.
Christianity
Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation.
Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy.
Islam
In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach.
Mandaeism
In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light above and the World of Darkness below by aether.
Related terms and problems
Worldviews
A worldview is a comprehensive representation of the world and our place in it. As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence we may encounter. On this interpretation, the term is closely associated to the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion.
Paradox of many worlds
The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. This seems to contradict the very idea of a plurality of worlds since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might be even understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth while in the colonial expression "the New World" it refers to the landmass of North and South America.
Cosmogony
Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time and matter all have their origin in one initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to sufficiently cool down for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized concerning their contents. Types often found include creation from nothing, from chaos or from a cosmic egg.
Eschatology
Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions.
In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely.
World history
World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big history employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day.
World politics
World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events like 9/11, the 2003 war in Iraq or the financial crisis of 2007–2008.
Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberalists acknowledge the importance of states but they also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, on this perspective, is more complex than a mere balance of power since more different agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. If the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it this way and may change since this is not prefigured by human nature, according to the constructivists.
See also
References
External links
World. The World Factbook. Central Intelligence Agency.
Earth
Ontology | 0.76865 | 0.999333 | 0.768138 |
Autodidacticism | Autodidacticism (also autodidactism) or self-education (also self-learning, self-study and self-teaching) is the practice of education without the guidance of schoolmasters (i.e., teachers, professors, institutions).
Overview
Autodidacts are self-taught individuals who learn about a subject of study through self-study. This educative process may involve or complement formal education. Formal education itself may have a hidden curriculum that requires self-study for the uninitiated.
Generally, autodidacts are individuals who choose the subject they will study, their studying material, and the studying rhythm and time. Autodidacts may or may not have formal education, and their study may be either a complement or an alternative to formal education. Many notable contributions have been made by autodidacts.
The self-learning curriculum is infinite. One may seek out alternative pathways in education and use these to gain competency; self-study may satisfy some curricular prerequisites for experiential education or apprenticeship.
Self-education techniques used in self-study include reading educational textbooks, watching educational videos, listening to educational audio recordings, or visiting infoshops. A learner typically sets aside some space as a learning space and uses critical thinking to develop study skills within the broader learning environment until reaching an academic comfort zone.
Etymology
The term has its roots in the Ancient Greek words αὐτός (autós, 'self') and διδακτικός (didaktikós, 'teaching'). The related term didacticism defines an artistic philosophy of education.
Terminology
Various terms are used to describe self-education. One such is heutagogy, coined in 2000 by Stewart Hase and Chris Kenyon of Southern Cross University in Australia; others are self-directed learning and self-determined learning. In the heutagogy paradigm, a learner should be at the centre of their own learning. A truly self-determined learning approach also sees the heutagogic learner exploring different approaches to knowledge in order to learn; there is an element of experimentation underpinned by a personal curiosity.
Andragogy "strive[s] for autonomy and self-direction in learning", while Heutagogy "identif[ies] the potential to learn from novel experiences as a matter of course [...] manage their own learning". Ubuntugogy is a type of cosmopolitanism that has a collectivist ethics of awareness concerning the African diaspora.
Modern era
Autodidacticism is sometimes a complement of modern formal education. As a complement to formal education, students would be encouraged to do more independent work. The Industrial Revolution created a new situation for self-directed learners.
Before the twentieth century, only a small minority of people received an advanced academic education. As stated by Joseph Whitworth in his influential 1853 report on industry, literacy rates were higher in the United States. However, even in the U.S., most children were not completing high school. High school education was necessary to become a teacher. In modern times, a larger percentage of those completing high school also attended college, usually to pursue a professional degree, such as law or medicine, or a divinity degree.
Collegiate teaching was based on the classics (Latin, philosophy, ancient history, theology) until the early nineteenth century. There were few if any institutions of higher learning offering studies in engineering or science before 1800. Institutions such as the Royal Society did much to promote scientific learning, including public lectures. In England, there were also itinerant lecturers offering their service, typically for a fee.
Prior to the nineteenth century, there were many important inventors working as millwrights or mechanics who, typically, had received an elementary education and served an apprenticeship. Mechanics, instrument makers and surveyors had various mathematics training. James Watt was a surveyor and instrument maker and is described as being "largely self-educated". Watt, like some other autodidacts of the time, became a Fellow of the Royal Society and a member of the Lunar Society. In the eighteenth century these societies often gave public lectures and were instrumental in teaching chemistry and other sciences with industrial applications which were neglected by traditional universities. Academies also arose to provide scientific and technical training.
Years of schooling in the United States began to increase sharply in the early twentieth century. This phenomenon was seemingly related to increasing mechanization displacing child labor. The automated glass bottle-making machine is said to have done more for education than child labor laws because boys were no longer needed to assist. However, the number of boys employed in this particular industry was not that large; it was mechanization in several sectors of industry that displaced child labor toward education. For males in the U.S. born 1886–90, years of school averaged 7.86, while for those born in 1926–30, years of school averaged 11.46.
One of the most recent trends in education is that the classroom environment should cater towards students' individual needs, goals, and interests. This model adopts the idea of inquiry-based learning where students are presented with scenarios to identify their own research, questions and knowledge regarding the area. As a form of discovery learning, students in today's classrooms are being provided with more opportunity to "experience and interact" with knowledge, which has its roots in autodidacticism.
Successful self-teaching can require self-discipline and reflective capability. Some research suggests that the ability to regulate one's own learning may need to be modeled to some students so that they become active learners, while others learn dynamically via a process outside conscious control. For interacting with the environment, a framework has been identified that specifies the components of any learning system: a reward function, incremental action-value functions, and action-selection methods (a minimal sketch of such a system follows below). Rewards work best in motivating learning when they are specifically chosen on an individual student basis. New knowledge must be incorporated into previously existing information as its value is to be assessed. Ultimately, these scaffolding techniques, as described by Vygotsky (1978), and problem solving methods are a result of dynamic decision making.
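The three components just named correspond to a standard decomposition used in reinforcement learning. The following minimal Python sketch is purely illustrative (the function and parameter names are invented for the example and are not drawn from the research cited above):

```python
import random

def run_learner(reward_fns, steps=1000, epsilon=0.1, alpha=0.1):
    """A toy learner built from the three named components."""
    values = [0.0] * len(reward_fns)  # incremental action-value estimates
    for _ in range(steps):
        # Action selection: epsilon-greedy, trading exploration
        # against exploitation of the current best estimate.
        if random.random() < epsilon:
            action = random.randrange(len(values))
        else:
            action = max(range(len(values)), key=values.__getitem__)
        # Reward function: the environment's feedback for this action.
        reward = reward_fns[action]()
        # Incremental update of the chosen action's value estimate.
        values[action] += alpha * (reward - values[action])
    return values

# Two actions with different average payoffs; the value estimates
# should come to favor the second.
print(run_learner([lambda: random.gauss(1.0, 1.0),
                   lambda: random.gauss(2.0, 1.0)]))
```

In this sketch the reward signal drives incremental updates to the action values, echoing the point above that rewards motivate learning most effectively when they are tailored to the individual learner.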
In his book Deschooling Society, philosopher Ivan Illich strongly criticized 20th-century educational culture and the institutionalization of knowledge and learning - arguing that institutional schooling as such is an irretrievably flawed model of education - advocating instead ad-hoc co-operative networks through which autodidacts could find others interested in teaching themselves a given skill or about a given topic, supporting one another by pooling resources, materials, and knowledge.
Secular and modern societies have given foundations for new systems of education and new kinds of autodidacts. As Internet access has become more widespread the World Wide Web (explored using search engines such as Google) in general, and websites such as Wikipedia (including parts of it that were included in a book or referenced in a reading list), YouTube, Udemy, Udacity and Khan Academy in particular, have developed as learning centers for many people to actively and freely learn together. Organizations like The Alliance for Self-Directed Education (ASDE) have been formed to publicize and provide guidance for self-directed education. Entrepreneurs like Henry Ford, Steve Jobs, and Bill Gates are considered influential self-teachers.
History
The first philosophical claim supporting an autodidactic program to the study of nature and God was in the philosophical novel Hayy ibn Yaqdhan (Alive son of the Vigilant), whose titular hero is considered the archetypal autodidact. The story is a medieval autodidactic utopia, a philosophical treatise in a literary form, which was written by the Andalusian philosopher Ibn Tufail in the 1160s in Marrakesh. It is a story about a feral boy, an autodidact prodigy who masters nature through instruments and reason, discovers laws of nature by practical exploration and experiments, and gains summum bonum through a mystical mediation and communion with God. The hero rises from his initial state of tabula rasa to a mystical or direct experience of God after passing through the necessary natural experiences. The focal point of the story is that human reason, unaided by society and its conventions or by religion, can achieve scientific knowledge, preparing the way to the mystical or highest form of human knowledge.
Commonly translated as "The Self-Taught Philosopher" or "The Improvement of Human Reason", Ibn-Tufayl's story Hayy Ibn-Yaqzan inspired debates about autodidacticism in a range of historical fields from classical Islamic philosophy through Renaissance humanism and the European Enlightenment. In his book Reading Hayy Ibn-Yaqzan: a Cross-Cultural History of Autodidacticism, Avner Ben-Zaken showed how the text traveled from late medieval Andalusia to early modern Europe and demonstrated the intricate ways in which autodidacticism was contested in and adapted to diverse cultural settings.
Autodidacticism apparently intertwined with struggles over Sufism in twelfth-century Marrakesh; controversies about the role of philosophy in pedagogy in fourteenth-century Barcelona; quarrels concerning astrology in Renaissance Florence, in which Pico della Mirandola pleaded for autodidacticism against the strong authority of intellectual establishment notions of predestination; and debates pertaining to experimentalism in seventeenth-century Oxford. Pleas for autodidacticism echoed not only within close philosophical discussions; they also surfaced in struggles for control between individuals and establishments.
In the story of Black American self-education, Heather Andrea Williams presents a historical account examining Black Americans' relationship to literacy during slavery, the Civil War, and the first decades of freedom. Many of the personal accounts tell of individuals who had to teach themselves due to racial discrimination in education.
In architecture
Many successful and influential architects, such as Mies van der Rohe, Frank Lloyd Wright, Viollet-le-Duc, and Tadao Ando, were self-taught.
There are very few countries that allow autodidacticism in architecture today. The practice of architecture and the use of the title "architect" are now protected in most countries.
Self-taught architects have generally studied and qualified in other fields such as engineering or arts and crafts. Jean Prouvé was first a structural engineer. Le Corbusier had an academic qualification in decorative arts. Tadao Ando started his career as a draftsman, and Eileen Gray studied fine arts.
When a political state starts to implement restrictions on the profession, there are issues related to the rights of established self-taught architects. In most countries the legislation includes a grandfather clause, authorising established self-taught architects to continue practicing. In the UK, the legislation allowed self-trained architects with two years of experience to register. In France, it allowed self-trained architects with five years of experience to register. In Belgium, the law allowed experienced self-trained architects in practice to register. In Italy, it allowed self-trained architects with 10 years of experience to register. In the Netherlands, the relevant legislation, along with additional procedures, allowed architects with 10 years of experience, and architects aged 40 or over with 5 years of experience, to access the register.
However, other sovereign states chose to omit such a clause, and many established and competent practitioners were stripped of their professional rights. In the Republic of Ireland, a group named "Architects' Alliance of Ireland" is defending the interests of long-established self-trained architects who were deprived of their rights to practice as per Part 3 of the Irish Building Control Act 2007.
Theoretical research such as Architecture of Change, Sustainability and Humanity in the Built Environment or older studies such as Vers une Architecture from Le Corbusier describe the practice of architecture as an environment changing with new technologies, sciences, and legislation. All architects must be autodidacts to keep up to date with new standards, regulations, or methods.
Self-taught architects such as Eileen Gray, Luis Barragán, and many others, created a system where working is also learning, where self-education is associated with creativity and productivity within a working environment.
While he was primarily interested in naval architecture, William Francis Gibbs learned his profession through his own study of battleships and ocean liners. Throughout his life he could be seen examining and changing the designs of ships that were already built, until he started his firm Gibbs and Cox.
Predictors
Openness is the largest predictor of self-directed learning out of the Big Five personality traits, though, in a study, personality only explained 10% of the variance in self-directed learning.
Future role
The role of self-directed learning continues to be investigated in learning approaches, along with other important goals of education, such as content knowledge, epistemic practices, and collaboration. As colleges and universities offer distance learning degree programs and secondary schools provide cyber school options for K–12 students, technology provides numerous resources that enable individuals to have a self-directed learning experience. Several studies show these programs function most effectively when the "teacher" or facilitator fully owns the virtual space, encouraging a broad range of experiences to come together in an online format. This allows self-directed learning to encompass a chosen path of information inquiry, self-regulation methods, and reflective discussion among experts as well as novices in a given area. Furthermore, massive open online courses (MOOCs) make autodidacticism easier and thus more common.
A 2016 Stack Overflow poll reported that 69.1% of software developers appear to be self-taught, a figure attributed to the rise of autodidacticism.
Notable individuals
Some notable autodidacts can be broadly grouped in the following interdisciplinary areas:
Artists and authors
Actors, musicians, and other artists
Architects
Engineers and inventors
Scientists, historians, and educators
Educational materials availability
Most governments have compulsory education, which may deny the right to education on the basis of discrimination; state school teachers may unwittingly indoctrinate students into the ideology of an oppressive community and government via a hidden curriculum.
See also
References
Further reading
External links
African-American society
African Americans and education
Alternative education
Applied learning
Area studies
Black studies
Cybernetics
Education activism
Education theory
Education in Poland during World War II
Education museums in the United States
Espionage
History of education in the United States
Information sensitivity
Learning
Learning methods
Learning to read
Lyceum movement
Methodology
Open content
Pedagogical movements and theories
Philosophical methodology
Philosophy of education
Play (activity)
Pre-emancipation African-American history
Problem solving methods
Research methods
Sampling (statistics)
School desegregation pioneers
Science experiments
Self-care
Teaching
Underground education
United States education law
WikiLeaks | 0.769981 | 0.9976 | 0.768133 |
Philosophy of language | Philosophy of language investigates the nature of language and the relations between language, language users, and the world. Investigations may include inquiry into the nature of meaning, intentionality, reference, the constitution of sentences, concepts, learning, and thought.
Gottlob Frege and Bertrand Russell were pivotal figures in analytic philosophy's "linguistic turn". These writers were followed by Ludwig Wittgenstein (Tractatus Logico-Philosophicus), the Vienna Circle, logical positivists, and Willard Van Orman Quine.
History
Ancient philosophy
In the West, inquiry into language stretches back to the 5th century BC with Socrates, Plato, Aristotle, and the Stoics. Linguistic speculation predated systematic descriptions of grammar which emerged in India and in Greece.
In the dialogue Cratylus, Plato considered the question of whether the names of things were determined by convention or by nature. He criticized conventionalism because it led to the bizarre consequence that anything can be conventionally denominated by any name. Hence, it cannot account for the correct or incorrect application of a name. He claimed that there was a natural correctness to names. To do this, he pointed out that compound words and phrases have a range of correctness. He also argued that primitive names had a natural correctness, because each phoneme represented basic ideas or sentiments. For example, for Plato the letter l and its sound represented the idea of softness. However, by the end of Cratylus, he had admitted that some social conventions were also involved, and that there were faults in the idea that phonemes had individual meanings. Plato is often considered a proponent of extreme realism.
Aristotle interested himself with issues of logic, categories, and the creation of meaning. He separated all things into categories of species and genus. He thought that the meaning of a predicate was established through an abstraction of the similarities between various individual things. This theory later came to be called nominalism. However, since Aristotle took these similarities to be constituted by a real commonality of form, he is more often considered a proponent of moderate realism.
The Stoics made important contributions to the analysis of grammar, distinguishing five parts of speech: nouns, verbs, appellatives (names or epithets), conjunctions and articles. They also developed a sophisticated doctrine of the lektón associated with each sign of a language, but distinct from both the sign itself and the thing to which it refers. This lektón was the meaning or sense of every term. The complete lektón of a sentence is what we would now call its proposition. Only propositions were considered truth-bearing—meaning they could be considered true or false—while sentences were simply their vehicles of expression. Different lektá could also express things besides propositions, such as commands, questions and exclamations.
Medieval philosophy
Medieval philosophers were greatly interested in the subtleties of language and its usage. For many scholastics, this interest was provoked by the necessity of translating Greek texts into Latin. There were several noteworthy philosophers of language in the medieval period. According to Peter J. King, (although this has been disputed), Peter Abelard anticipated the modern theories of reference. Also, William of Ockham's Summa Logicae brought forward one of the first serious proposals for codifying a mental language.
The scholastics of the high medieval period, such as Ockham and John Duns Scotus, considered logic to be a scientia sermocinalis (science of language). The result of their studies was the elaboration of linguistic-philosophical notions whose complexity and subtlety has only recently come to be appreciated. Many of the most interesting problems of modern philosophy of language were anticipated by medieval thinkers. The phenomena of vagueness and ambiguity were analyzed intensely, and this led to an increasing interest in problems related to the use of syncategorematic words such as and, or, not, if, and every. The study of categorematic words (or terms) and their properties was also developed greatly. One of the major developments of the scholastics in this area was the doctrine of the suppositio. The suppositio of a term is the interpretation that is given of it in a specific context. It can be proper or improper (as when it is used in metaphor, metonymy and other figures of speech). A proper suppositio, in turn, can be either formal or material, according to whether it refers to its usual non-linguistic referent (as in "Charles is a man") or to itself as a linguistic entity (as in "Charles has seven letters"). Such a classification scheme is the precursor of modern distinctions between use and mention, and between language and metalanguage, as the sketch below illustrates.
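The parallel between formal/material supposition and the modern use–mention distinction can be rendered in code. This is only an analogy; the class, attribute, and example sentences are hypothetical choices made for illustration.

```python
# "Charles is a man" uses the name to pick out its non-linguistic referent
# (formal supposition); "Charles has seven letters" mentions the name as a
# linguistic entity in its own right (material supposition).

class Person:
    def __init__(self, name: str):
        self.name = name

charles = Person("Charles")          # the referent: a (modeled) man

print(isinstance(charles, Person))   # True  -- "Charles is a man"
print(len("Charles") == 7)           # True  -- "Charles has seven letters"
```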
There is a tradition called speculative grammar which existed from the 11th to the 13th century. Leading scholars included Martin of Dacia and Thomas of Erfurt (see Modistae).
Modern philosophy
Linguists of the Renaissance and Baroque periods such as Johannes Goropius Becanus, Athanasius Kircher and John Wilkins were infatuated with the idea of a philosophical language reversing the confusion of tongues, influenced by the gradual discovery of Chinese characters and Egyptian hieroglyphs (Hieroglyphica). This thought parallels the idea that there might be a universal language of music.
European scholarship began to absorb the Indian linguistic tradition only from the mid-18th century, pioneered by Jean François Pons and Henry Thomas Colebrooke (the editio princeps of Varadarāja, a 17th-century Sanskrit grammarian, dating to 1849).
In the early 19th century, the Danish philosopher Søren Kierkegaard insisted that language ought to play a larger role in Western philosophy. He argued that philosophy has not sufficiently focused on the role language plays in cognition and that future philosophy ought to proceed with a conscious focus on language.
Contemporary philosophy
The phrase "linguistic turn" was used to describe the noteworthy emphasis that contemporary philosophers put upon language.
Language began to play a central role in Western philosophy in the early 20th century. One of the central figures involved in this development was the German philosopher Gottlob Frege, whose work on philosophical logic and the philosophy of language in the late 19th century influenced the work of 20th-century analytic philosophers Bertrand Russell and Ludwig Wittgenstein. The philosophy of language became so pervasive that for a time, in analytic philosophy circles, philosophy as a whole was understood to be a matter of philosophy of language.
In continental philosophy, the foundational work in the field was Ferdinand de Saussure's Cours de linguistique générale, published posthumously in 1916.
Major topics and subfields
Meaning
The topic that has received the most attention in the philosophy of language has been the nature of meaning, to explain what "meaning" is, and what we mean when we talk about meaning. Within this area, issues include: the nature of synonymy, the origins of meaning itself, our apprehension of meaning, and the nature of composition (the question of how meaningful units of language are composed of smaller meaningful parts, and how the meaning of the whole is derived from the meaning of its parts).
There have been several distinctive explanations of what a linguistic "meaning" is. Each has been associated with its own body of literature.
The ideational theory of meaning, most commonly associated with the British empiricist John Locke, claims that meanings are mental representations provoked by signs. Although this view of meaning has been beset by a number of problems from the beginning (see the main article for details), interest in it has been renewed by some contemporary theorists under the guise of semantic internalism.
The truth-conditional theory of meaning holds meaning to be the conditions under which an expression may be true or false. This tradition goes back at least to Frege and is associated with a rich body of modern work, spearheaded by philosophers like Alfred Tarski and Donald Davidson. (See also Wittgenstein's picture theory of language.)
The use theory of meaning, most commonly associated with the later Wittgenstein, helped inaugurate the idea of "meaning as use" and a communitarian view of language. Wittgenstein was interested in the way in which communities use language, and how far this analysis can be taken. It is also associated with P. F. Strawson, John Searle, Robert Brandom, and others.
The inferentialist theory of meaning, the view that the meaning of an expression is derived from the inferential relations that it has with other expressions. This view is thought to be descended from the use theory of meaning, and has been most notably defended by Wilfrid Sellars and Robert Brandom.
The direct reference theory of meaning, the view that the meaning of a word or expression is what it points out in the world. While views of this kind have been widely criticized regarding the use of language in general, John Stuart Mill defended a form of this view, and Saul Kripke and Ruth Barcan Marcus have both defended the application of direct reference theory to proper names.
The semantic externalist theory of meaning, according to which meaning is not a purely psychological phenomenon, because it is determined, at least in part, by features of one's environment. There are two broad subspecies of externalism: social and environmental. The first is most closely associated with Tyler Burge and the second with Hilary Putnam, Saul Kripke and others.
The verificationist theory of meaning is generally associated with the early 20th century movement of logical positivism. The traditional formulation of such a theory is that the meaning of a sentence is its method of verification or falsification. In this form, the thesis was abandoned after the acceptance by most philosophers of the Duhem–Quine thesis of confirmation holism after the publication of Quine's "Two Dogmas of Empiricism". However, Michael Dummett has advocated a modified form of verificationism since the 1970s. In this version, the comprehension (and hence meaning) of a sentence consists in the hearer's ability to recognize the demonstration (mathematical, empirical or other) of the truth of the sentence.
Pragmatic theories of meaning include any theory in which the meaning (or understanding) of a sentence is determined by the consequences of its application. Dummett attributes such a theory of meaning to Charles Sanders Peirce and other early 20th century American pragmatists.
Psychological theories of meaning, which focus on the intentions of a speaker in determining the meaning of an utterance. One notable proponent of such a view was Paul Grice, whose views also account for non-linguistic meaning (i.e., meaning as conveyed by body language, meanings as consequences, etc.).
Reference
Investigations into how language interacts with the world are called theories of reference. Gottlob Frege was an advocate of a mediated reference theory. Frege divided the semantic content of every expression, including sentences, into two components: sense and reference. The sense of a sentence is the thought that it expresses. Such a thought is abstract, universal and objective. The sense of any sub-sentential expression consists in its contribution to the thought that its embedding sentence expresses. Senses determine reference and are also the modes of presentation of the objects to which expressions refer. Referents are the objects in the world that words pick out. The senses of sentences are thoughts, while their referents are truth values (true or false). The referents of sentences embedded in propositional attitude ascriptions and other opaque contexts are their usual senses.
Bertrand Russell, in his later writings and for reasons related to his theory of acquaintance in epistemology, held that the only directly referential expressions are, what he called, "logically proper names". Logically proper names are such terms as I, now, here and other indexicals. He viewed proper names of the sort described above as "abbreviated definite descriptions" (see Theory of descriptions). Hence Joseph R. Biden may be an abbreviation for "the current President of the United States and husband of Jill Biden". Definite descriptions are denoting phrases (see "On Denoting") which are analyzed by Russell into existentially quantified logical constructions. Such phrases denote in the sense that there is an object that satisfies the description. However, such objects are not to be considered meaningful on their own, but have meaning only in the proposition expressed by the sentences of which they are a part. Hence, they are not directly referential in the same way as logically proper names, for Russell.
On Frege's account, any referring expression has a sense as well as a referent. Such a "mediated reference" view has certain theoretical advantages over Mill's view. For example, co-referential names, such as Samuel Clemens and Mark Twain, cause problems for a directly referential view because it is possible for someone to hear "Mark Twain is Samuel Clemens" and be surprised – thus, their cognitive content seems different.
Despite the differences between the views of Frege and Russell, they are generally lumped together as descriptivists about proper names. Such descriptivism was criticized in Saul Kripke's Naming and Necessity.
Kripke put forth what has come to be known as "the modal argument" (or "argument from rigidity"). Consider the name Aristotle and the descriptions "the greatest student of Plato", "the founder of logic" and "the teacher of Alexander". Aristotle obviously satisfies all of the descriptions (and many of the others we commonly associate with him), but it is not necessarily true that if Aristotle existed then Aristotle was any one, or all, of these descriptions. Aristotle may well have existed without doing any single one of the things for which he is known to posterity. He may have existed and not have become known to posterity at all or he may have died in infancy. Suppose that Aristotle is associated by Mary with the description "the last great philosopher of antiquity" and (the actual) Aristotle died in infancy. Then Mary's description would seem to refer to Plato. But this is deeply counterintuitive. Hence, names are rigid designators, according to Kripke. That is, they refer to the same individual in every possible world in which that individual exists. In the same work, Kripke articulated several other arguments against "Frege–Russell" descriptivism (see also Kripke's causal theory of reference).
The whole philosophical enterprise of studying reference has been critiqued by linguist Noam Chomsky in various works.
Composition and parts
It has long been known that there are different parts of speech. One part of the common sentence is the lexical word, a category comprising nouns, verbs, and adjectives. A major question in the field – perhaps the single most important question for formalist and structuralist thinkers – is how the meaning of a sentence emerges from its parts.
Many aspects of the problem of the composition of sentences are addressed in the field of linguistics of syntax. Philosophical semantics tends to focus on the principle of compositionality to explain the relationship between meaningful parts and whole sentences. The principle of compositionality asserts that a sentence can be understood on the basis of the meaning of the parts of the sentence (i.e., words, morphemes) along with an understanding of its structure (i.e., syntax, logic). Further, syntactic propositions are arranged into discourse or narrative structures, which also encode meanings through pragmatics like temporal relations and pronominals.
It is possible to use the concept of functions to describe more than just how lexical meanings work: they can also be used to describe the meaning of a sentence. In the sentence "The horse is red", "is red" can be considered a propositional function applied to "the horse". A propositional function is an operation of language that takes an entity (in this case, the horse) as an input and outputs a semantic fact (i.e., the proposition that is represented by "The horse is red"). In other words, a propositional function is like an algorithm. The meaning of "red" in this case is whatever takes the entity "the horse" and turns it into the statement "The horse is red".
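As a rough illustration, the propositional function can be written in the lambda notation common in formal semantics. This rendering is an expository assumption; the text above does not commit to any particular notation.

```latex
% "is red" as a propositional function from entities to propositions
\[
  \text{is red} \quad\mapsto\quad \lambda x.\, \mathrm{Red}(x)
\]
% applying it to the denotation of "the horse" yields the proposition
\[
  \bigl(\lambda x.\, \mathrm{Red}(x)\bigr)(\text{the horse}) \;=\; \mathrm{Red}(\text{the horse})
\]
```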
Linguists have developed at least two general methods of understanding the relationship between the parts of a linguistic string and how it is put together: syntactic and semantic trees. Syntactic trees draw upon the words of a sentence with the grammar of the sentence in mind; semantic trees focus upon the role of the meaning of the words and how those meanings combine to provide insight onto the genesis of semantic facts.
Mind and language
Innateness and learning
Some of the major issues at the intersection of philosophy of language and philosophy of mind are also dealt with in modern psycholinguistics. Some important questions regard the amount of innate language, if language acquisition is a special faculty in the mind, and what the connection is between thought and language.
There are three general perspectives on the issue of language learning. The first is the behaviorist perspective, which dictates that not only is the solid bulk of language learned, but it is learned via conditioning. The second is the hypothesis testing perspective, which understands the child's learning of syntactic rules and meanings to involve the postulation and testing of hypotheses, through the use of the general faculty of intelligence. The final candidate for explanation is the innatist perspective, which states that at least some of the syntactic settings are innate and hardwired, based on certain modules of the mind.
There are varying notions of the structure of the brain when it comes to language. Connectionist models emphasize the idea that a person's lexicon and their thoughts operate in a kind of distributed, associative network. Nativist models assert that there are specialized devices in the brain that are dedicated to language acquisition. Computation models emphasize the notion of a representational language of thought and the logic-like, computational processing that the mind performs over them. Emergentist models focus on the notion that natural faculties are a complex system that emerge from simpler biological parts. Reductionist models attempt to explain higher-level mental processes in terms of the basic low-level neurophysiological activity.
Communication
Firstly, this field of study seeks to better understand what speakers and listeners do with language in communication, and how it is used socially. Specific interests include the topics of language learning, language creation, and speech acts.
Secondly, the question of how language relates to the minds of both the speaker and the interpreter is investigated. Of specific interest is the grounds for successful translation of words and concepts into their equivalents in another language.
Language and thought
An important problem which touches both philosophy of language and philosophy of mind is to what extent language influences thought and vice versa. There have been a number of different perspectives on this issue, each offering a number of insights and suggestions.
Linguists Sapir and Whorf suggested that language limited the extent to which members of a "linguistic community" can think about certain subjects (a hypothesis paralleled in George Orwell's novel Nineteen Eighty-Four). In other words, language was analytically prior to thought. Philosopher Michael Dummett is also a proponent of the "language-first" viewpoint.
The stark opposite to the Sapir–Whorf position is the notion that thought (or, more broadly, mental content) has priority over language. The "knowledge-first" position can be found, for instance, in the work of Paul Grice. Further, this view is closely associated with Jerry Fodor and his language of thought hypothesis. According to his argument, spoken and written language derive their intentionality and meaning from an internal language encoded in the mind. The main argument in favor of such a view is that the structure of thoughts and the structure of language seem to share a compositional, systematic character. Another argument is that it is difficult to explain how signs and symbols on paper can represent anything meaningful unless some sort of meaning is infused into them by the contents of the mind. One of the main arguments against is that such levels of language can lead to an infinite regress. In any case, many philosophers of mind and language, such as Ruth Millikan, Fred Dretske and Fodor, have recently turned their attention to explaining the meanings of mental contents and states directly.
Another tradition of philosophers has attempted to show that language and thought are coextensive – that there is no way of explaining one without the other. Donald Davidson, in his essay "Thought and Talk", argued that the notion of belief could only arise as a product of public linguistic interaction. Daniel Dennett holds a similar interpretationist view of propositional attitudes. To an extent, the theoretical underpinnings to cognitive semantics (including the notion of semantic framing) suggest the influence of language upon thought. However, the same tradition views meaning and grammar as a function of conceptualization, making it difficult to assess in any straightforward way.
Some thinkers, like the ancient sophist Gorgias, have questioned whether or not language was capable of capturing thought at all.
There are studies suggesting that languages shape how people understand causality. Some of them were performed by Lera Boroditsky. For example, English speakers tend to say things like "John broke the vase" even for accidents, whereas Spanish or Japanese speakers would be more likely to say "the vase broke itself". In studies conducted by Caitlin Fausey at Stanford University, speakers of English, Spanish and Japanese watched videos of two people popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later everyone was asked whether they could remember who did what. Spanish and Japanese speakers did not remember the agents of accidental events as well as English speakers did.
Russian speakers, who make an extra distinction between light and dark blue in their language, are better able to visually discriminate shades of blue. The Piraha, a tribe in Brazil, whose language has only terms like few and many instead of numerals, are not able to keep track of exact quantities.
In one study German and Spanish speakers were asked to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key"—a word that is masculine in German and feminine in Spanish—the German speakers were more likely to use words like "hard", "heavy", "jagged", "metal", "serrated" and "useful" whereas Spanish speakers were more likely to say "golden", "intricate", "little", "lovely", "shiny" and "tiny". To describe a "bridge", which is feminine in German and masculine in Spanish, the German speakers said "beautiful", "elegant", "fragile", "peaceful", "pretty" and "slender", and the Spanish speakers said "big", "dangerous", "long", "strong", "sturdy" and "towering". This was the case even though all testing was done in English, a language without grammatical gender.
In a series of studies conducted by Gary Lupyan, people were asked to look at a series of images of imaginary aliens. Whether each alien was friendly or hostile was determined by certain subtle features but participants were not told what these were. They had to guess whether each alien was friendly or hostile, and after each response they were told if they were correct or not, helping them learn the subtle cues that distinguished friend from foe. A quarter of the participants were told in advance that the friendly aliens were called "leebish" and the hostile ones "grecious", while another quarter were told the opposite. For the rest, the aliens remained nameless. It was found that participants who were given names for the aliens learned to categorize the aliens far more quickly, reaching 80 per cent accuracy in less than half the time taken by those not told the names. By the end of the test, those told the names could correctly categorize 88 per cent of aliens, compared to just 80 per cent for the rest. It was concluded that naming objects helps us categorize and memorize them.
In another series of experiments, a group of people was asked to view furniture from an IKEA catalog. Half the time they were asked to label the object – whether it was a chair or lamp, for example – while the rest of the time they had to say whether or not they liked it. It was found that when asked to label items, people were later less likely to recall the specific details of products, such as whether a chair had arms or not. It was concluded that labeling objects helps our minds build a prototype of the typical object in the group at the expense of individual features.
Social interaction and language
A common claim is that language is governed by social conventions. Questions inevitably arise on surrounding topics. One question regards what a convention exactly is and how it is studied; a second regards the extent to which conventions matter in the study of language at all. David Kellogg Lewis proposed a worthy reply to the first question by expounding the view that a convention is a "rationally self-perpetuating regularity in behavior". However, this view seems to compete to some extent with the Gricean view of speaker's meaning, requiring either one (or both) to be weakened if both are to be taken as true.
Some have questioned whether or not conventions are relevant to the study of meaning at all. Noam Chomsky proposed that the study of language could be done in terms of the I-Language, or internal language of persons. If this is so, then it undermines the pursuit of explanations in terms of conventions, and relegates such explanations to the domain of metasemantics. Metasemantics is a term used by philosopher of language Robert Stainton to describe all those fields that attempt to explain how semantic facts arise. One fruitful source of research involves investigation into the social conditions that give rise to, or are associated with, meanings and languages. Etymology (the study of the origins of words) and stylistics (philosophical argumentation over what makes "good grammar", relative to a particular language) are two other examples of fields that are taken to be metasemantic.
Many separate (but related) fields have investigated the topic of linguistic convention within their own research paradigms. The presumptions that prop up each theoretical view are of interest to the philosopher of language. For instance, one of the major fields of sociology, symbolic interactionism, is based on the insight that human social organization is based almost entirely on the use of meanings. In consequence, any explanation of a social structure (like an institution) would need to account for the shared meanings which create and sustain the structure.
Rhetoric is the study of the particular words that people use to achieve the proper emotional and rational effect in the listener, be it to persuade, provoke, endear, or teach. Some relevant applications of the field include the examination of propaganda and didacticism, the examination of the purposes of swearing and pejoratives (especially how it influences the behaviors of others, and defines relationships), or the effects of gendered language. It can also be used to study linguistic transparency (or speaking in an accessible manner), as well as performative utterances and the various tasks that language can perform (called "speech acts"). It also has applications to the study and interpretation of law, and helps give insight to the logical concept of the domain of discourse.
Literary theory is a discipline that some literary theorists claim overlaps with the philosophy of language. It emphasizes the methods that readers and critics use in understanding a text. This field, an outgrowth of the study of how to properly interpret messages, is closely tied to the ancient discipline of hermeneutics.
Truth
Finally, philosophers of language investigate how language and meaning relate to truth and the reality being referred to. They tend to be less interested in which sentences are actually true, and more in what kinds of meanings can be true or false. A truth-oriented philosopher of language might wonder whether or not a meaningless sentence can be true or false, or whether or not sentences can express propositions about things that do not exist, rather than the way sentences are used.
Problems in the philosophy of language
Nature of language
In the philosophical tradition stemming from the Ancient Greeks, such as Plato and Aristotle, language is seen as a tool for making statements about reality by means of predication; e.g. "Man is a rational animal", where Man is the subject and is a rational animal is the predicate, which expresses a property of the subject. Such structures also constitute the syntactic basis of the syllogism, which remained the standard model of formal logic until the early 20th century, when it was replaced with predicate logic. In linguistics and philosophy of language, the classical model survived in the Middle Ages, and the link between Aristotelian philosophy of science and linguistics was elaborated by Thomas of Erfurt's Modistae grammar, which gives an example of the analysis of the transitive sentence: "Plato strikes Socrates", where Socrates is the object and part of the predicate.
The social and evolutionary aspects of language were discussed during the classical and mediaeval periods. Plato's dialogue Cratylus investigates the iconicity of words, arguing that words are made by "wordsmiths" and selected by those who need the words, and that the study of language is external to the philosophical objective of studying ideas. Age-of-Enlightenment thinkers accommodated the classical model with a Christian worldview, arguing that God created Man social and rational, and, out of these properties, Man created his own cultural habits including language. In this tradition, the logic of the subject-predicate structure forms a general, or 'universal' grammar, which governs thinking and underpins all languages. Variation between languages was investigated in the Port-Royal Grammar of Arnauld and Lancelot, among others, who described it as accidental and separate from the logical requirements of thought and language.
The classical view was overturned in the early 19th century by the advocates of German romanticism. Humboldt and his contemporaries questioned the existence of a universal inner form of thought. They argued that, since thinking is verbal, language must be the prerequisite for thought. Therefore, every nation has its own unique way of thinking, a worldview, which has evolved with the linguistic history of the nation. Diversity became emphasized with a focus on the uncontrollable sociohistorical construction of language. Influential romantic accounts include Grimm's sound laws of linguistic evolution, Schleicher's "Darwinian" species-language analogy, the Völkerpsychologie accounts of language by Steinthal and Wundt, and Saussure's semiology, a dyadic model of semiotics, i.e., language as a sign system with its own inner logic, separated from physical reality.
In the early 20th century, logical grammar was defended by Frege and Husserl. Husserl's 'pure logical grammar' draws from 17th-century rational universal grammar, proposing a formal semantics that links the structures of physical reality (e.g., "This paper is white") with the structures of the mind, meaning, and the surface form of natural languages. Husserl's treatise was, however, rejected in general linguistics. Instead, linguists opted for Chomsky's theory of universal grammar as an innate biological structure that generates syntax in a formalistic fashion, i.e., irrespective of meaning.
Many philosophers continue to hold the view that language is a logically based tool of expressing the structures of reality by means of predicate-argument structure. Proponents include, with different nuances, Russell, Wittgenstein, Sellars, Davidson, Putnam, and Searle. Attempts to revive logical formal semantics as a basis of linguistics followed, e.g., the Montague grammar. Despite resistance from linguists including Chomsky and Lakoff, formal semantics was established in the late twentieth century. However, its influence has been mostly limited to computational linguistics, with little impact on general linguistics.
The incompatibility with genetics and neuropsychology of Chomsky's innate grammar gave rise to new psychologically and biologically oriented theories of language in the 1980s, and these have gained influence in linguistics and cognitive science in the 21st century. Examples include Lakoff's conceptual metaphor, which argues that language arises automatically from visual and other sensory input, and different models inspired by Dawkins's memetics, a neo-Darwinian model of linguistic units as the units of natural selection. These include cognitive grammar, construction grammar, and usage-based linguistics.
Problem of universals and composition
One debate that has captured the interest of many philosophers is the debate over the meaning of universals. It might be asked, for example, what it is that the word represents when people say the word rocks. Two different answers have emerged to this question. Some have said that the expression stands for some real, abstract universal out in the world called "rocks". Others have said that the word stands for some collection of particular, individual rocks that are merely associated with a common nomenclature. The former position has been called philosophical realism, and the latter nominalism.
The issue here can be explicated in examination of the proposition "Socrates is a man".
From the realist's perspective, the connection between S and M is a connection between two abstract entities. There is an entity, "man", and an entity, "Socrates". These two things connect in some way or overlap.
From a nominalist's perspective, the connection between S and M is the connection between a particular entity (Socrates) and a vast collection of particular things (men). To say that Socrates is a man is to say that Socrates is a part of the class of "men". Another perspective is to consider "man" to be a property of the entity, "Socrates".
There is a third way, between nominalism and (extreme) realism, usually called "moderate realism" and attributed to Aristotle and Thomas Aquinas. Moderate realists hold that "man" refers to a real essence or form that is really present and identical in Socrates and all other men, but "man" does not exist as a separate and distinct entity. This is a realist position, because "man" is real, insofar as it really exists in all men; but it is a moderate realism, because "man" is not an entity separate from the men it informs.
Formal versus informal approaches
Another of the questions that has divided philosophers of language is the extent to which formal logic can be used as an effective tool in the analysis and understanding of natural languages. While most philosophers, including Gottlob Frege, Alfred Tarski and Rudolf Carnap, have been more or less skeptical about formalizing natural languages, many of them developed formal languages for use in the sciences or formalized parts of natural language for investigation. Some of the most prominent members of this tradition of formal semantics include Tarski, Carnap, Richard Montague and Donald Davidson.
On the other side of the divide, and especially prominent in the 1950s and '60s, were the so-called "ordinary language philosophers". Philosophers such as P. F. Strawson, John Langshaw Austin and Gilbert Ryle stressed the importance of studying natural language without regard to the truth-conditions of sentences and the references of terms. They did not believe that the social and practical dimensions of linguistic meaning could be captured by any attempts at formalization using the tools of logic. Logic is one thing and language is something entirely different. What is important is not expressions themselves but what people use them to do in communication.
Hence, Austin developed a theory of speech acts, which described the kinds of things which can be done with a sentence (assertion, command, inquiry, exclamation) in different contexts of use on different occasions. Strawson argued that the truth-table semantics of the logical connectives (e.g., ∧, ∨ and →) do not capture the meanings of their natural language counterparts ("and", "or" and "if-then"). While the "ordinary language" movement basically died out in the 1970s, its influence was crucial to the development of the fields of speech-act theory and the study of pragmatics. Many of its ideas have been absorbed by theorists such as Kent Bach, Robert Brandom, Paul Horwich and Stephen Neale. In recent work, the division between semantics and pragmatics has become a lively topic of discussion at the interface of philosophy and linguistics, for instance in work by Sperber and Wilson, Carston and Levinson.
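Strawson's point about the conditional can be made concrete with the truth table of the material conditional, which counts any conditional with a false antecedent as true. The following sketch simply prints that table; the "moon made of cheese" example in the comments is a standard classroom illustration, not drawn from the text above.

```python
# Truth table for the material conditional p -> q, defined as (not p) or q.
# On this semantics a conditional with a false antecedent comes out true --
# e.g., "if the moon is made of cheese, then 2 + 2 = 4" -- which is one
# reason ordinary-language philosophers denied that it captures "if-then".
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5} p->q={(not p) or q}")
```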
While keeping these traditions in mind, the question of whether or not there is any grounds for conflict between the formal and informal approaches is far from being decided. Some theorists, like Paul Grice, have been skeptical of any claims that there is a substantial conflict between logic and natural language.
Game theoretical approach
Game theory has been suggested as a tool to study the evolution of language. Researchers who have developed game-theoretic approaches to the philosophy of language include David K. Lewis, Schuhmacher, and Rubinstein.
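As an illustration of the game-theoretic approach, a Lewis-style signaling game shows how an arbitrary but self-sustaining convention pairing states with signals can emerge from repeated interaction. The sketch below uses a simple reinforcement rule; the two-state setup, the number of rounds, and the reinforcement increment are illustrative assumptions, not a model taken from the authors named above.

```python
import random

# A minimal Lewis signaling game with urn-style reinforcement learning.
STATES, SIGNALS, ACTS = [0, 1], [0, 1], [0, 1]

# Sender's weights for choosing a signal given a state;
# receiver's weights for choosing an act given a signal.
sender = {s: {m: 1.0 for m in SIGNALS} for s in STATES}
receiver = {m: {a: 1.0 for a in ACTS} for m in SIGNALS}

def draw(weights):
    # Sample a key with probability proportional to its weight.
    r = random.uniform(0, sum(weights.values()))
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key

for _ in range(10000):
    state = random.choice(STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                      # communication succeeded
        sender[state][signal] += 1.0      # reinforce the successful choices
        receiver[signal][act] += 1.0

# After many rounds the players typically settle on a "signaling system":
# an arbitrary convention mapping each state to a distinct signal.
for s in STATES:
    print(f"state {s} -> signal {max(sender[s], key=sender[s].get)}")
```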
Translation and interpretation
Translation and interpretation are two other problems that philosophers of language have attempted to confront. In the 1950s, W.V. Quine argued for the indeterminacy of meaning and reference based on the principle of radical translation. In Word and Object, Quine asks readers to imagine a situation in which they are confronted with a previously undocumented group of indigenous people and must attempt to make sense of the utterances and gestures that its members make. This is the situation of radical translation.
He claimed that, in such a situation, it is impossible in principle to be absolutely certain of the meaning or reference that a speaker of the indigenous people's language attaches to an utterance. For example, if a speaker sees a rabbit and says "gavagai", is she referring to the whole rabbit, to the rabbit's tail, or to a temporal part of the rabbit? All that can be done is to examine the utterance as a part of the overall linguistic behaviour of the individual, and then use these observations to interpret the meaning of all other utterances. From this basis, one can form a manual of translation. But, since reference is indeterminate, there will be many such manuals, no one of which is more correct than the others. For Quine, as for Wittgenstein and Austin, meaning is not something that is associated with a single word or sentence, but is rather something that, if it can be attributed at all, can only be attributed to a whole language. The resulting view is called semantic holism.
Inspired by Quine's discussion, Donald Davidson extended the idea of radical translation to the interpretation of utterances and behavior within a single linguistic community. He dubbed this notion radical interpretation. He suggested that the meaning that any individual ascribed to a sentence could only be determined by attributing meanings to many, perhaps all, of the individual's assertions, as well as their mental states and attitudes.
Vagueness
One issue that has troubled philosophers of language and logic is the problem of the vagueness of words. The specific instances of vagueness that most interest philosophers of language are those where the existence of "borderline cases" makes it seemingly impossible to say whether a predicate is true or false. Classic examples are "is tall" or "is bald", where it cannot be said that some borderline case (some given person) is tall or not-tall. In consequence, vagueness gives rise to the paradox of the heap. Many theorists have attempted to solve the paradox by way of n-valued logics, such as fuzzy logic, which have radically departed from classical two-valued logics.
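A minimal sketch of the fuzzy-logic idea: instead of forcing "is tall" to be true or false, a membership function assigns borderline cases intermediate degrees of truth. The 160–190 cm ramp below is an arbitrary illustrative choice, not a standard from the literature.

```python
def tall(height_cm: float) -> float:
    """Degree of truth, in [0, 1], for 'x is tall'."""
    if height_cm <= 160:
        return 0.0          # clearly not tall
    if height_cm >= 190:
        return 1.0          # clearly tall
    return (height_cm - 160) / 30.0   # borderline: partial truth

for h in (150, 172, 185, 195):
    print(h, round(tall(h), 2))   # borderline heights get values like 0.4
```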
Further reading
Atherton, Catherine. 1993. The Stoics on Ambiguity. Cambridge, UK: Cambridge University Press.
Denyer, Nicholas. 1991. Language, Thought and Falsehood in Ancient Greek Philosophy. London: Routledge.
Kneale, W., and M. Kneale. 1962. The Development of Logic. Oxford: Clarendon.
Modrak, Deborah K. W. 2001. Aristotle's Theory of Language and Meaning. Cambridge, UK: Cambridge University Press.
Sedley, David. 2003. Plato's Cratylus. Cambridge, UK: Cambridge University Press.
See also
Analytic philosophy
Discourse
Interpersonal communication
Linguistics
Semiotics
Theory of language
External links
Sprachlogik short articles in the philosophies of logic and language.
Glossary of Linguistic terms.
What is I-language? – Chapter 1 of I-language: An Introduction to Linguistics as Cognitive Science.
The London Philosophy Study Guide offers many suggestions on what to read, depending on the student's familiarity with the subject: Philosophy of Language.
Carnap, R., (1956). Meaning and Necessity: a Study in Semantics and Modal Logic. University of Chicago Press.
Collins, John. (2001). Truth Conditions Without Interpretation. .
Devitt, Michael and Hanley, Richard, eds. (2006) The Blackwell Guide to the Philosophy of Language. Oxford: Blackwell.
Greenberg, Mark and Harman, Gilbert. (2005). Conceptual Role Semantics. .
Hale, B. and Crispin Wright, Ed. (1999). Blackwell Companions To Philosophy. Malden, Massachusetts, Blackwell Publishers.
Lepore, Ernest and Barry C. Smith (eds). (2006). The Oxford Handbook of Philosophy of Language. Oxford University Press.
Lycan, W. G. (2008). Philosophy of Language: A Contemporary Introduction. New York, Routledge.
Miller, James. (1999). PEN-L message, Bad writing.
Searle, John (2007). Philosophy of Language: an interview with John Searle.
Stainton, Robert J. (1996). Philosophical Perspectives on Language. Peterborough, Ont., Broadview Press.
Tarski, Alfred. (1944). "The Semantical Conception of Truth".
Eco, Umberto. Semiotics and the Philosophy of Language. Indiana University Press, 1986, , .
References
Analytic philosophy | 0.769062 | 0.998766 | 0.768113 |
Fictionalism | Fictionalism is a view in philosophy that posits that statements appearing to be descriptions of the world should not be construed as such, but should instead be understood as cases of "make believe", thus allowing individuals to treat something as literally true (a "useful fiction").
Concept
Fictionalism consists in at least the following three theses:
Claims made within the domain of discourse are taken to be truth-apt; that is, true or false.
The domain of discourse is to be interpreted at face value—not reduced to meaning something else.
The aim of discourse in any given domain is not truth, but some other virtue(s) (e.g., simplicity, explanatory scope).
Two important strands of fictionalism are: modal fictionalism developed by Gideon Rosen, which states that possible worlds, regardless of whether they exist or not, may be a part of a useful discourse, and mathematical fictionalism advocated by Hartry Field.
Modal fictionalism is recognized as a further refinement of basic fictionalism, as it holds that representations of possible worlds in texts are useful fictions. On this conceptualization, it is a descriptive theory of what a text, such as the Bible, amounts to. It is also associated with linguistic ersatzism in the sense that both are views about possible worlds.
Fictionalism in the philosophy of mathematics, on the other hand, states that talk of numbers and other mathematical objects is nothing more than a convenience for computation. According to Field, there is no reason to treat parts of mathematics that involve reference to, or quantification over, abstract mathematical objects as true. In this discourse, mathematical objects are accorded the same metaphysical status as literary figures such as Macbeth.
Also in meta-ethics, there is an equivalent position called moral fictionalism (championed by Richard Joyce). Many modern versions of fictionalism are influenced by the work of Kendall Walton in aesthetics.
See also
Color fictionalism
Hans Vaihinger
Noble lie
Philosophy of color
Quietism (philosophy)
Further reading
References
External links
Philosophical methodology
Theories of deduction
Theories of truth | 0.792465 | 0.969236 | 0.768086 |
Ethical egoism | In ethical philosophy, ethical egoism is the normative position that moral agents ought to act in their own self-interest. It differs from psychological egoism, which claims that people can only act in their self-interest. Ethical egoism also differs from rational egoism, which holds that it is rational to act in one's self-interest.
Ethical egoism holds, therefore, that actions whose consequences will benefit the doer are ethical.
Ethical egoism contrasts with ethical altruism, which holds that moral agents have an obligation to help others. Both egoism and altruism contrast with ethical utilitarianism, which holds that a moral agent should treat one's self (also known as the subject) with no higher regard than one has for others, whereas egoism elevates self-interest and "the self" to a status not granted to others. Utilitarianism also holds, however, that one is not obligated to sacrifice one's own interests (as altruism demands) in order to serve the interests of others, so long as one's own interests (i.e., one's own desires or well-being) are substantially equivalent to the others' interests and well-being, though one has the choice to do so. Egoism, utilitarianism, and altruism are all forms of consequentialism, but egoism and altruism are agent-focused (i.e., subject-focused or subjective) forms, whereas utilitarianism is agent-neutral (i.e., objective and impartial): it does not treat the subject's own interests as being more or less important than the interests, desires, or well-being of others.
Ethical egoism does not, however, require moral agents to harm the interests and well-being of others when making moral deliberations; e.g., what is in an agent's self-interest may be incidentally detrimental, beneficial, or neutral in its effect on others. Individualism allows others' interests and well-being to be disregarded or not, as long as what is chosen is efficacious in satisfying the agent's self-interest. Nor does ethical egoism necessarily entail that, in pursuing self-interest, one ought always to do what one wants to do; e.g., in the long term, the fulfillment of short-term desires may prove detrimental to the self. Fleeting pleasure, then, takes a back seat to protracted eudaimonia. In the words of James Rachels, "Ethical egoism ... endorses selfishness, but it doesn't endorse foolishness."
Ethical egoism is often used as the philosophical basis for support of right-libertarianism and individualist anarchism. These are political positions based partly on a belief that individuals should not coercively prevent others from exercising freedom of action.
Forms
Ethical egoism can be broadly divided into three categories: individual, personal, and universal. An individual ethical egoist would hold that all people should do whatever benefits "my" (the individual's) self-interest; a personal ethical egoist would hold that they should act in their self-interest, but would make no claims about what anyone else ought to do; a universal ethical egoist would argue that everyone should act in ways that are in their self-interest.
History
Ethical egoism was introduced by the philosopher Henry Sidgwick in his book The Methods of Ethics, written in 1874. Sidgwick compared egoism to the philosophy of utilitarianism, writing that whereas utilitarianism sought to maximize overall pleasure, egoism focused only on maximizing individual pleasure.
Philosophers before Sidgwick have also retroactively been identified as ethical egoists. One ancient example is the philosophy of Yang Zhu (4th century BC), whose school, Yangism, views wei wo, or "everything for myself", as the only virtue necessary for self-cultivation. Ancient Greek philosophers like Plato, Aristotle and the Stoics were exponents of virtue ethics, and "did not accept the formal principle that whatever the good is, we should seek only our own good, or prefer it to the good of others." However, the beliefs of the Cyrenaics have been referred to as a "form of egoistic hedonism", and while some refer to Epicurus' hedonism as a form of virtue ethics, others argue his ethics are more properly described as ethical egoism.
Justifications
Philosopher James Rachels, in an essay that takes as its title the theory's name, outlines the three arguments most commonly touted in its favor:
"The first argument," writes Rachels, "has several variations, each suggesting the same general point:
"Each of us is intimately familiar with our own individual wants and needs. Moreover, each of us is uniquely placed to pursue those wants and needs effectively. At the same time, we know the desires and needs of others only imperfectly, and we are not well situated to pursue them. Therefore, it is reasonable to believe that if we set out to be 'our brother's keeper,' we would often bungle the job and end up doing more mischief than good."
To give charity to someone is to degrade them, implying as it does that they are reliant on such munificence and quite unable to look out for themselves. "That," reckons Rachels, "is why the recipients of 'charity' are so often resentful rather than appreciative."
Altruism, ultimately, denies an individual's value and is therefore destructive both to society and its individual components, viewing life merely as a thing to be sacrificed. Philosopher Ayn Rand is quoted as writing that, "[i]f a man accepts the ethics of altruism, his first concern is not how to live his life but how to sacrifice it." Moreover, "[t]he basic principle of altruism is that man has no right to exist for his own sake, that service to others is the only justification for his existence, and that self-sacrifice is his highest moral duty, virtue or value." Rather, she writes, "[t]he purpose of morality is to teach you, not to suffer and die, but to enjoy yourself and live."
All of our commonly accepted moral duties, from doing no harm unto others to speaking always the truth to keeping promises, are rooted in the one fundamental principle of self-interest.
It has been observed, however, that the very act of eating (especially, when there are others starving in the world) is such an act of self-interested discrimination. Ethical egoists such as Rand, who readily acknowledge the (conditional) value of others to an individual and who readily endorse empathy for others, have argued the exact reverse of Rachels' position: it is altruism which discriminates: "If the sensation of eating a cake is a value, then why is it an immoral indulgence in your stomach, but a moral goal for you to achieve in the stomach of others?" It is therefore altruism which is an arbitrary position, according to Rand.
Criticism
It has been argued that extreme ethical egoism is self-defeating. Faced with a situation of limited resources, egoists would consume as much of the resource as they could, making the overall situation worse for everybody. Egoists may respond that if the situation becomes worse for everybody, that would include the egoist, so it is not, in fact, in their rational self-interest to take things to such extremes. However, the (unregulated) tragedy of the commons and the (one-off) prisoner's dilemma are cases in which, on the one hand, it is rational for an individual to seek to take as much as possible even though that makes things worse for everybody, and on the other hand, those cases are not self-refuting since that behaviour remains rational even though it is ultimately self-defeating, i.e. self-defeating does not imply self-refuting. Egoists might respond that a tragedy of the commons, however, assumes some degree of public land. That is, a commons forbidding homesteading requires regulation. Thus, an argument against the tragedy of the commons, in this belief system, is fundamentally an argument for private property rights and the system that recognizes both property rights and rational self-interest—capitalism. More generally, egoists might say that an increasing respect for individual rights uniquely allows for increasing wealth creation and increasing usable resources despite a fixed amount of raw materials (e.g. the West pre-1776 versus post-1776, East versus West Germany, Hong Kong versus mainland China, North versus South Korea, etc.).
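The game-theoretic structure behind this objection can be made concrete with a small payoff matrix. The following is a minimal sketch of a one-off prisoner's dilemma; the specific payoff numbers are assumptions chosen only to satisfy the standard ordering (temptation > reward > punishment > sucker's payoff), not values from any cited source.

```python
# Illustrative one-off prisoner's dilemma. Each entry maps
# (my_move, other_move) -> my_payoff; the numbers are assumed, chosen only
# to satisfy the standard ordering T > R > P > S.
PAYOFF = {
    ("defect", "cooperate"): 5,     # temptation (T)
    ("cooperate", "cooperate"): 3,  # reward (R)
    ("defect", "defect"): 1,        # punishment (P)
    ("cooperate", "defect"): 0,     # sucker's payoff (S)
}

def best_response(other_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opposing move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, other_move)])

# Defection dominates: it is the best response whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves both players worse off than mutual
# cooperation. Individually rational behaviour is thus collectively
# self-defeating without being self-refuting.
assert PAYOFF[("defect", "defect")] < PAYOFF[("cooperate", "cooperate")]
```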
It is not clear how to apply a private ownership model to many examples of "commons", however. Examples include large fisheries, the atmosphere and the ocean.
Critics have pointed out what they regard as decisive problems with ethical egoism.
One is that an ethical egoist would not want ethical egoism to be universalized: since it would be in the egoist's best self-interest for others to act altruistically towards them, the egoist would not want others to act egoistically, even though that is what the theory holds to be morally binding. The egoist's moral principles would thus demand that others not follow them, which can be considered self-defeating and leads to the question: "How can ethical egoism be considered morally binding if its advocates do not want it to be universally applied?"
Another objection (e.g. by James Rachels) states that the distinction ethical egoism makes between "yourself" and "the rest" – demanding to view the interests of "yourself" as more important – is arbitrary, as no justification for it can be offered; considering that the merits and desires of "the rest" are comparable to those of "yourself" while lacking a justifiable distinction, Rachels concludes that "the rest" should be given the same moral consideration as "yourself".
Notable proponents
The term ethical egoism has been applied retroactively to philosophers such as Bernard de Mandeville and to many other materialists of his generation, although none of them declared themselves to be egoists. Note that materialism does not necessarily imply egoism, as indicated by Karl Marx, and the many other materialists who espoused forms of collectivism. It has been argued that ethical egoism can lend itself to individualist anarchism such as that of Benjamin Tucker, or the combined anarcho-communism and egoism of Emma Goldman, both of whom were proponents of many egoist ideas put forward by Max Stirner. In this context, egoism is another way of describing the sense that the common good should be enjoyed by all. However, most notable anarchists in history have been less radical, retaining altruism and a sense of the importance of the individual that is appreciable but does not go as far as egoism. Recent trends to greater appreciation of egoism within anarchism tend to come from less classical directions such as post-left anarchy or Situationism (e.g. Raoul Vaneigem). Egoism has also been referenced by anarcho-capitalists, such as Murray Rothbard.
Philosopher Max Stirner, in his book The Ego and Its Own, was the first philosopher to call himself an egoist, though his writing makes clear that he desired not a new idea of morality (ethical egoism), but rather a rejection of morality (amoralism), as a nonexistent and limiting "spook"; for this, Stirner has been described as the first individualist anarchist. Other philosophers, such as Thomas Hobbes and David Gauthier, have argued that the conflicts which arise when people each pursue their own ends can be resolved for the best of each individual only if they all voluntarily forgo some of their aims—that is, one's self-interest is often best pursued by allowing others to pursue their self-interest as well so that liberty is equal among individuals. Sacrificing one's short-term self-interest to maximize one's long-term self-interest is one form of "rational self-interest" which is the idea behind most philosophers' advocacy of ethical egoism. Egoists have also argued that one's actual interests are not immediately obvious, and that the pursuit of self-interest involves more than merely the acquisition of some good, but the maximizing of one's chances of survival and/or happiness.
Philosopher Friedrich Nietzsche suggested that egoistic or "life-affirming" behavior stimulates jealousy or "ressentiment" in others, and that this is the psychological motive for the altruism in Christianity. Sociologist Helmut Schoeck similarly considered envy the motive of collective efforts by society to reduce the disproportionate gains of successful individuals through moral or legal constraints, with altruism being primary among these. In addition, Nietzsche (in Beyond Good and Evil) and Alasdair MacIntyre (in After Virtue) have pointed out that the ancient Greeks did not associate morality with altruism in the way that post-Christian Western civilization has done.
Aristotle's view is that we have duties to ourselves as well as to other people (e.g. friends) and to the polis as a whole. The same is true for Thomas Aquinas, Christian Wolff and Immanuel Kant, who, like Aristotle, claim that there are duties to ourselves, although it has been argued that, for Aristotle, the duty to one's self is primary.
Ayn Rand argued that there is a positive harmony of interests among free, rational humans, such that no moral agent can rationally coerce another person consistently with their own long-term self-interest. Rand argued that other people are an enormous value to an individual's well-being (through education, trade and affection), but also that this value could be fully realized only under conditions of political and economic freedom. According to Rand, voluntary trade alone can assure that human interaction is mutually beneficial. Rand's student, Leonard Peikoff has argued that the identification of one's interests itself is impossible absent the use of principles, and that self-interest cannot be consistently pursued absent a consistent adherence to certain ethical principles. Recently, Rand's position has also been defended by such writers as Tara Smith, Tibor Machan, Allan Gotthelf, David Kelley, Douglas Rasmussen, Nathaniel Branden, Harry Binswanger, Andrew Bernstein, and Craig Biddle.
Philosopher David L. Norton identified himself as an "ethical individualist", and, like Rand, saw a harmony between an individual's fidelity to their own self-actualization, or "personal destiny", and the achievement of society's well-being.
See also
Adam Smith and the invisible hand
Baruch Spinoza
Behavioral economics
Cārvāka, an egoistic Indian philosophy
Ethical solipsism
Helping behavior
Objectivism
Profit motive
Rational expectations
Footnotes
References
Aristotle, Nicomachean Ethics.
Aristotle, Eudemian Ethics.
Baier, Kurt, 1990, "Egoism" in A Companion to Ethics, Peter Singer (ed.), Blackwell: Oxford.
Biddle, Craig, Loving Life: The Morality of Self-Interest and the Facts that Support It, 2002, Glen Allen.
Branden, Nathaniel, The Psychology of Self-Esteem, 1969, Nash.
Hobbes, Thomas, 1968, Leviathan, C. B. Macpherson (ed.), Harmondsworth: Penguin.
Machan, Tibor, Classical Individualism: The Supreme Importance of Each Human Being, 1998, Routledge.
Nietzsche, Friedrich, 1886, Beyond Good and Evil.
Norton, David, Personal Destinies: A Philosophy of Ethical Individualism, 1976, Princeton University Press.
Paul, E. & F. Miller & J. Paul (1997). Self-Interest. Cambridge University Press.
Peikoff, Leonard, "Why Should One Act on Principle?," The Objectivist Forum, 1988.
Rachels, James. 2008, "Ethical Egoism." In Reason & Responsibility: Readings in Some Basic Problems of Philosophy, edited by Joel Feinberg and Russ Shafer-Landau, 532–540. California: Thomson Wadsworth.
Rand, Ayn, 1964, The Virtue of Selfishness. Signet.
Rosenstand, Nina. 2000. 'Chapter 3: Myself or Others?'. In The Moral of the Story. (3rd Edition). Mountain View, CA: Mayfield Publishing: 127–167.
Schoeck, Helmut, Der Neid. Eine Theorie der Gesellschaft (Envy: A Theory of Social Behaviour), 1966, 1st English ed. 1969.
Smith, Tara, Viable Values: A Study of Life as the Root and Reward of Morality, 2000, Rowman & Littlefield.
Smith, Tara, The Virtuous Egoist: Ayn Rand's Normative Ethics, 2006, Cambridge University Press.
Waller, Bruce, N. 2005. "Egoism." In Consider Ethics: Theory, Readings, and Contemporary Issues. New York: Pearson Longman: 79–83.
External links
Merriam-Webster Dictionary entry for egoism
Egoism
Anarchist theory
Libertarianism
Individualism
Consequentialism
Philosophy of life
Ethical theories | 0.772752 | 0.993959 | 0.768084 |
Objectivism | Objectivism is a philosophical system named and developed by Russian-American writer and philosopher Ayn Rand. She described it as "the concept of man as a heroic being, with his own happiness as the moral purpose of his life, with productive achievement as his noblest activity, and reason as his only absolute".
Rand first expressed Objectivism in her fiction, most notably The Fountainhead (1943) and Atlas Shrugged (1957), and later in non-fiction essays and books. Leonard Peikoff, a professional philosopher and Rand's designated intellectual heir, later gave it a more formal structure. Peikoff characterizes Objectivism as a "closed system" insofar as its "fundamental principles" were set out by Rand and are not subject to change. However, he stated that "new implications, applications and integrations can always be discovered".
Objectivism's main tenets are that reality exists independently of consciousness, that human beings have direct contact with reality through sense perception (see direct and indirect realism), that one can attain objective knowledge from perception through the process of concept formation and inductive logic, that the proper moral purpose of one's life is the pursuit of one's own happiness (see rational egoism), that the only social system consistent with this morality is one that displays full respect for individual rights embodied in laissez-faire capitalism, and that the role of art in human life is to transform humans' metaphysical ideas by selective reproduction of reality into a physical form—a work of art—that one can comprehend and to which one can respond emotionally.
Academic philosophers have generally paid little attention to or dismissed Rand's philosophy, although a smaller number of academics do support it. Nonetheless, Objectivism has been a persistent influence among right-libertarians and American conservatives. The Objectivist movement, which Rand founded, attempts to spread her ideas to the public and in academic settings.
Philosophy
Rand originally expressed her ideas in her novels—most notably, in both The Fountainhead and Atlas Shrugged. She further elaborated on them in her periodicals The Objectivist Newsletter, The Objectivist, and The Ayn Rand Letter, and in non-fiction books such as Introduction to Objectivist Epistemology and The Virtue of Selfishness.
The name "Objectivism" derives from the idea that human knowledge and values are objective: they exist and are determined by the nature of reality, to be discovered by one's mind, and are not created by the thoughts one has. Rand stated that she chose the name because her preferred term for a philosophy based on the primacy of existence—"existentialism"—had already been taken.
Rand characterized Objectivism as "a philosophy for living on earth", based on reality, and intended as a method of defining human nature and the nature of the world in which we live.
Metaphysics: objective reality
Rand's philosophy begins with three axioms: existence, consciousness, and identity. Rand defined an axiom as "a statement that identifies the base of knowledge and of any further statement pertaining to that knowledge, a statement necessarily contained in all others whether any particular speaker chooses to identify it or not. An axiom is a proposition that defeats its opponents by the fact that they have to accept it and use it in the process of any attempt to deny it." As Objectivist philosopher Leonard Peikoff argued, Rand's argument for axioms "is not a proof that the axioms of existence, consciousness, and identity are true. It is proof that they are axioms, that they are at the base of knowledge and thus inescapable."
Rand said that existence is the perceptually self-evident fact at the base of all other knowledge, i.e., that "existence exists". She further said that to be is to be something, that "existence is identity". That is, to be is to be "an entity of a specific nature made of specific attributes". That which has no nature or attributes does not and cannot exist. The axiom of existence is conceptualized as differentiating something from nothing, while the law of identity is conceptualized as differentiating one thing from another, i.e., one's first awareness of the law of non-contradiction, another crucial base for the rest of knowledge. As Rand wrote, "A leaf ... cannot be all red and green at the same time, it cannot freeze and burn at the same time... A is A." Objectivism rejects belief in anything alleged to transcend existence.
Rand argued that consciousness is "the faculty of perceiving that which exists". As she put it, "to be conscious is to be conscious of something", that is consciousness itself cannot be distinguished or conceptualized except in relation to an independent reality. "It cannot be aware only of itself—there is no 'itself' until it is aware of something." Thus, Objectivism posits that the mind does not create reality, but rather, it is a means of discovering reality. Expressed differently, existence has "primacy" over consciousness, which must conform to it. Any other type of argument Rand termed "the primacy of consciousness", including any variant of metaphysical subjectivism or theism.
Objectivist philosophy derives its explanations of action and causation from the axiom of identity, referring to causation as "the law of identity applied to action". According to Rand, it is entities that act, and every action is the action of an entity. The way entities act is caused by the specific nature (or "identity") of those entities; if they were different, they would act differently. As with the other axioms, an implicit understanding of causation is derived from one's primary observations of causal connections among entities even before it is verbally identified and serves as the basis of further knowledge.
Epistemology: reason
According to Rand, attaining knowledge beyond what is given by perception requires both volition (or the exercise of free will) and performing a specific method of validation by observation, concept-formation, and the application of inductive and deductive reasoning. For example, a belief in dragons, however sincere, does not mean that reality includes dragons. A process of proof identifying the basis in reality of a claimed item of knowledge is necessary to establish its truth.
Objectivist epistemology begins with the principle that "consciousness is identification". This is understood to be a direct consequence of the metaphysical principle that "existence is identity". Rand defined "reason" as "the faculty that identifies and integrates the material provided by man's senses". Rand wrote "The fundamental concept of method, the one on which all the others depend, is logic. The distinguishing characteristic of logic (the art of non-contradictory identification) indicates the nature of the actions (actions of consciousness required to achieve a correct identification) and their goal (knowledge)—while omitting the length, complexity or specific steps of the process of logical inference, as well as the nature of the particular cognitive problem involved in any given instance of using logic."
According to Rand, consciousness possesses a specific and finite identity, just like everything else that exists; therefore, it must operate by a specific method of validation. An item of knowledge cannot be "disqualified" by being arrived at by a specific process in a particular form. Thus, for Rand, the fact that consciousness must itself possess identity implies the rejection of both universal skepticism based on the "limits" of consciousness, as well as any claim to revelation, emotion or faith-based belief.
Objectivist epistemology maintains that all knowledge is ultimately based on perception. "Percepts, not sensations, are the given, the self-evident." Rand considered the validity of the senses to be axiomatic and said that purported arguments to the contrary all commit the fallacy of the "stolen concept" by presupposing the validity of concepts that, in turn, presuppose the validity of the senses. She said that perception, being determined physiologically, is incapable of error. For example, optical illusions are errors in the conceptual identification of what is seen, not errors of sight itself. The validity of sense perception, therefore, is not susceptible to proof (because it is presupposed by all proof as proof is only a matter of adducing sensory evidence) nor should its validity be denied (since the conceptual tools one would have to use to do this are derived from sensory data). Perceptual error, therefore, is not possible. Rand consequently rejected epistemological skepticism, as she said that the skeptics' claim to knowledge "distorted" by the form or the means of perception is impossible.
The Objectivist theory of perception distinguishes between the form and the object of perception. The form in which an organism perceives is determined by the physiology of its sensory systems. Whatever form the organism perceives it in, what it perceives—the object of perception—is reality. Rand consequently rejected the Kantian dichotomy between "things as we perceive them" and "things as they are in themselves".
The aspect of epistemology given the most elaboration by Rand is the theory of concept-formation, which she presented in Introduction to Objectivist Epistemology. She argued that concepts are formed by a process of measurement omission.
According to Rand, "the term 'measurements omitted' does not mean, in this context, that measurements are regarded as non-existent; it means that measurements exist, but are not specified. That measurements must exist is an essential part of the process. The principle is: the relevant measurements must exist in some quantity, but may exist in any quantity."
Rand argued that concepts are organized hierarchically. Concepts such as 'dog,' which bring together "concretes" available in perception, can be differentiated (into the concepts of 'dachshund,' 'poodle,' etc.) or integrated (along with 'cat,' etc., into the concept of 'animal'). Abstract concepts such as 'animal' can be further integrated, via "abstraction from abstractions", into such concepts as 'living thing.' Concepts are formed in the context of knowledge available. A young child differentiates dogs from cats and chickens but need not explicitly differentiate them from deep-sea tube worms, or from other types of animals not yet known to him, to form a concept 'dog'.
Because of its characterization of concepts as "open-ended" classifications that go well beyond the characteristics included in their past or current definitions, Objectivist epistemology rejects the analytic-synthetic distinction as a false dichotomy and denies the possibility of a priori knowledge.
Rand rejected "feeling" as sources of knowledge. Rand acknowledged the importance of emotion for human beings, but she maintained that emotions are a consequence of the conscious or subconscious ideas that a person already accepts, not a means of achieving awareness of reality. "Emotions are not tools of cognition." Rand also rejected all forms of faith or mysticism, terms that she used synonymously. She defined faith as "the acceptance of allegations without evidence or proof, either apart from or against the evidence of one's senses and reason... Mysticism is the claim to some non-sensory, non-rational, non-definable, non-identifiable means of knowledge, such as 'instinct,' 'intuition,' 'revelation,' or any form of 'just knowing. Reliance on revelation is like reliance on a Ouija board; it bypasses the need to show how it connects its results to reality. Faith, for Rand, is not a "short-cut" to knowledge, but a "short-circuit" destroying it.
Objectivism acknowledges that human beings have limited knowledge, are vulnerable to error, and do not instantly understand all of the implications of their knowledge. According to Peikoff, one can be certain of a proposition if all of the available evidence verifies it, i.e., it can be logically integrated with the rest of one's knowledge; one is then certain within the context of the evidence.
Rand rejected the traditional rationalist/empiricist dichotomy, arguing that it embodies a false alternative: conceptually based knowledge independent of perception (rationalism) versus perceptually based knowledge independent of concepts (empiricism). Rand argued that neither is possible because the senses provide the material of knowledge while conceptual processing is also needed to establish knowable propositions.
Criticism on epistemology
The philosopher John Hospers, who was influenced by Rand and shared her moral and political opinions, disagreed with her concerning issues of epistemology. Some philosophers, such as Tibor Machan, have argued that the Objectivist epistemology is incomplete.
Psychology professor Robert L. Campbell writes that the relationship between Objectivist epistemology and cognitive science remains unclear because Rand made claims about human cognition and its development which belong to psychology, yet Rand also argued that philosophy is logically prior to psychology and in no way dependent on it.
Philosophers such as Randall Dipert have argued that Objectivist epistemology conflates the perceptual process by which judgments are formed with the way in which they are to be justified, thereby leaving it unclear how sensory data can validate propositionally structured judgments.
Ethics: self-interest
Objectivism includes an extensive treatment of ethical concerns. Rand wrote on morality in her works We the Living (1936), Atlas Shrugged (1957) and The Virtue of Selfishness (1964). Rand defines morality as "a code of values to guide man's choices and actions—the choices and actions that determine the purpose and the course of his life". Rand maintained that the first question is not what should the code of values be, the first question is "Does man need values at all—and why?" According to Rand, "it is only the concept of 'Life' that makes the concept of 'Value' possible", and "the fact that a living entity is, determines what it ought to do". Rand writes: "there is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. [...] It is only a living organism that faces a constant alternative: the issue of life or death".
Rand argued that the primary emphasis of man's free will is the choice: 'to think or not to think'. "Thinking is not an automatic function. In any hour and issue of his life, man is free to think or to evade that effort. Thinking requires a state of full, focused awareness. The act of focusing one's consciousness is volitional. Man can focus his mind to a full, active, purposefully directed awareness of reality—or he can unfocus it and let himself drift in a semiconscious daze, merely reacting to any chance stimulus of the immediate moment, at the mercy of his undirected sensory-perceptual mechanism and of any random, associational connections it might happen to make." According to Rand, therefore, possessing free will, human beings must choose their values: one does not automatically have one's own life as his ultimate value. Whether in fact a person's actions promote and fulfill his own life or not is a question of fact, as it is with all other organisms, but whether a person will act to promote his well-being is up to him, not hard-wired into his physiology. "Man has the power to act as his own destroyer—and that is the way he has acted through most of his history."
In Atlas Shrugged, Rand wrote "Man's mind is his basic tool of survival. Life is given to him, survival is not. His body is given to him, its sustenance is not. His mind is given to him, its content is not. To remain alive he must act and before he can act he must know the nature and purpose of his action. He cannot obtain his food without knowledge of food and of the way to obtain it. He cannot dig a ditch—or build a cyclotron—without a knowledge of his aim and the means to achieve it. To remain alive, he must think." In her novels, The Fountainhead and Atlas Shrugged, she also emphasizes the importance of productive work, romantic love and art to human happiness, and dramatizes the ethical character of their pursuit. The primary virtue in Objectivist ethics is rationality, as Rand meant it "the recognition and acceptance of reason as one's only source of knowledge, one's only judge of values and one's only guide to action".
The purpose of a moral code, Rand said, is to provide the principles by reference to which man can achieve the values his survival requires.
Rand's explanation of values presents the proposition that an individual's primary moral obligation is to achieve his own well-being—it is for his life and his self-interest that an individual ought to obey a moral code. Ethical egoism is a corollary of setting man's life as the moral standard. Rand believed that rational egoism is the logical consequence of humans following evidence to its logical conclusion. The only alternative would be that they live without orientation to reality.
A corollary to Rand's endorsement of self-interest is her rejection of the ethical doctrine of altruism—which she defined in the sense of Auguste Comte's altruism (he popularized the term), as a moral obligation to live for the sake of others. Rand also rejected subjectivism. A "whim-worshiper" or "hedonist", according to Rand, is not motivated by a desire to live his own human life, but by a wish to live on a sub-human level. Instead of using "that which promotes my (human) life" as his standard of value, he mistakes "that which I (mindlessly happen to) value" for a standard of value, in contradiction of the fact that, existentially, he is a human and therefore rational organism. The "I value" in whim-worship or hedonism can be replaced with "we value", "he values", "they value", or "God values", and still, it would remain dissociated from reality. Rand repudiated the equation of rational selfishness with hedonistic or whim-worshiping "selfishness-without-a-self". She said that the former is good, and the latter bad, and that there is a fundamental difference between them.
For Rand, all of the principal virtues are applications of the role of reason as man's basic tool of survival: rationality, honesty, justice, independence, integrity, productiveness, and pride—each of which she explains in some detail in "The Objectivist Ethics". The essence of Objectivist ethics is summarized by the oath her Atlas Shrugged character John Galt adhered to: "I swear—by my life and my love of it—that I will never live for the sake of another man, nor ask another man to live for mine."
Criticism on ethics
Some philosophers have criticized Objectivist ethics. The philosopher Robert Nozick argues that Rand's foundational argument in ethics is unsound because it does not explain why someone could not rationally prefer dying and having no values, in order to further some particular value. He argues that her attempt to defend the morality of selfishness is, therefore, an instance of begging the question. Nozick also argues that Rand's solution to David Hume's famous is-ought problem is unsatisfactory. In response, the philosophers Douglas B. Rasmussen and Douglas Den Uyl have argued that Nozick misstated Rand's case.
Charles King criticized Rand's example of an indestructible robot to demonstrate the value of life as incorrect and confusing. In response, Paul St. F. Blair defended Rand's ethical conclusions, while maintaining that his arguments might not have been approved by Rand.
Politics: individual rights and capitalism
Rand's defense of individual liberty integrates elements from her entire philosophy. Since reason is the means of human knowledge, it is therefore each person's most fundamental means of survival and is necessary to the achievement of values. The use or threat of force neutralizes the practical effect of an individual's reason, whether the force originates from the state or from a criminal. According to Rand, "man's mind will not function at the point of a gun". Therefore, the only type of organized human behavior consistent with the operation of reason is that of voluntary cooperation. Persuasion is the method of reason. By its nature, the overtly irrational cannot rely on the use of persuasion and must ultimately resort to force to prevail. Thus, Rand argued that reason and freedom are correlates, just as she argued that mysticism and force are corollaries. Based on this understanding of the role of reason, Objectivists claim that the initiation of physical force against the will of another is immoral, as are indirect initiations of force through threats, fraud, or breach of contract. The use of defensive or retaliatory force, on the other hand, is appropriate.
Objectivism claims that because the opportunity to use reason without the initiation of force is necessary to achieve moral values, each individual has an inalienable moral right to act as his own judgment directs and to keep the product of his effort. Peikoff, explaining the basis of rights, stated, "In content, as the founding fathers recognized, there is one fundamental right, which has several major derivatives. The fundamental right is the right to life. Its major derivatives are the right to liberty, property, and the pursuit of happiness." "A 'right' is a moral principle defining and sanctioning a man's freedom of action in a social context." These rights are specifically understood to be rights to action, not to specific results or objects, and the obligations created by rights are negative in nature: each individual must refrain from violating the rights of others. Objectivists reject alternative notions of rights, such as positive rights, collective rights, or animal rights. Objectivism claims that the only social system which fully recognizes individual rights is capitalism, specifically what Rand described as "full, pure, uncontrolled, unregulated laissez-faire capitalism". Objectivism regards capitalism as the social system which is most beneficial to the poor, but does not consider this its primary justification. Rather, it is the only moral social system. Objectivism maintains that only societies seeking to establish freedom (or free nations) have a right to self-determination.
Objectivism describes government as "the means of placing the retaliatory use of physical force under objective control—i.e., under objectively defined laws"; thus, government is both legitimate and critically important in order to protect individual rights. Rand opposed anarchism because she considered that putting police and courts on the market is an inherent miscarriage of justice. Objectivism claims that the proper functions of a government are "the police, to protect men from criminals—the armed services, to protect men from foreign invaders—the law courts, to settle disputes among men according to objective laws", the executive, and legislatures. Furthermore, in protecting individual rights, the government is acting as an agent of its citizens and "has no rights except the rights delegated to it by the citizens" and it must act in an impartial manner according to specific, objectively defined laws.
Rand argued that granting limited intellectual property monopolies to certain inventors and artists on a first-to-file basis is moral because she considered all property as fundamentally intellectual. Furthermore, the value of a commercial product derives in part from the necessary work of its inventors. However, Rand considered limits on patents and copyrights important and said that if they were granted in perpetuity, it would necessarily result in de facto collectivism.
Rand opposed racism and any legal application of racism. She considered affirmative action to be an example of legal racism. Rand advocated the right to legal abortion. Rand believed capital punishment is morally justified as retribution against a murderer, but dangerous due to the risk of mistakenly executing innocent people and facilitating state murder. She therefore said she opposed capital punishment "on epistemological, not moral, grounds". She opposed involuntary military conscription. She opposed any form of censorship, including legal restrictions on pornography, opinion or worship, famously quipping: "In the transition to statism, every infringement of human rights has begun with a given right's least attractive practitioners".
Objectivists have also opposed a number of government activities commonly endorsed by both liberals and conservatives, including antitrust laws, the minimum wage, public education, and existing child labor laws. Objectivists have argued against faith-based initiatives, displaying religious symbols in government facilities, and the teaching of "intelligent design" in public schools. Rand opposed involuntary taxation and believed government could be financed voluntarily, although she thought this could only happen after other reforms of government were implemented.
Criticism on politics
Some critics, including economists and political philosophers such as Murray Rothbard, David D. Friedman, Roy Childs, Norman P. Barry, and Chandran Kukathas, have argued that Objectivist ethics are consistent with anarcho-capitalism instead of minarchism.
Aesthetics: metaphysical value-judgments
The Objectivist theory of art derives from its epistemology, by way of "psycho-epistemology" (Rand's term for an individual's characteristic mode of functioning in acquiring knowledge). Art, according to Objectivism, serves a human cognitive need: it allows human beings to understand concepts as though they were percepts. Objectivism defines "art" as a "selective re-creation of reality according to an artist's metaphysical value-judgments"—that is, according to what the artist believes to be ultimately true and important about the nature of reality and humanity. In this respect Objectivism regards art as a way of presenting abstractions concretely, in perceptual form.
The human need for art, according to this idea, derives from the need for cognitive economy. A concept is already a sort of mental shorthand standing for a large number of concretes, allowing a human being to think indirectly or implicitly of many more such concretes than can be kept explicitly in mind. But a human being cannot keep indefinitely many concepts explicitly in mind either—and yet, according to Objectivism, they need a comprehensive conceptual framework to provide guidance in life. Art offers a way out of this dilemma by providing a perceptual, easily grasped means of communicating and thinking about a wide range of abstractions, including one's metaphysical value-judgments. Objectivism regards art as an effective way to communicate a moral or ethical ideal. Objectivism does not, however, regard art as propagandistic: even though art involves moral values and ideals, its purpose is not to educate, only to show or project. Moreover, art need not be, and usually is not, the outcome of a full-blown, explicit philosophy. Usually, it stems from an artist's sense of life (which is preconceptual and largely emotional).
The end goal of Rand's own artistic endeavors was to portray the ideal man. The Fountainhead is the best example of this effort. Rand uses the character of Roark to embody the concept of the higher man, which she believes is what great art should do: embody the characteristics of the best of humanity. This symbolism should be represented in all art; artistic expression should be an extension of the greatness in humanity.
Rand said that Romanticism was the highest school of literary art, noting that Romanticism was "based on the recognition of the principle that man possesses the faculty of volition", absent which, Rand believed, literature is robbed of dramatic power.
The term "romanticism", however, is often affiliated with emotionalism, to which Objectivism is completely opposed. Historically, many romantic artists were philosophically subjectivist. Most Objectivists who are also artists subscribe to what they term romantic realism, which is how Rand described her own work.
Development by other authors
Several authors have developed and applied Rand's ideas in their own work. Rand described Peikoff's The Ominous Parallels (1982) as "the first book by an Objectivist philosopher other than myself". In 1991, Peikoff published Objectivism: The Philosophy of Ayn Rand, a comprehensive exposition of Rand's philosophy. Chris Matthew Sciabarra discusses Rand's ideas and theorizes about their intellectual origins in Ayn Rand: The Russian Radical (1995). Surveys such as On Ayn Rand by Allan Gotthelf (1999), Ayn Rand by Tibor R. Machan (2000), and Objectivism in One Lesson by Andrew Bernstein (2009) provide briefer introductions to Rand's ideas.
Some scholars have emphasized applying Objectivism to more specific areas. Machan has developed Rand's contextual conception of human knowledge (while also drawing on the insights of J. L. Austin and Gilbert Harman) in works such as Objectivity (2004), and David Kelley has explicated Rand's epistemological ideas in works such as The Evidence of the Senses (1986) and A Theory of Abstraction (2001). Regarding the topic of ethics, Kelley has argued in works such as Unrugged Individualism (1996) and The Contested Legacy of Ayn Rand (2000) that Objectivists should pay more attention to the virtue of benevolence and place less emphasis on issues of moral sanction. Kelley's claims have been controversial, and critics Peikoff and Peter Schwartz have argued that he contradicts important principles of Objectivism. Kelley has used the term "Open Objectivism" for a version of Objectivism that involves "a commitment to reasoned, non-dogmatic discussion and debate", "the recognition that Objectivism is open to expansion, refinement, and revision", and "a policy of benevolence toward others, including fellow-travelers and critics". Arguing against Kelley, Peikoff characterized Objectivism as a "closed system" that is not subject to change.
An author who emphasizes Rand's ethics, Tara Smith, retains more of Rand's original ideas in such works as Moral Rights and Political Freedom (1995), Viable Values (2000), and Ayn Rand's Normative Ethics (2006). In collaboration with Peikoff, David Harriman has developed a theory of scientific induction based upon Rand's theory of concepts in The Logical Leap: Induction in Physics (2010).
The political aspects of Rand's philosophy are discussed by Bernstein in The Capitalist Manifesto (2005). In Capitalism: A Treatise on Economics (1996), George Reisman attempts to integrate Objectivist methodology and insights with both Classical and Austrian economics. In psychology, Professor Edwin A. Locke and Ellen Kenner have explored Rand's ideas in the publication The Selfish Path to Romance: How to Love with Passion & Reason. Other writers have explored the application of Objectivism to fields ranging from art, as in What Art Is (2000) by Louis Torres and Michelle Marder Kamhi, to teleology, as in The Biological Basis of Teleological Concepts (1990) by Harry Binswanger.
Impact
One Rand biographer says most people who read Rand's works for the first time do so in their "formative years". Rand's former protégé Nathaniel Branden referred to Rand's "especially powerful appeal to the young", while the Ayn Rand Institute has said Rand "appeals to the idealism of youth". This appeal has alarmed a number of critics of the philosophy. Many of these young people later abandon their positive opinion of Rand and are often said to have "outgrown" her ideas. Endorsers of Rand's work recognize the phenomenon, but attribute it to the loss of youthful idealism and inability to resist social pressures for intellectual conformity. In contrast, historian Jennifer Burns, in Goddess of the Market (2009), writes that some critics "dismiss Rand as a shallow thinker appealing only to adolescents", although she thinks the critics "miss her significance" as a "gateway drug" to right-wing politics.
Academic philosophers have generally dismissed Objectivism since Rand first presented it. Objectivism has been termed "fiercely anti-academic" because of Rand's criticism of contemporary intellectuals. David Sidorsky, a professor of moral and political philosophy at Columbia University, writes that Rand's work is "outside the mainstream" and is more of an ideology than a comprehensive philosophy. British philosopher Ted Honderich notes that he deliberately excluded an article on Rand from The Oxford Companion to Philosophy (Rand is, however, mentioned in the article on popular philosophy by Anthony Quinton). Rand is the subject of entries in the Stanford Encyclopedia of Philosophy, The Dictionary of Modern American Philosophers, the Internet Encyclopedia of Philosophy, The Routledge Dictionary of Twentieth-Century Political Thinkers, and The Penguin Dictionary of Philosophy. Chandran Kukathas writes in an entry about Rand in the Routledge Encyclopedia of Philosophy, "The influence of Rand's ideas was strongest among college students in the USA but attracted little attention from academic philosophers." Kukathas also writes that her defenses of capitalism and selfishness "kept her out of the intellectual mainstream".
During the 1990s, Rand's works were more likely to be encountered in American classrooms. The Ayn Rand Society, dedicated to fostering the scholarly study of Objectivism, is affiliated with the American Philosophical Association's Eastern Division. Aristotle scholar and Objectivist Allan Gotthelf, late chairman of the Society, and his colleagues argued for more academic study of Objectivism, considering the philosophy as a unique and intellectually interesting defense of classical liberalism that is worth debating. In 1999, a refereed Journal of Ayn Rand Studies began. Programs and fellowships for the study of Objectivism have been supported at the University of Pittsburgh, University of Texas at Austin and University of North Carolina at Chapel Hill.
See also
Bibliography of Ayn Rand and Objectivism
Objectivism and homosexuality
Objectivism and libertarianism
Objectivist periodicals
Philosophical fiction
References
Works cited
Further reading
External links
Ayn Rand Institute: The Center for the Advancement of Objectivism
The Atlas Society: The Center for Objectivism
Capitalism.org – an Objectivist website and publishers of Capitalism on-line magazine
The Objectivism Reference Center
American philosophy
Capitalism
Epistemological theories
Egoism
Individualism
Libertarian theory
Metaphysical theories
Philosophical schools and traditions
Philosophy and atheism
Political theories
Theories of aesthetics
Ethical theories | 0.768708 | 0.999179 | 0.768077 |
Foundationalism | Foundationalism concerns philosophical theories of knowledge resting upon non-inferential justified belief, or some secure foundation of certainty such as a conclusion inferred from a basis of sound premises. The main rival of the foundationalist theory of justification is the coherence theory of justification, whereby a body of knowledge, not requiring a secure foundation, can be established by the interlocking strength of its components, like a puzzle solved without prior certainty that each small region was solved correctly.
Identifying the alternatives as either circular reasoning or infinite regress, and thus exhibiting the regress problem, Aristotle made foundationalism his own clear choice, positing basic beliefs underpinning others. Descartes, the most famed foundationalist, discovered a foundation in the fact of his own existence and in the "clear and distinct" ideas of reason, whereas Locke found a foundation in experience. Differing foundations may reflect differing epistemological emphases—empiricists emphasizing experience, rationalists emphasizing reason—but may blend both.
In the 1930s, debate over foundationalism revived. Whereas Moritz Schlick viewed scientific knowledge as a pyramid in which a special class of statements does not require verification through other beliefs and serves as a foundation, Otto Neurath argued that scientific knowledge lacks an ultimate foundation and acts like a raft. In the 1950s, the dominance of foundationalism was challenged by a number of philosophers, such as Willard Van Orman Quine and Wilfrid Sellars. Quine's ontological relativity held that any belief is networked to one's beliefs about all of reality, and that auxiliary beliefs somewhere in the vast network are readily modified to protect desired beliefs.
Classically, foundationalism had posited infallibility of basic beliefs and deductive reasoning between beliefs—a strong foundationalism. Around 1975, weak foundationalism emerged. Thus recent foundationalists have variously allowed fallible basic beliefs, and inductive reasoning between them, either by enumerative induction or by inference to the best explanation. And whereas internalists require cognitive access to justificatory means, externalists find justification without such access.
History
Foundationalism was initiated by the French early modern philosopher René Descartes. In his Meditations, Descartes challenged the contemporary principles of philosophy by arguing that everything he knew he learnt from or through his senses. He used various arguments to challenge the reliability of the senses, citing previous errors and the possibilities that he was dreaming or being deceived by an Evil Demon which rendered all of his beliefs about the external world false. Descartes attempted to establish secure foundations for knowledge to avoid scepticism. He contrasted the information provided by the senses, which is unclear and uncertain, with the truths of geometry, which are clear and distinct. Geometrical truths are also certain and indubitable; Descartes thus attempted to find truths which were clear and distinct because they would be indubitably true and a suitable foundation for knowledge. His method was to question all of his beliefs until he reached something clear and distinct that was indubitably true. The result was his cogito ergo sum – 'I think therefore I am', or the belief that he was thinking – as his indubitable belief suitable as a foundation for knowledge. This resolved Descartes' problem of the Evil Demon. Even if his beliefs about the external world were false, his beliefs about what he was experiencing were still indubitably true, even if those perceptions do not relate to anything in the world.
Several other philosophers of the early modern period, including John Locke, G. W. Leibniz, George Berkeley, David Hume, and Thomas Reid, accepted foundationalism as well. Baruch Spinoza was interpreted as a metaphysical foundationalist by G. W. F. Hegel, a proponent of coherentism. Immanuel Kant's foundationalism rests on his theory of categories.
In late modern philosophy, foundationalism was defended by J. G. Fichte in his book Grundlage der gesamten Wissenschaftslehre (1794/1795), Wilhelm Windelband in his book Über die Gewißheit der Erkenntniss (1873), and Gottlob Frege in his book Die Grundlagen der Arithmetik (1884).
In contemporary philosophy, foundationalism has been defended by Edmund Husserl, Bertrand Russell and John McDowell.
Definition
Foundationalism is an attempt to respond to the regress problem of justification in epistemology. According to this argument, every proposition requires justification to support it, but any justification also needs to be justified itself. If this goes on ad infinitum, it is not clear how anything in the chain could be justified. Foundationalism holds that there are 'basic beliefs' which serve as foundations to anchor the rest of our beliefs. Strong versions of the theory assert that an indirectly justified belief is completely justified by basic beliefs; more moderate theories hold that indirectly justified beliefs require basic beliefs to be justified, but can be further justified by other factors.
Since ancient Greece, Western philosophy has pursued a solid foundation as the ultimate and eternal reference system for all knowledge. This foundation serves not only as a starting point but also as the fundamental basis for understanding the truth of existence. On this view, thinking is the process of proving the validity of knowledge, not of proving the rationality of the foundation from which knowledge is shaped: being the ultimate ground, the foundation is taken to be true, absolute, entire and impossible to prove. The neopragmatist philosopher Richard Rorty, a proponent of anti-foundationalism, argued that foundationalism asserts the existence of privileged representations which constitute the foundation and dominate epistemology. The earliest foundationalism is Plato's theory of Forms, in which the world of particular things is only a faint copy of the eternal Forms; on this account, understanding the Forms expressed by objects leads to acquiring all knowledge, and acquiring knowledge amounts to achieving the truth, which in turn means understanding the foundation. This idea still has some appeal, for example in international relations studies.
Classical foundationalism
Foundationalism holds basic beliefs exist, which are justified without reference to other beliefs, and that nonbasic beliefs must ultimately be justified by basic beliefs. Classical foundationalism maintains that basic beliefs must be infallible if they are to justify nonbasic beliefs, and that only deductive reasoning can be used to transfer justification from one belief to another. Laurence BonJour has argued that the classical formulation of foundationalism requires basic beliefs to be infallible, incorrigible, indubitable, and certain if they are to be adequately justified. Mental states and immediate experience are often taken as good candidates for basic beliefs because it is argued that beliefs about these do not need further support to be justified.
Modest foundationalism
As an alternative to the classic view, modest foundationalism does not require that basic perceptual beliefs are infallible, but holds that it is reasonable to assume that perceptual beliefs are justified unless evidence to the contrary exists. This is still foundationalism because it maintains that all non-basic beliefs must be ultimately justified by basic beliefs, but it does not require that basic beliefs are infallible and allows inductive reasoning as an acceptable form of inference. For example, a belief that 'I see red' could be defeated with psychological evidence showing my mind to be confused or inattentive. Modest foundationalism can also be used to avoid the problem of inference. Even if perceptual beliefs are infallible, it is not clear that they can infallibly ground empirical knowledge (even if my belief that the table looks red to me is infallible, the inference to the belief that the table actually is red might not be infallible). Modest foundationalism does not require this link between perception and reality to be so strong; our perception of a table being yellow is adequate justification to believe that this is the case, even if it is not infallible.
Reformed epistemology is a form of modest foundationalism which takes religious beliefs as basic because they are non-inferentially justified: their justification arises from religious experience, rather than prior beliefs. This takes a modest approach to foundationalism – religious beliefs are not taken to be infallible, but are assumed to be prima facie justified unless evidence arises to the contrary.
Internalism and externalism
Foundationalism can take internalist and externalist forms. Internalism requires that a believer's justification for a belief must be accessible to them for it to be justified. Foundationalist internalists have held that basic beliefs are justified by mental events or states, such as experiences, that do not constitute beliefs. Alternatively, basic beliefs may be justified by some special property of the belief itself, such as its being self-evident or infallible. Externalism maintains that it is unnecessary for the means of justification of a belief to be accessible to the believer.
Reliabilism is an externalist foundationalist theory, initially proposed by Alvin Goldman, which argues that a belief is justified if it is reliably produced, meaning that it will be probably true. Goldman distinguished between two kinds of justification for beliefs: belief-dependent and belief-independent. A belief-dependent process uses prior beliefs to produce new beliefs; a belief-independent process does not, using other stimuli instead. Beliefs produced this way are justified because the processes that cause them are reliable; this might be because we have evolved to reach good conclusions when presented with sense-data, meaning the conclusions we draw from our senses are usually true.
Criticisms
Critics of foundationalism often argue that for a belief to be justified it must be supported by other beliefs; in Donald Davidson's phrase, "only a belief can be a reason for another belief". For instance, Wilfrid Sellars argued that non-doxastic mental states cannot be reasons, and so noninferential warrant cannot be derived from them. Similarly, critics of externalist foundationalism argue that only mental states or properties the believer is aware of could make a belief justified.
Postmodernists and post-structuralists such as Richard Rorty and Jacques Derrida have attacked foundationalism on the grounds that the truth of a statement or discourse is only verifiable in accordance with other statements and discourses. Rorty in particular elaborates further on this, claiming that the individual, the community, and the human body as a whole have a 'means by which they know the world' (this entails language, culture, semiotic systems, mathematics, science etc.). In order to verify particular means, or particular statements belonging to certain means (e.g., the propositions of the natural sciences), a person would have to 'step outside' the means and critique them neutrally, in order to provide a foundation for adopting them. However, this is impossible. The only way in which one can know the world is through the means by which they know the world; a method cannot justify itself. This argument can be seen as directly related to Wittgenstein's theory of language, drawing a parallel between postmodernism and late logical positivism, united in their critique of foundationalism.
See also
Constructivist epistemology
Ethical intuitionism
Evidentialism
Foundherentism
Panrationalism
Pragmatism
References
Bibliography
External links
Philosophical analogies
Theories of justification | 0.775143 | 0.990719 | 0.767949 |
Theory of language | Theory of language is a topic in philosophy of language and theoretical linguistics. It has the goal of answering the questions "What is language?"; "Why do languages have the properties they do?"; or "What is the origin of language?". In addition to these fundamental questions, the theory of language also seeks to understand how language is acquired and used by individuals and communities. This involves investigating the cognitive and neural processes involved in language processing and production, as well as the social and cultural factors that shape linguistic behavior.
Even though much of the research in linguistics is descriptive or prescriptive, there exists an underlying assumption that terminological and methodological choices reflect the researcher's opinion of language. These choices often stem from the theoretical framework a linguist subscribes to, shaping their interpretation of linguistic phenomena. For instance, within the generative grammar framework, linguists might focus on underlying syntactic structures, while cognitive linguists might emphasize the role of conceptual metaphor. Linguists are divided into different schools of thinking, with the nature–nurture debate as the main divide. Some linguistics conferences and journals are focussed on a specific theory of language, while others disseminate a variety of views.
As in other human and social sciences, theories in linguistics can be divided into humanistic and sociobiological approaches. The same terms, for example 'rationalism', 'functionalism', 'formalism' and 'constructionism', are used with different meanings in different contexts.
Humanistic theories
Humanistic theories consider people as having an agentive role in the social construction of language. Language is primarily seen as a sociocultural phenomenon. This tradition emphasises culture, nurture, creativity and diversity. A classical rationalist approach to language stems from the philosophy of the Age of Enlightenment. Rationalist philosophers argued that people had created language in a step-by-step process to serve their need to communicate with each other. Thus, language is thought of as a rational human invention.
Logical grammar
Many philosophers of language, since Plato and Aristotle, have considered language as a manmade tool for making statements or propositions about the world on the basis of a predicate-argument structure. Especially in the classical tradition, the purpose of the sentence was considered to be to predicate about the subject. Aristotle's example is "Man is a rational animal", where Man is the subject and is a rational animal is the predicate, which attributes a property to the subject. In the twentieth century, classical logical grammar was defended by Edmund Husserl's "pure logical grammar". Husserl argues, in the spirit of seventeenth-century rational grammar, that the structures of consciousness are compositional and organized into subject-predicate structures. These give rise to the structures of semantics and syntax cross-linguistically. Categorial grammar is another example of logical grammar in the modern context.
More recently, in Donald Davidson's event semantics, for example, the verb serves as the predicate. Like in modern predicate logic, subject and object are arguments of the transitive predicate. A similar solution is found in formal semantics. Many modern philosophers continue to consider language as a logically based tool for expressing the structures of reality by means of predicate-argument structure. Examples include Bertrand Russell, Ludwig Wittgenstein, Wilfrid Sellars, Hilary Putnam, and John Searle.
Cultural–historical approaches
During the 19th century, when sociological questions remained under psychology, languages and language change were thought of as arising from human psychology and the collective unconscious mind of the community, shaped by its history, as argued by Moritz Lazarus, Heymann Steinthal and Wilhelm Wundt. Advocates of Völkerpsychologie ('folk psychology') regarded language as Volksgeist; a social phenomenon conceived as the 'spirit of the nation'.
Wundt claimed that the human mind becomes organised according to the principles of syllogistic reasoning with social progress and education. He argued for a binary-branching model for the description of the mind, and syntax. Folk psychology was imported to North American linguistics by Franz Boas and Leonard Bloomfield who were the founders of a school of thought which was later nicknamed 'American structuralism'.
Folk psychology became associated with German nationalism, and after World War I Bloomfield apparently replaced Wundt's structural psychology with Albert Paul Weiss's behavioral psychology, although Wundtian notions remained fundamental to his linguistic analysis. The Bloomfieldian school of linguistics was eventually reformed as a sociobiological approach by Noam Chomsky (see 'generative grammar' below).
Since generative grammar's popularity began to wane towards the end of the 20th century, there has been a new wave of cultural anthropological approaches to the language question sparking a modern debate on the relationship of language and culture. Participants include Daniel Everett, Jesse Prinz, Nicholas Evans and Stephen Levinson.
Structuralism: a sociological–semiotic theory
The study of culture and language developed in a different direction in Europe where Émile Durkheim successfully separated sociology from psychology, thus establishing it as an autonomous science. Ferdinand de Saussure likewise argued for the autonomy of linguistics from psychology. He created a semiotic theory which would eventually give rise to the movement in human sciences known as structuralism, followed by functionalism or functional structuralism, post-structuralism and other similar tendencies. The names structuralism and functionalism are derived from Durkheim's modification of Herbert Spencer's organicism which draws an analogy between social structures and the organs of an organism, each necessitated by its function.
Saussure approaches the essence of language from two sides. For the one, he borrows ideas from Steinthal and Durkheim, concluding that language is a 'social fact'. For the other, he creates a theory of language as a system in and for itself which arises from the association of concepts and words or expressions. Thus, language is a dual system of interactive sub-systems: a conceptual system and a system of linguistic forms. Neither of these can exist without the other because, in Saussure's notion, there are no (proper) expressions without meaning, but also no (organised) meaning without words or expressions. Language as a system does not arise from the physical world, but from the contrast between the concepts, and the contrast between the linguistic forms.
Functionalism: language as a tool for communication
There was a shift of focus in sociology in the 1920s, from structural to functional explanation, or the adaptation of the social 'organism' to its environment. Post-Saussurean linguists, led by the Prague linguistic circle, began to study the functional value of the linguistic structure, with communication taken as the primary function of language in the meaning 'task' or 'purpose'. These notions translated into an increase of interest in pragmatics, with a discourse perspective (the analysis of full texts) added to the multilayered interactive model of structural linguistics. This gave rise to functional linguistics. Some of its main concepts include information structure and economy.
Formalism: language as a mathematical–semiotic system
Structural and formal linguist Louis Hjelmslev considered the systemic organisation of the bilateral linguistic system fully mathematical, rejecting the psychological and sociological aspect of linguistics altogether. He considered linguistics as the comparison of the structures of all languages using formal grammars – semantic and discourse structures included. Hjelmslev's idea is sometimes referred to as 'formalism'.
Although generally considered as a structuralist, Lucien Tesnière regarded meaning as giving rise to expression, but not vice versa, at least as regards the relationship between semantics and syntax. He considered the semantic plane as psychological, but syntax as being based on the necessity to break the two-dimensional semantic representation into linear form.
Post-structuralism: language as a societal tool
The Saussurean idea of language as an interaction of the conceptual system and the expressive system was elaborated in philosophy, anthropology and other fields of human sciences by Claude Lévi-Strauss, Roland Barthes, Michel Foucault, Jacques Derrida, Julia Kristeva and many others. This movement was interested in the Durkheimian concept of language as a social fact or a rule-based code of conduct; but eventually rejected the structuralist idea that the individual cannot change the norm. Post-structuralists study how language affects our understanding of reality, thus serving as a tool for shaping society.
Language as an artificial construct
While the humanistic tradition stemming from 19th century Völkerpsychologie emphasises the unconscious nature of the social construction of language, some perspectives of post-structuralism and social constructionism regard human languages as man-made rather than natural. At this end of the spectrum, structural linguist Eugenio Coșeriu laid emphasis on the intentional construction of language. Daniel Everett has likewise approached the question of language construction from the point of intentionality and free will.
There were also some contacts between structural linguists and the creators of constructed languages. For example, Saussure's brother René de Saussure was an Esperanto activist, and the French functionalist André Martinet served as director of the International Auxiliary Language Association. Otto Jespersen created and proposed the international auxiliary language Novial.
Sociobiological theories
In contrast to humanistic linguistics, sociobiological approaches consider language as a biological phenomenon. Approaches to language as part of cultural evolution can be roughly divided into two main groups: genetic determinism, which argues that languages stem from the human genome; and social Darwinism, as envisioned by August Schleicher and Max Müller, which applies principles and methods of evolutionary biology to linguistics. Because sociobiological theories have been labelled as chauvinistic in the past, modern approaches, including dual inheritance theory and memetics, aim to provide more sustainable solutions to the study of biology's role in language.
Language as a genetically inherited phenomenon
Strong version ('rationalism')
The role of genes in language formation has been discussed and studied extensively. Proposing generative grammar, Noam Chomsky argues that language is fully caused by a random genetic mutation, and that linguistics is the study of universal grammar, or the structure in question. Others, including Ray Jackendoff, point out that the innate language component could be the result of a series of evolutionary adaptations; Steven Pinker argues that, because of these, people are born with a language instinct.
The random and the adaptational approach are sometimes referred to as formalism (or structuralism) and functionalism (or adaptationism), respectively, as a parallel to debates between advocates of structural and functional explanation in biology. Also known as biolinguistics, the study of linguistic structures is parallelised with that of natural formations such as ferromagnetic droplets and botanic forms. This approach became highly controversial at the end of the 20th century due to a lack of empirical support for genetics as an explanation of linguistic structures.
More recent anthropological research aims to avoid genetic determinism. Behavioural ecology and dual inheritance theory, the study of gene–culture co-evolution, emphasise the role of culture as a human invention in shaping the genes, rather than vice versa.
Weak version ('empiricism')
Some former generative grammarians argue that genes may nonetheless have an indirect effect on abstract features of language. This makes up yet another approach referred to as 'functionalism' which makes a weaker claim with respect to genetics. Instead of arguing for a specific innate structure, it is suggested that human physiology and neurological organisation may give rise to linguistic phenomena in a more abstract way.
Based on a comparison of structures from multiple languages, John A. Hawkins suggests that the brain, as a syntactic parser, may find it easier to process some word orders than others, thus explaining their prevalence. This theory remains to be confirmed by psycholinguistic studies.
Conceptual metaphor theory from George Lakoff's cognitive linguistics hypothesises that people have inherited from lower animals the ability for deductive reasoning based on visual thinking, which explains why languages make so much use of visual metaphors.
Languages as species
It was thought in early evolutionary biology that languages and species can be studied according to the same principles and methods. The idea of languages and cultures as fighting for living space became highly controversial as it was accused of being a pseudoscience that caused two world wars, and social Darwinism was banished from humanities by 1945. In the concepts of Schleicher and Müller, both endorsed by Charles Darwin, languages could be either organisms or populations.
A neo-Darwinian version of this idea was introduced as memetics by Richard Dawkins in 1976. In this thinking, ideas and cultural units, including words, are compared to viruses or replicators. Although meant as a softer alternative to genetic determinism, memetics has been widely discredited as pseudoscience, and it has failed to establish itself as a recognised field of scientific research. The language–species analogy nonetheless continues to enjoy popularity in linguistics and other human sciences. Since the 1990s there have been numerous attempts to revive it in various guises. As Jamin Pelkey explains: "Theorists who explore such analogies usually feel obliged to pin language to some specific sub-domain of biotic growth. William James selects 'zoölogical evolution', William Croft prefers botanical evolution, but most theorists zoom in to more microbiotic levels – some claiming that linguistic phenomena are analogous to the cellular level and others arguing for the genetic level of biotic growth. For others, language is a parasite; for others still, language is a virus ... The disagreements over grounding analogies do not stop here." Like many other approaches to linguistics, these, too, are collectively called 'functionalism'. They include various frameworks of usage-based linguistics, language as a complex adaptive system, construction grammar, emergent linguistics, and others.
See also
Philosophy of language
References
Theories of language | 0.778023 | 0.986965 | 0.767882 |
Ideal (ethics) | An ideal is a principle or value that one actively pursues as a goal, usually in the context of ethics, and one's prioritization of ideals can serve to indicate the extent of one's dedication to each. The belief in ideals is called ethical idealism, and the history of ethical idealism includes a variety of philosophers. In some theories of applied ethics, such as that of Rushworth Kidder, there is importance given to such orders as a way to resolve disputes. In law, for instance, a judge is sometimes called on to resolve the balance between the ideal of truth, which would advise hearing out all evidence, and the ideal of fairness. Given the complexity of putting ideals into practice, and resolving conflicts between them, it is not uncommon to see them reduced to dogma. One way to avoid this, according to Bernard Crick, is to have ideals that themselves are descriptive of a process, rather than an outcome. His political virtues try to raise the practical habits useful in resolving disputes into ideals of their own. A virtue, in general, is an ideal that one can make a habit.
See also
Dominant culture
Euthyphro dilemma
History of ethical idealism
Self-sufficiency
Social justice
References
External links
Philosophy of life
Concepts in ethics | 0.780658 | 0.98359 | 0.767847 |
Intuitionism | In the philosophy of mathematics, intuitionism, or neointuitionism (opposed to preintuitionism), is an approach where mathematics is considered to be purely the result of the constructive mental activity of humans rather than the discovery of fundamental principles claimed to exist in an objective reality. That is, logic and mathematics are not considered analytic activities wherein deep properties of objective reality are revealed and applied, but are instead considered the application of internally consistent methods used to realize more complex mental constructs, regardless of their possible independent existence in an objective reality.
Truth and proof
The fundamental distinguishing characteristic of intuitionism is its interpretation of what it means for a mathematical statement to be true. In Brouwer's original intuitionism, the truth of a mathematical statement is a subjective claim: a mathematical statement corresponds to a mental construction, and a mathematician can assert the truth of a statement only by verifying the validity of that construction by intuition. The vagueness of the intuitionistic notion of truth often leads to misinterpretations about its meaning. Kleene formally defined intuitionistic truth from a realist position, yet Brouwer would likely reject this formalization as meaningless, given his rejection of the realist/Platonist position. Intuitionistic truth therefore remains somewhat ill-defined. However, because the intuitionistic notion of truth is more restrictive than that of classical mathematics, the intuitionist must reject some assumptions of classical logic to ensure that everything they prove is in fact intuitionistically true. This gives rise to intuitionistic logic.
To an intuitionist, the claim that an object with certain properties exists is a claim that an object with those properties can be constructed. Any mathematical object is considered to be a product of a construction of a mind, and therefore, the existence of an object is equivalent to the possibility of its construction. This contrasts with the classical approach, which states that the existence of an entity can be proved by refuting its non-existence. For the intuitionist, this is not valid; the refutation of the non-existence does not mean that it is possible to find a construction for the putative object, as is required in order to assert its existence. As such, intuitionism is a variety of mathematical constructivism; but it is not the only kind.
The interpretation of negation is different in intuitionist logic than in classical logic. In classical logic, the negation of a statement asserts that the statement is false; to an intuitionist, it means the statement is refutable. There is thus an asymmetry between a positive and negative statement in intuitionism. If a statement P is provable, then P certainly cannot be refutable. But even if it can be shown that P cannot be refuted, this does not constitute a proof of P. Thus P is a stronger statement than not-not-P.
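This asymmetry can be stated schematically (a standard fact about intuitionistic propositional logic, given here for concreteness rather than drawn from any one proof calculus):

$P \rightarrow \neg\neg P$ (an intuitionistic theorem)
$\neg\neg P \rightarrow P$ (double negation elimination; not intuitionistically derivable in general)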
Similarly, to assert that A or B holds, to an intuitionist, is to claim that either A or B can be proved. In particular, the law of excluded middle, "A or not A", is not accepted as a valid principle. For example, if A is some mathematical statement that an intuitionist has not yet proved or disproved, then that intuitionist will not assert the truth of "A or not A". However, the intuitionist will accept that "A and not A" cannot be true. Thus the connectives "and" and "or" of intuitionistic logic do not satisfy de Morgan's laws as they do in classical logic.
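Spelled out as schemas (again a standard summary, not tied to any particular formal system), the classical laws divide as follows:

$\neg(P \land \neg P)$ (non-contradiction; intuitionistically valid)
$P \lor \neg P$ (excluded middle; not intuitionistically valid)
$\neg(P \lor Q) \leftrightarrow (\neg P \land \neg Q)$ (intuitionistically valid in both directions)
$(\neg P \lor \neg Q) \rightarrow \neg(P \land Q)$ (intuitionistically valid)
$\neg(P \land Q) \rightarrow (\neg P \lor \neg Q)$ (the one de Morgan direction that fails intuitionistically)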
Intuitionistic logic substitutes constructability for abstract truth and is associated with a transition from the proof of model theory to abstract truth in modern mathematics. The logical calculus preserves justification, rather than truth, across transformations yielding derived propositions. It has been taken as giving philosophical support to several schools of philosophy, most notably the Anti-realism of Michael Dummett. Thus, contrary to the first impression its name might convey, and as realized in specific approaches and disciplines (e.g. Fuzzy Sets and Systems), intuitionist mathematics is more rigorous than conventionally founded mathematics, where, ironically, the foundational elements which intuitionism attempts to construct/refute/refound are taken as intuitively given.
Infinity
Among the different formulations of intuitionism, there are several different positions on the meaning and reality of infinity.
The term potential infinity refers to a mathematical procedure in which there is an unending series of steps. After each step has been completed, there is always another step to be performed. For example, consider the process of counting: 1, 2, 3, ...
The term actual infinity refers to a completed mathematical object which contains an infinite number of elements. An example is the set of natural numbers, ℕ = {1, 2, 3, ...}.
In Cantor's formulation of set theory, there are many different infinite sets, some of which are larger than others. For example, the set of all real numbers is larger than ℕ, because any attempt to put the natural numbers into one-to-one correspondence with the real numbers will always fail: there will always be an infinite number of real numbers "left over". Any infinite set that can be placed in one-to-one correspondence with the natural numbers is said to be "countable" or "denumerable". Infinite sets larger than this are said to be "uncountable".
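A compressed sketch of Cantor's diagonal argument, included only to illustrate the classical result just described: suppose the real numbers strictly between 0 and 1 could be listed as $r_1, r_2, r_3, \ldots$, each written in decimal form as $r_n = 0.d_{n1}d_{n2}d_{n3}\ldots$ Define a new number $x = 0.e_1e_2e_3\ldots$ by setting $e_n = 5$ whenever $d_{nn} \neq 5$ and $e_n = 6$ otherwise. Then $x$ differs from every $r_n$ at the $n$-th decimal place, so $x$ appears nowhere on the list, and no such list can exhaust the real numbers.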
Cantor's set theory led to the axiomatic system of Zermelo–Fraenkel set theory (ZFC), now the most common foundation of modern mathematics. Intuitionism was created, in part, as a reaction to Cantor's set theory.
Modern constructive set theory includes the axiom of infinity from ZFC (or a revised version of this axiom) and the set of natural numbers. Most modern constructive mathematicians accept the reality of countably infinite sets (however, see Alexander Esenin-Volpin for a counter-example).
Brouwer rejected the concept of actual infinity, but admitted the idea of potential infinity.
History
Intuitionism's history can be traced to two controversies in nineteenth century mathematics.
The first of these was the invention of transfinite arithmetic by Georg Cantor and its subsequent rejection by a number of prominent mathematicians including most famously his teacher Leopold Kronecker—a confirmed finitist.
The second of these was Gottlob Frege's effort to reduce all of mathematics to a logical formulation via set theory and its derailing by a youthful Bertrand Russell, the discoverer of Russell's paradox. Frege had planned a three volume definitive work, but just as the second volume was going to press, Russell sent Frege a letter outlining his paradox, which demonstrated that one of Frege's rules of self-reference was self-contradictory. In an appendix to the second volume, Frege acknowledged that one of the axioms of his system did in fact lead to Russell's paradox.
Frege, the story goes, plunged into depression and did not publish the third volume of his work as he had planned. For more see Davis (2000) Chapters 3 and 4: Frege: From Breakthrough to Despair and Cantor: Detour through Infinity. See van Heijenoort for the original works and van Heijenoort's commentary.
These controversies are strongly linked as the logical methods used by Cantor in proving his results in transfinite arithmetic are essentially the same as those used by Russell in constructing his paradox. Hence how one chooses to resolve Russell's paradox has direct implications on the status accorded to Cantor's transfinite arithmetic.
In the early twentieth century L. E. J. Brouwer represented the intuitionist position and David Hilbert the formalist position—see van Heijenoort. Kurt Gödel offered opinions referred to as Platonist (see various sources re Gödel). Alan Turing considers:
"non-constructive systems of logic with which not all the steps in a proof are mechanical, some being intuitive". Later, Stephen Cole Kleene brought forth a more rational consideration of intuitionism in his Introduction to metamathematics (1952).
Nicolas Gisin is adopting intuitionist mathematics to reinterpret quantum indeterminacy, information theory and the physics of time.
Contributors
Henri Poincaré (preintuitionism/conventionalism)
L. E. J. Brouwer
Michael Dummett
Arend Heyting
Stephen Kleene
Branches of intuitionistic mathematics
Intuitionistic logic
Intuitionistic arithmetic
Intuitionistic type theory
Intuitionistic set theory
Intuitionistic analysis
See also
Anti-realism
BHK interpretation
Brouwer–Hilbert controversy
Computability logic
Constructive logic
Curry–Howard isomorphism
Foundations of mathematics
Fuzzy logic
Game semantics
Intuition (knowledge)
Model theory
Topos theory
Ultraintuitionism
Notes
References
"Analysis." Encyclopædia Britannica. 2006. Encyclopædia Britannica 2006 Ultimate Reference Suite DVD 15 June 2006, "Constructive analysis" (Ian Stewart, author)
W. S. Anglin, Mathematics: A Concise history and Philosophy, Springer-Verlag, New York, 1994.
In Chapter 39 Foundations, with respect to the 20th century Anglin gives very precise, short descriptions of Platonism (with respect to Godel), Formalism (with respect to Hilbert), and Intuitionism (with respect to Brouwer).
Martin Davis (ed.) (1965), The Undecidable, Raven Press, Hewlett, NY. Compilation of original papers by Gödel, Church, Kleene, Turing, Rosser, and Post. Republished as
John W. Dawson Jr., Logical Dilemmas: The Life and Work of Kurt Gödel, A. K. Peters, Wellesley, MA, 1997.
Less readable than Goldstein but, in Chapter III Excursis, Dawson gives an excellent "A Capsule History of the Development of Logic to 1928".
Rebecca Goldstein, Incompleteness: The Proof and Paradox of Kurt Godel, Atlas Books, W.W. Norton, New York, 2005.
In Chapter II Hilbert and the Formalists Goldstein gives further historical context. As a Platonist Gödel was reticent in the presence of the logical positivism of the Vienna Circle. Goldstein discusses Wittgenstein's impact and the impact of the formalists. Goldstein notes that the intuitionists were even more opposed to Platonism than Formalism.
Jacques Hartong and Georges Reeb, Intuitionnisme 84 (first published in La Mathématique Non-standard, éditions du C.N.R.S.)
A reevaluation of intuitionism, from the point of view (among others) of constructive mathematics and non-standard analysis.
van Heijenoort, J., From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, 1967. Reprinted with corrections, 1977. The following papers appear in van Heijenoort:
L.E.J. Brouwer, 1923, On the significance of the principle of excluded middle in mathematics, especially in function theory [reprinted with commentary, p. 334, van Heijenoort]
Andrei Nikolaevich Kolmogorov, 1925, On the principle of excluded middle, [reprinted with commentary, p. 414, van Heijenoort]
L.E.J. Brouwer, 1927, On the domains of definitions of functions, [reprinted with commentary, p. 446, van Heijenoort]
Although not directly germane, in his (1923) Brouwer uses certain words defined in this paper.
L.E.J. Brouwer, 1927(2), Intuitionistic reflections on formalism, [reprinted with commentary, p. 490, van Heijenoort]
Jacques Herbrand, (1931b), "On the consistency of arithmetic", [reprinted with commentary, p. 618ff, van Heijenoort]
From van Heijenoort's commentary it is unclear whether or not Herbrand was a true "intuitionist"; Gödel (1963) asserted that indeed "...Herbrand was an intuitionist". But van Heijenoort says Herbrand's conception was "on the whole much closer to that of Hilbert's word 'finitary' ('finit')" than to "intuitionistic" as applied to Brouwer's doctrine.
Arend Heyting: Intuitionism: An Introduction, North-Holland Publishing Co., Amsterdam, 1956.
Stephen Cole Kleene, Introduction to Metamathematics, North-Holland Publishing Co., Amsterdam, 1952.
In Chapter III A Critique of Mathematical Reasoning, §11. The paradoxes, Kleene discusses Intuitionism and Formalism in depth. Throughout the rest of the book he treats, and compares, both Formalist (classical) and Intuitionist logics with an emphasis on the former.
Stephen Cole Kleene and Richard Eugene Vesley, The Foundations of Intuitionistic Mathematics, North-Holland Publishing Co. Amsterdam, 1965. The lead sentence tells it all "The constructive tendency in mathematics...". A text for specialists, but written in Kleene's wonderfully-clear style.
A. A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e. Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60–51085.] A secondary reference for specialists: Markov opined that "The entire significance for mathematics of rendering more precise the concept of algorithm emerges, however, in connection with the problem of a constructive foundation for mathematics....[p. 3, italics added.] Markov believed that further applications of his work "merit a special book, which the author hopes to write in the future" (p. 3). Sadly, said work apparently never appeared.
Hilary Putnam and Paul Benacerraf, Philosophy of Mathematics: Selected Readings, Englewood Cliffs, N.J.: Prentice-Hall, 1964. 2nd ed., Cambridge: Cambridge University Press, 1983.
Part I. The foundation of mathematics, Symposium on the foundations of mathematics
Rudolf Carnap, The logicist foundations of mathematics, p. 41
Arend Heyting, The intuitionist foundations of mathematics, p. 52
Johann von Neumann, The formalist foundations of mathematics, p. 61
Arend Heyting, Disputation, p. 66
L. E. J. Brouwer, Intuitionnism and formalism, p. 77
L. E. J. Brouwer, Consciousness, philosophy, and mathematics, p. 90
Constance Reid, Hilbert, Copernicus – Springer-Verlag, 1st edition 1970, 2nd edition 1996.
Definitive biography of Hilbert places his "Program" in historical context together with the subsequent fighting, sometimes rancorous, between the Intuitionists and the Formalists.
Paul Rosenbloom, The Elements of Mathematical Logic, Dover Publications Inc, Mineola, New York, 1950.
In a style more of Principia Mathematica – many symbols, some antique, some from German script. Very good discussions of intuitionism in the following locations: pages 51–58 in Section 4 Many Valued Logics, Modal Logics, Intuitionism; pages 69–73 Chapter III The Logic of Propositional Functions Section 1 Informal Introduction; and p. 146-151 Section 7 the Axiom of Choice.
External links
Epistemology
Constructivism (mathematics)
Philosophy of mathematics
Egalitarianism | Egalitarianism, or equalitarianism, is a school of thought within political philosophy that builds on the concept of social equality, prioritizing it for all people. Egalitarian doctrines are generally characterized by the idea that all humans are equal in fundamental worth or moral status. As such, all people should be accorded equal rights and treatment under the law. Egalitarian doctrines have supported many modern social movements, including the Enlightenment, feminism, civil rights, and international human rights.
One key aspect of egalitarianism is its emphasis on equal opportunities for all individuals, regardless of their background or circumstances. This means ensuring that everyone has access to the same resources, education, and opportunities to succeed in life. By promoting equal opportunities, egalitarianism aims to level the playing field and reduce disparities that result from social inequalities.
Forms
Some specifically focused egalitarian concerns include communism, legal egalitarianism, luck egalitarianism, political egalitarianism, gender egalitarianism, racial equality, equality of opportunity, and Christian egalitarianism. Common forms of egalitarianism include political and philosophical.
Legal egalitarianism
One argument is that liberalism provides democratic societies with the means to carry out civic reform by providing a framework for developing public policy and providing the correct conditions for individuals to achieve civil rights. There are two major types of equality:
Formal equality: individual merit-based equality of opportunity.
Substantive equality: moves away from individual merit-based comparison towards equality of outcomes for groups and social equity.
Equality of person
The English Bill of Rights of 1689 and the United States Constitution use only the term person in operative language involving fundamental rights and responsibilities, except for a reference to men in the English Bill of Rights regarding men on trial for treason; and a rule of proportional Congressional representation in the 14th Amendment to the United States Constitution.
Like the rest of the Constitution, the 14th Amendment to the United States Constitution uses the term person in its operative language, stating that "nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws".
Gender equality
The motto "Liberté, égalité, fraternité" was used during the French Revolution and is still used as an official motto of the French government. The 1789 Declaration of the Rights of Man and of the Citizen is likewise framed on this basis of equal rights for humankind.
The Declaration of Independence of the United States is an example of an assertion of equality of men, as "all men are created equal"; the wording of men and man is a reference to both men and women, i.e., mankind. John Locke is sometimes considered the founder of this form. Many state constitutions in the United States also use the rights of man language rather than rights of person, since the noun man has always referred to and included both men and women.
The Tunisian Constitution of 2014 provides that "men and women shall be equal in their rights and duties".
Feminism is informed by egalitarian philosophy, being a gender-focused philosophy of equality. Feminism is distinguished from egalitarianism by also existing as a political and social movement.
Social egalitarianism
At a cultural level, egalitarian theories have developed in sophistication and acceptance during the past two hundred years. Among the notable broadly egalitarian philosophies are socialism, communism, social anarchism, libertarian socialism, left-libertarianism, and progressivism, some of which propound economic egalitarianism. Anti-egalitarianism or elitism is opposition to egalitarianism.
Economic
An early example of what might be described as outcome-based economic egalitarianism is the Chinese philosophy of agriculturalism, which held that the economic policies of a country need to be based upon egalitarian self-sufficiency.
In socialism, social ownership of means of production is sometimes considered to be a form of economic egalitarianism because in an economy characterized by social ownership the surplus product generated by industry would accrue to the population as a whole as opposed to a class of private owners, thereby granting each increased autonomy and greater equality in their relationships with one another. Although the economist Karl Marx is sometimes mistaken to be an egalitarian, Marx eschewed normative theorizing on moral principles altogether. Marx did have a theory of the evolution of moral principles concerning specific economic systems.
The American economist John Roemer has put forth a new perspective on equality and its relationship to socialism. Roemer attempts to reformulate Marxist analysis to accommodate normative principles of distributive justice, shifting the argument for socialism away from purely technical and materialist reasons to one of distributive justice. Roemer argues that, by the standard of distributive justice, the traditional definition of socialism, based on the principle that individual compensation be proportional to the value of the labor one expends in production ("To each according to his contribution"), is inadequate. Roemer concludes that egalitarians must reject socialism as it is classically defined in order for equality to be realized.
The egalitarian management style seeks to democratize power, decision-making, and responsibility, distributing them more evenly among all members of a team or organization.
Egalitarianism and non-human animals
Many philosophers, including Ingmar Persson, Peter Vallentyne, Nils Holtug, Catia Faria and Lewis Gompertz, have argued that egalitarianism implies that the interests of non-human animals must be taken into account as well. Philosopher Oscar Horta has further argued that egalitarianism implies rejecting speciesism, ceasing to exploit non-human animals and aiding animals suffering in nature. Furthermore, Horta argues that non-human animals should be prioritized since they are worse off than humans.
Religious and spiritual egalitarianism
Christianity
In 1957, Martin Luther King Jr. quoted Galatians 3:28 ("There is neither Jew nor Greek, slave nor free, male nor female, for you are all one in Christ Jesus") in a pamphlet opposing racial segregation in the United States. He wrote, "Racial segregation is a blatant denial of the unity which we all have in Christ." He also alluded to that verse at the end of his 1963 "I Have a Dream" speech. The verse is cited to support an egalitarian interpretation of Christianity. According to Jakobus M. Vorster, the central question debated by theologians is whether the statement about ecclesiastical relationships can be translated into a Christian-ethical norm for all human relationships. Vorster argues that it can, and that the verse provides a Christian foundation for the promotion of human rights and equality, in contrast to "patriarchy, racism and exploitation" which in his opinion are caused by human sinfulness. Karin Neutel notes how some apply the philosophy of Paul's statement to include sexuality, health and race saying "[The original] three pairs must have been as relevant in the first century, as the additional categories are today." She argues that the verse points to a utopian, cosmopolitan community.
Islam
The verse 49:13 of The Quran states: "O mankind, indeed We have created you from male and female and made you peoples and tribes that you may know one another. Indeed, the noblest of you in the sight of Allah is the most righteous of you. Indeed, Allah is Knowing and Acquainted". Muhammad echoed these egalitarian sentiments, sentiments that clashed with the practices of the pre-Islamic cultures. In a review of Louise Marlow's Hierarchy and Egalitarianism in Islamic Thought, Ismail Poonawala argues that the Arab-Muslim Empire's desire to consolidate power and administer the state led to a deemphasis of the egalitarian teachings in the Qur'an and by the Prophet.
Modern egalitarianism theory
Modern egalitarianism is a theory that rejects the classic definition of egalitarianism as a possible achievement economically, politically, and socially. Modern egalitarianism theory, or new egalitarianism, holds that if everyone had the same opportunity cost, then there would be no comparative advantages and no one would gain from trading with each other. In essence, the immense gains people receive from trading with each other arise because they are unequal in characteristics and talents; these differences may be innate or developed, so that people can gain from trading with each other.
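A hypothetical numerical illustration (the producers and quantities here are invented for the example): suppose that in a day Ann can produce either 4 units of cloth or 2 units of wine, while Ben can produce either 1 unit of cloth or 1 unit of wine. Ann's opportunity cost of a unit of wine is 4/2 = 2 units of cloth; Ben's is only 1 unit of cloth. Because these opportunity costs differ, Ann has the comparative advantage in cloth and Ben in wine, and both gain if Ann specializes in cloth, Ben specializes in wine, and they trade at any rate between 1 and 2 units of cloth per unit of wine. Had their opportunity costs been identical, no mutually advantageous trade would have existed, which is precisely the inequality-dependent gain that modern egalitarianism theory points to.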
Discussion
Alexander Berkman and Thompson et al.
Thompson et al. theorize that any society consisting of only one perspective, be it egalitarian, hierarchist, individualist, fatalist or autonomist, will be inherently unstable: the claim is that an interplay between all these perspectives is required if each perspective is to be fulfilling. Although an individualist, according to cultural theory, is averse towards both principles and groups, individualism is not fulfilling if individual brilliance cannot be recognized by groups, or if individual brilliance cannot be made permanent in the form of principles. Accordingly, they argue that egalitarians have no power except through their presence, unless they (by definition, reluctantly) embrace principles which enable them to cooperate with fatalists and hierarchists. They argue that this means they will also have no individual sense of direction without a group, which could be mitigated by following individuals outside their group, namely autonomists or individualists. Alexander Berkman suggests that "equality does not mean an equal amount but equal opportunity. ... Do not make the mistake of identifying equality in liberty with the forced equality of the convict camp. True anarchist equality implies freedom, not quantity. It does not mean that everyone must eat, drink, or wear the same things, do the same work, or live in the same manner. Far from it: the very reverse. ... Individual needs and tastes differ, as appetites differ. It is an equal opportunity to satisfy them that constitutes true equality. ... Far from leveling, such equality opens the door for the greatest possible variety of activity and development. For human character is diverse."
The cultural theory of risk holds egalitarianism—with fatalism termed as its opposite—as defined by a negative attitude towards rules and principles; and a positive attitude towards group decision-making. The theory distinguishes between hierarchists, who are positive towards both rules and groups; and egalitarians, who are positive towards groups, but negative towards rules. This is by definition a form of anarchist equality as referred to by Berkman. Thus, the fabric of an egalitarian society is held together by cooperation and implicit peer pressure rather than by explicit rules and punishment.
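The resulting grid/group typology can be tabulated as follows (a summary of the attitudes described in this and the preceding paragraph):

                          positive towards groups    negative towards groups
positive towards rules    hierarchist                fatalist
negative towards rules    egalitarian                individualist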
Marxism
Karl Marx and Friedrich Engels believed that an international proletarian revolution would bring about a socialist society which would then eventually give way to a communist stage of social development which would be a classless, stateless, moneyless, humane society erected on common ownership of the means of production and the principle of "From each according to their ability, to each according to their needs". Marxism rejected egalitarianism in the sense of greater equality between classes, clearly distinguishing it from the socialist notion of the abolition of classes based on the division between workers and owners of productive property.
Allen Wood finds that Marx's view of classlessness was not the subordination of society to a universal interest such as a universal notion of equality, but it was about the creation of the conditions that would enable individuals to pursue their true interests and desires, making Marx's notion of communist society radically individualistic. Although his position is often confused or conflated with distributive egalitarianism, in which only the goods and services resulting from production are distributed according to notional equality, Marx eschewed the entire concept of equality as abstract and bourgeois, preferring to focus on more concrete principles such as opposition to exploitation on materialist grounds and economic logic.
Murray Rothbard
In the title essay of his book Egalitarianism as a Revolt Against Nature and Other Essays, Murray Rothbard argued that egalitarian theory always results in a politics of statist control because it is founded on revolt against the ontological structure of reality itself. According to Rothbard, individuals are naturally unequal in their abilities, talents, and characteristics. He believed that this inequality was not only natural but necessary for a functioning society. In his view, people's unique qualities and abilities are what allow them to contribute to society in different ways.
Rothbard argued that egalitarianism was a misguided attempt to impose an artificial equality on individuals, which would ultimately lead to societal breakdown. He believed that attempts to force equality through government policies or other means would stifle individual freedom and prevent people from pursuing their own interests and passions. Furthermore, Rothbard believed that egalitarianism was rooted in envy and resentment towards those who were more successful or talented than others. He saw it as a destructive force that would lead to a culture of mediocrity, where people were discouraged from striving for excellence.
Equity
The Atlas movement defines equitism as the idea that all groups should have equal rights and benefits. The term has been used as the claimed philosophical basis of Telosa, a proposed utopia to be built in the United States by Marc Lore. Social equity is about equality of outcomes for each group, while egalitarianism generally advocates for equality of opportunity, recognizing that a fair society should provide all members with the same opportunities while accepting that different outcomes are expected due to human individuality.
See also
"All men are created equal"
Animal rights
Asset-based egalitarianism
Citizen's dividend
Consociationalism
Deep ecology
Discrimination
Economic inequality
Egalitarian social choice rule
Equal consideration of interests
Equal opportunity
Equality of outcome
Feminism
Gift economy
Inequity aversion
Left-wing politics
Legal status of transgender people
LGBT rights by country or territory
Men's rights movement
Men's liberation movement
Meritocracy
Mutualism
Natural rights and legal rights
Political egalitarianism
One man, one vote
Reciprocal altruism
Redistributive justice
Same-sex marriage
Social dividend
Transfeminism
Universal basic income
References
External links
Internet Encyclopedia of Philosophy
Stanford Encyclopedia of Philosophy
Economic ideologies
Equality rights
Consequentialism
Fairness criteria
Human rights
Political culture
Political ideologies
Social inequality
Social justice
Social theories | 0.76867 | 0.998902 | 0.767826 |
Consequentialism | In moral philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from a consequentialist standpoint, a morally right act (including omission from acting) is one that will produce a good outcome. Consequentialism, along with eudaimonism, falls under the broader category of teleological ethics, a group of views which claim that the moral value of any act consists in its tendency to produce things of intrinsic value. Consequentialists hold in general that an act is right if and only if the act (or in some views, the rule under which it falls) will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative. Different consequentialist theories differ in how they define moral goods, with chief candidates including pleasure, the absence of pain, the satisfaction of one's preferences, and broader notions of the "general good".
Consequentialism is usually contrasted with deontological ethics (or deontology): deontology, in which rules and moral duty are central, derives the rightness or wrongness of one's conduct from the character of the behaviour itself, rather than the outcomes of the conduct. It is also contrasted with both virtue ethics, which focuses on the character of the agent rather than on the nature or consequences of the act (or omission) itself, and pragmatic ethics, which treats morality like science: advancing collectively as a society over the course of many lifetimes, such that any moral criterion is subject to revision.
Some argue that consequentialist theories (such as utilitarianism) and deontological theories (such as Kantian ethics) are not necessarily mutually exclusive. For example, T. M. Scanlon advances the idea that human rights, which are commonly considered a "deontological" concept, can only be justified with reference to the consequences of having those rights. Similarly, Robert Nozick argued for a theory that is mostly consequentialist, but incorporates inviolable "side-constraints" which restrict the sort of actions agents are permitted to do. Derek Parfit argued that, in practice, when understood properly, rule consequentialism, Kantian deontology, and contractualism would all end up prescribing the same behavior.
Etymology
The term consequentialism was coined by G. E. M. Anscombe in her essay "Modern Moral Philosophy" in 1958. However, the meaning of the word has changed over the time since Anscombe used it: in the sense she coined it, she had explicitly placed J.S. Mill in the nonconsequentialist and W.D. Ross in the consequentialist camp, whereas, in the contemporary sense of the word, they would be classified the other way round. This is due to changes in the meaning of the word, not due to changes in perceptions of W.D. Ross's and J.S. Mill's views.
Classification
One common view is to classify consequentialism, together with virtue ethics, under a broader label of "teleological ethics". Proponents of teleological ethics (Greek: telos, 'end, purpose' + logos, 'science') argue that the moral value of any act consists in its tendency to produce things of intrinsic value, meaning that an act is right if and only if it, or the rule under which it falls, produces, will probably produce, or is intended to produce, a greater balance of good over evil than any alternative act. This concept is exemplified by the famous aphorism "the end justifies the means", variously attributed to Machiavelli or Ovid, i.e., if a goal is morally important enough, any method of achieving it is acceptable.
Teleological ethical theories are contrasted with deontological ethical theories, which hold that acts themselves are inherently good or bad, rather than good or bad because of extrinsic factors (such as the act's consequences or the moral character of the person who acts).
Forms of consequentialism
Utilitarianism
In summary, Jeremy Bentham states that people are driven by their interests and their fears, but their interests take precedence over their fears; their interests are carried out in accordance with how people view the consequences that might be involved with their interests. Happiness, in this account, is defined as the maximization of pleasure and the minimization of pain. It can be argued that the existence of phenomenal consciousness and "qualia" is required for the experience of pleasure or pain to have an ethical significance.
Historically, hedonistic utilitarianism is the paradigmatic example of a consequentialist moral theory. This form of utilitarianism holds that what matters is the aggregate happiness; the happiness of everyone, and not the happiness of any particular person. John Stuart Mill, in his exposition of hedonistic utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures. However, some contemporary utilitarians, such as Peter Singer, are concerned with maximizing the satisfaction of preferences, hence preference utilitarianism. Other contemporary forms of utilitarianism mirror the forms of consequentialism outlined below.
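As a schematic formalization (a common textbook rendering, not a formula taken from the authors cited here): writing $W_i(a)$ for the welfare that person $i$ receives if act $a$ is performed, classical act utilitarianism deems an act $a^*$ right just in case

$a^* \in \arg\max_{a \in A} \sum_i W_i(a)$

where $A$ is the set of acts available to the agent. The variants above differ over what $W_i$ measures: pleasure minus pain for hedonistic utilitarianism, degree of preference satisfaction for preference utilitarianism.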
Rule consequentialism
In general, consequentialist theories focus on actions. However, this need not be the case. Rule consequentialism is a theory that is sometimes seen as an attempt to reconcile consequentialism with deontology, or rules-based ethics—and in some cases, this is stated as a criticism of rule consequentialism. Like deontology, rule consequentialism holds that moral behavior involves following certain rules. However, rule consequentialism chooses rules based on the consequences that the selection of those rules has. Rule consequentialism exists in the forms of rule utilitarianism and rule egoism.
Various theorists are split as to whether the rules are the only determinant of moral behavior or not. For example, Robert Nozick held that a certain set of minimal rules, which he calls "side-constraints," are necessary to ensure appropriate actions. There are also differences as to how absolute these moral rules are. Thus, while Nozick's side-constraints are absolute restrictions on behavior, Amartya Sen proposes a theory that recognizes the importance of certain rules, but these rules are not absolute. That is, they may be violated if strict adherence to the rule would lead to much more undesirable consequences.
One of the most common objections to rule-consequentialism is that it is incoherent, because it is based on the consequentialist principle that what we should be concerned with is maximizing the good, but then it tells us not to act to maximize the good, but to follow rules (even in cases where we know that breaking the rule could produce better results).
In Ideal Code, Real World, Brad Hooker avoids this objection by not basing his form of rule-consequentialism on the ideal of maximizing the good. He writes:
[T]he best argument for rule-consequentialism is not that it derives from an overarching commitment to maximise the good. The best argument for rule-consequentialism is that it does a better job than its rivals of matching and tying together our moral convictions, as well as offering us help with our moral disagreements and uncertainties.
Derek Parfit described Hooker's book as the "best statement and defence, so far, of one of the most important moral theories."
State consequentialism
State consequentialism, also known as Mohist consequentialism, is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the welfare of a state. According to the Stanford Encyclopedia of Philosophy, Mohist consequentialism, dating back to the 5th century BCE, is the "world's earliest form of consequentialism, a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare."
Unlike utilitarianism, which views utility as the sole moral good, "the basic goods in Mohist consequentialist thinking are...order, material wealth, and increase in population." The word "order" refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability; the "material wealth" of Mohist consequentialism refers to basic needs, like shelter and clothing; and "increase in population" reflects the fact that in Mozi's time war and famine were common, so population growth was seen as a moral necessity for a harmonious society. In The Cambridge History of Ancient China, Stanford sinologist David Shepherd Nivison writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth...if people have plenty, they would be good, filial, kind, and so on unproblematically."
The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven." In contrast to Jeremy Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic: the importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain. The term state consequentialism has also been applied to the political philosophy of the Confucian philosopher Xunzi. On the other hand, the "legalist" Han Fei "is motivated almost totally from the ruler's point of view."
Ethical egoism
Ethical egoism can be understood as a consequentialist theory according to which the consequences for the individual agent are taken to matter more than any other result. Thus, egoism will prescribe actions that may be beneficial, detrimental, or neutral to the welfare of others. Some, like Henry Sidgwick, argue that a certain degree of egoism promotes the general welfare of society for two reasons: because individuals know how to please themselves best, and because if everyone were an austere altruist then general welfare would inevitably decrease.
Ethical altruism
Ethical altruism can be seen as a consequentialist theory which prescribes that an individual take actions that have the best consequences for everyone, not necessarily including themselves (similar to selflessness). This was advocated by Auguste Comte, who coined the term altruism, and whose ethics can be summed up in the phrase "Live for others."
Two-level consequentialism
The two-level approach involves engaging in critical reasoning and considering all the possible ramifications of one's actions before making an ethical decision, but reverting to generally reliable moral rules when one is not in a position to stand back and examine the dilemma as a whole. In practice, this equates to adhering to rule consequentialism when one can only reason on an intuitive level, and to act consequentialism when in a position to stand back and reason on a more critical level.
This position can be described as a reconciliation between act consequentialism—in which the morality of an action is determined by that action's effects—and rule consequentialism—in which moral behavior is derived from following rules that lead to positive outcomes.
The two-level approach to consequentialism is most often associated with R. M. Hare and Peter Singer.
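The two-level structure amounts to a simple decision procedure: reason directly about consequences when circumstances permit careful reflection, and otherwise fall back on generally reliable rules. The following Python sketch illustrates only this structure; the rules, actions, and goodness scores are hypothetical placeholders, not part of Hare's or Singer's actual formulations.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    expected_good: float            # hypothetical estimate of overall good
    violates: Optional[str] = None  # a rule this action would break, if any

# Rules assumed to be generally reliable at producing good outcomes.
RELIABLE_RULES = {"keep promises", "do not lie", "help those in need"}

def choose(actions: List[Action], can_reason_critically: bool) -> Optional[Action]:
    if can_reason_critically:
        # Critical level: weigh foreseeable consequences directly
        # (act consequentialism).
        return max(actions, key=lambda a: a.expected_good)
    # Intuitive level: the agent cannot reliably estimate consequences,
    # so any rule-permitted action is acceptable (rule consequentialism).
    permitted = [a for a in actions if a.violates not in RELIABLE_RULES]
    return permitted[0] if permitted else None

options = [
    Action("lie to spare someone's feelings", 3.0, violates="do not lie"),
    Action("tell the truth gently", 2.0),
]
print(choose(options, can_reason_critically=True).name)   # lie to spare someone's feelings
print(choose(options, can_reason_critically=False).name)  # tell the truth gently

The two calls come apart exactly where the two levels do: under critical reflection the lie's better consequences win out, while at the intuitive level the agent defers to the rule against lying.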
Motive consequentialism
Another consequentialist view is motive consequentialism, which looks at whether the state of affairs that results from the motive behind choosing an action is better than, or at least as good as, each alternative state of affairs that would have resulted from alternative actions. This version gives relevance to the motive of an act and links it to its consequences. An act can therefore not be wrong if the decision to act was based on a right motive. A possible inference is that one cannot be blamed for mistaken judgments if the motivation was to do good.
Negative consequentialism
Most consequentialist theories focus on promoting some sort of good consequences. However, negative utilitarianism lays out a consequentialist theory that focuses solely on minimizing bad consequences.
One major difference between these two approaches is the agent's responsibility. Positive consequentialism demands that we bring about good states of affairs, whereas negative consequentialism requires that we avoid bad ones. Stronger versions of negative consequentialism will require active intervention to prevent bad and ameliorate existing harm. In weaker versions, simple forbearance from acts tending to harm others is sufficient. An example of this is the slippery-slope argument, which encourages others to avoid a specified act on the grounds that it may ultimately lead to undesirable consequences.
Often "negative" consequentialist theories assert that reducing suffering is more important than increasing pleasure. Karl Popper, for example, claimed that "from the moral point of view, pain cannot be outweighed by pleasure." (While Popper is not a consequentialist per se, this is taken as a classic statement of negative utilitarianism.) When considering a theory of justice, negative consequentialists may use a statewide or global-reaching principle: the reduction of suffering (for the disadvantaged) is more valuable than increased pleasure (for the affluent or luxurious).
Acts and omissions
Since pure consequentialism holds that an action is to be judged solely by its result, most consequentialist theories hold that a deliberate action is no different from a deliberate decision not to act. This contrasts with the "acts and omissions doctrine", which is upheld by some medical ethicists and some religions: it asserts there is a significant moral distinction between acts and deliberate non-actions which lead to the same outcome. This contrast is brought out in issues such as voluntary euthanasia.
Actualism and possibilism
According to consequentialism, the normative status of an action depends on its consequences. The consequences of an agent's actions may include other actions by that agent. Actualism and possibilism disagree on how later possible actions bear on the normative status of the agent's current action. Actualists hold that, in assessing the value of an alternative, only what the agent would actually do later is relevant. Possibilists hold that we should also take into account what the agent could do, even if she would not do it.
For example, assume that Gifre has the choice between two alternatives, eating a cookie or not eating anything. Having eaten the first cookie, Gifre could stop eating cookies, which is the best alternative. But after having tasted one cookie, Gifre would freely decide to continue eating cookies until the whole bag is finished, which would result in a terrible stomach ache and would be the worst alternative. Not eating any cookies at all, on the other hand, would be the second-best alternative. Now the question is: should Gifre eat the first cookie or not? Actualists are only concerned with the actual consequences. According to them, Gifre should not eat any cookies at all since it is better than the alternative leading to a stomach ache. Possibilists, however, contend that the best possible course of action involves eating the first cookie and this is therefore what Gifre should do.
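The disagreement can be made concrete by scoring the three possible outcomes. The following Python sketch is a minimal illustration of the Gifre example; the numbers are arbitrary, and only their ordering (one cookie > no cookies > whole bag) matters.

# Goodness of each complete course of action (arbitrary illustrative values).
OUTCOMES = {
    "eat one cookie, then stop": 10,   # best possible sequence
    "eat no cookies at all": 5,        # second best
    "eat the whole bag": -10,          # stomach ache: worst
}

# What Gifre WOULD actually do after eating the first cookie.
ACTUAL_CONTINUATION = "eat the whole bag"

def actualist_value(first_choice: str) -> int:
    # Value an option by the sequence the agent would actually complete.
    if first_choice == "eat a cookie":
        return OUTCOMES[ACTUAL_CONTINUATION]
    return OUTCOMES["eat no cookies at all"]

def possibilist_value(first_choice: str) -> int:
    # Value an option by the best sequence the agent could complete.
    if first_choice == "eat a cookie":
        return max(OUTCOMES["eat one cookie, then stop"],
                   OUTCOMES[ACTUAL_CONTINUATION])
    return OUTCOMES["eat no cookies at all"]

for choice in ("eat a cookie", "refrain"):
    print(choice, actualist_value(choice), possibilist_value(choice))

The actualist ranks refraining higher (5 against −10), while the possibilist ranks eating the first cookie higher (10 against 5), which is exactly the disagreement described above.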
One counterintuitive consequence of actualism is that agents can avoid moral obligations simply by having an imperfect moral character. For example, a lazy person might justify rejecting a request to help a friend by arguing that, due to her lazy character, she would not have done the work anyway, even if she had accepted the request. By rejecting the offer right away, she managed at least not to waste anyone's time. Actualists might even consider her behavior praiseworthy since she did what, according to actualism, she ought to have done. This seems to be a very easy way to "get off the hook" that is avoided by possibilism. But possibilism has to face the objection that in some cases it sanctions and even recommends what actually leads to the worst outcome.
Douglas W. Portmore has suggested that these and other problems of actualism and possibilism can be avoided by constraining what counts as a genuine alternative for the agent. On his view, it is a requirement that the agent has rational control over the event in question. For example, eating only one cookie and stopping afterward is an option for Gifre only if she has the rational capacity to repress her temptation to continue eating. If the temptation is irrepressible, then this course of action is not considered an option and is therefore not relevant when assessing what the best alternative is. Portmore suggests that, given this adjustment, we should prefer a view closely associated with possibilism called maximalism.
Issues
Action guidance
One important characteristic of many normative moral theories such as consequentialism is the ability to produce practical moral judgements. At the very least, any moral theory needs to define the standpoint from which the goodness of the consequences are to be determined. What is primarily at stake here is the responsibility of the agent.
The ideal observer
One common tactic among consequentialists, particularly those committed to an altruistic (selfless) account of consequentialism, is to employ an ideal, neutral observer from which moral judgements can be made. John Rawls, a critic of utilitarianism, argues that utilitarianism, in common with other forms of consequentialism, relies on the perspective of such an ideal observer. The particular characteristics of this ideal observer can vary from an omniscient observer, who would grasp all the consequences of any action, to an ideally informed observer, who knows as much as could reasonably be expected, but not necessarily all the circumstances or all the possible consequences. Consequentialist theories that adopt this paradigm hold that right action is the action that will bring about the best consequences from this ideal observer's perspective.
The real observer
In practice, it is very difficult, and at times arguably impossible, to adopt the point of view of an ideal observer. Individual moral agents do not know everything about their particular situations, and thus do not know all the possible consequences of their potential actions. For this reason, some theorists have argued that consequentialist theories can only require agents to choose the best action in line with what they know about the situation. However, if this approach is naïvely adopted, then moral agents who, for example, recklessly fail to reflect on their situation, and act in a way that brings about terrible results, could be said to be acting in a morally justifiable way. Acting in a situation without first informing oneself of the circumstances of the situation can lead to even the most well-intended actions yielding miserable consequences. As a result, it could be argued that there is a moral imperative for agents to inform themselves as much as possible about a situation before judging the appropriate course of action. This imperative, of course, is derived from consequential thinking: a better-informed agent is able to bring about better consequences.
Consequences for whom
Moral action always has consequences for certain people or things. Varieties of consequentialism can be differentiated by the beneficiary of the good consequences. That is, one might ask "Consequences for whom?"
Agent-focused or agent-neutral
A fundamental distinction can be drawn between theories which require that agents act for ends perhaps disconnected from their own interests and drives, and theories which permit that agents act for ends in which they have some personal interest or motivation. These are called "agent-neutral" and "agent-focused" theories respectively.
Agent-neutral consequentialism ignores the specific value a state of affairs has for any particular agent. Thus, in an agent-neutral theory, an actor's personal goals do not count any more than anyone else's goals in evaluating what action the actor should take. Agent-focused consequentialism, on the other hand, focuses on the particular needs of the moral agent. Thus, in an agent-focused account, such as one that Peter Railton outlines, the agent might be concerned with the general welfare, but the agent is more concerned with the immediate welfare of herself and her friends and family.
These two approaches could be reconciled by acknowledging the tension between an agent's interests as an individual and as a member of various groups, and seeking to somehow optimize among all of these interests. For example, it may be meaningful to speak of an action as being good for someone as an individual, but bad for them as a citizen of their town.
Human-centered?
Many consequentialist theories may seem primarily concerned with human beings and their relationships with other human beings. However, some philosophers argue that we should not limit our ethical consideration to the interests of human beings alone. Jeremy Bentham, who is regarded as the founder of utilitarianism, argues that animals can experience pleasure and pain, thus demanding that 'non-human animals' should be a serious object of moral concern.
More recently, Peter Singer has argued that it is unreasonable that we do not give equal consideration to the interests of animals as to those of human beings when we choose the way we are to treat them. Such equal consideration does not necessarily imply identical treatment of humans and non-humans, any more than it necessarily implies identical treatment of all humans.
Value of consequences
One way to divide various consequentialisms is by the types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in pleasure, and the best action is one that results in the most pleasure for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral "pleasure". Other theories adopt a package of several goods, all to be promoted equally. As the consequentialist approach contains an inherent assumption that the outcomes of a moral decision can be quantified in terms of "goodness" or "badness," or at least put in order of increasing preference, it is an especially suited moral theory for a probabilistic and decision theoretical approach.
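As a minimal sketch of that decision-theoretic framing: if each available action is assigned a probability distribution over outcomes and each outcome a goodness score, the consequentialist recommendation is the action with the highest expected goodness. The actions, probabilities, and scores in the Python sketch below are hypothetical illustrations.

# Each action maps to (probability, goodness) pairs over its possible outcomes.
actions = {
    "donate to charity A": [(0.9, 8.0), (0.1, 0.0)],
    "donate to charity B": [(0.5, 20.0), (0.5, -2.0)],
    "do nothing": [(1.0, 0.0)],
}

def expected_goodness(outcomes):
    # Probabilities over an action's outcomes must sum to one.
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * g for p, g in outcomes)

best = max(actions, key=lambda name: expected_goodness(actions[name]))
print(best)  # "donate to charity B": 0.5*20 + 0.5*(-2) = 9.0, beating 7.2 and 0.0

Note that this framing presupposes exactly what the paragraph above flags: that goodness is quantifiable on a single scale, or at least that outcomes can be totally ordered.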
Virtue ethics
Consequentialism can also be contrasted with aretaic moral theories such as virtue ethics. Whereas consequentialist theories posit that consequences of action should be the primary focus of our thinking about ethics, virtue ethics insists that it is the character rather than the consequences of actions that should be the focal point. Some virtue ethicists hold that consequentialist theories totally disregard the development and importance of moral character. For example, Philippa Foot argues that consequences in themselves have no ethical content, unless it has been provided by a virtue such as benevolence.
However, consequentialism and virtue ethics need not be entirely antagonistic. Iain King has developed an approach that reconciles the two schools. Other consequentialists consider effects on the character of people involved in an action when assessing consequence. Similarly, a consequentialist theory may aim at the maximization of a particular virtue or set of virtues. Finally, following Foot's lead, one might adopt a sort of consequentialism that argues that virtuous activity ultimately produces the best consequences.
Ultimate end
The ultimate end is a concept in the moral philosophy of Max Weber: an agent who acts from an "ethic of ultimate ends" acts out of faithful conviction in a value, rather than from a rational weighing of consequences, a stance Weber contrasts with an "ethic of responsibility".
Criticisms
G. E. M. Anscombe objects to the consequentialism of Sidgwick on the grounds that the moral worth of an action is premised on the predictive capabilities of the individual, relieving them of the responsibility for the "badness" of an act should they "make out a case for not having foreseen" negative consequences.
Immanuel Kant makes a similar argument against consequentialism in the case of the inquiring murderer. The example asks whether it would be right to give a false statement to an inquiring murderer in order to misdirect him away from his intended victim. Kant argues, in On a Supposed Right to Tell Lies from Benevolent Motives, that lying from "benevolent motives", here the motive to maximize good consequences by protecting the intended victim, makes the liar responsible for the consequences of the act. For example, by misdirecting the inquiring murderer away from where one believed the intended victim to be, one might in fact direct the murderer to the intended victim. That such an act is immoral mirrors Anscombe's objection to Sidgwick: his consequentialism would problematically absolve the consequentialist of moral responsibility when the consequentialist fails to foresee the true consequences of an act.
The future amplification of the effects of small decisions is an important factor that makes it more difficult to predict the ethical value of consequences, even though most would agree that only predictable consequences carry moral responsibility.
Bernard Williams has argued that consequentialism is alienating because it requires moral agents to put too much distance between themselves and their own projects and commitments. Williams argues that consequentialism requires moral agents to take a strictly impersonal view of all actions, since it is only the consequences, and not who produces them, that are said to matter. Williams argues that this demands too much of moral agents—since (he claims) consequentialism demands that they be willing to sacrifice any and all personal projects and commitments in any given circumstance in order to pursue the most beneficent course of action possible. He argues further that consequentialism fails to make sense of intuitions that it can matter whether or not someone is personally the author of a particular consequence. For example, that participating in a crime can matter, even if the crime would have been committed anyway, or would even have been worse, without the agent's participation.
Some consequentialists—most notably Peter Railton—have attempted to develop a form of consequentialism that acknowledges and avoids the objections raised by Williams. Railton argues that Williams's criticisms can be avoided by adopting a form of consequentialism in which moral decisions are to be determined by the sort of life that they express. On his account, the agent should choose the sort of life that will, on the whole, produce the best overall effects.
Notable consequentialists
R. M. Adams (born 1937)
Jonathan Baron (born 1944)
Jeremy Bentham (1748–1832)
Richard B. Brandt (1910–1997)
John Dewey (1859–1952)
Julia Driver (born 1961)
Milton Friedman (1912–2006)
David Friedman (born 1945)
William Godwin (1756–1836)
R. M. Hare (1919–2002)
John Harsanyi (1920–2000)
Brad Hooker (born 1957)
Francis Hutcheson (1694–1746)
Shelly Kagan (born 1963)
Niccolò Machiavelli (1469–1527)
James Mill (1773–1836)
John Stuart Mill (1806–1873)
G. E. Moore (1873–1958)
Mozi (470–391 BCE)
Philip Pettit (born 1945)
Peter Railton (born 1950)
Henry Sidgwick (1838–1900)
Peter Singer (born 1946)
J. J. C. Smart (1920–2012)
See also
Charvaka
Demandingness objection
Dharma-yuddha
Effective altruism
Instrumental and intrinsic value
Lesser of two evils principle
Mental reservation
Mohism
Omission bias
Principle of double effect
Situational ethics
Utilitarianism
Welfarism
References
Further reading
External links
University of Texas. Ethics Unwrapped – Consequentialism | 0.769568 | 0.997688 | 0.767788 |
Pluralism | Pluralism in general denotes a diversity of views or stands, rather than a single approach or method.
Pluralism or pluralist may refer more specifically to:
Politics and law
Pluralism (political philosophy), the acknowledgement of a diversity of political systems
Pluralism (political theory), belief that there should be diverse and competing centres of power in society
Legal pluralism, the existence of differing legal systems in a population or area
Pluralist democracy, a political system with more than one center of power
Philosophy
Pluralism (philosophy), a doctrine according to which many basic substances make up reality
Pluralist school, a Greek school of pre-Socratic philosophers
Epistemological pluralism or methodological pluralism, the view that some phenomena require multiple methods to account for their nature
Value pluralism, the idea that several values may be equally correct and yet in conflict with each other
Religion
Religious pluralism, the acceptance of all religious paths as equally valid, promoting coexistence
Holding multiple ecclesiastical offices; see "Pluralism" at Benefice
Pluralism Project, a Harvard-affiliated project on religious diversity in the United States
Other uses
Cosmic pluralism, the belief in numerous other worlds beyond the Earth, which may possess the conditions suitable for life
Cultural pluralism, when small groups within a larger society maintain their unique cultural identities
Media pluralism, the representation of different cultural groups and political opinions in the media
Pluralist commonwealth, a systemic model of wealth democratization
Pluralism in economics, a campaign to enrich the academic discipline of economics
See also
Plurality (disambiguation)
Journal of Legal Pluralism, a peer-reviewed academic journal that focuses on legal pluralism
Global Centre for Pluralism, an international centre for research of pluralist societies
Multiculturalism, the existence of multiple cultural traditions within a single country
Postmodernism, a broad movement in the late-20th century that is skeptical toward grand narratives or ideologies | 0.779489 | 0.984738 | 0.767593 |
Theoretical philosophy | The modern division of philosophy into theoretical philosophy and practical philosophy has its origin in Aristotle's categories of natural philosophy and moral philosophy. The one has theory for its object, and the other practice.
Overview
In Denmark, Finland, Germany, the Netherlands, Sweden, and the United States, courses in theoretical and practical philosophy are taught separately, and are separate degrees. Other countries may use a similar scheme—some Scottish universities, for example, divide philosophy into logic, metaphysics, and ethics—but in most universities around the world philosophy is taught as a single subject. There is also a unified philosophy subject in some Swedish universities, such as Södertörns Högskola.
Theoretical philosophy is sometimes confused with analytic philosophy, but the latter is a philosophical movement, embracing certain ideas and methods but dealing with all philosophical subject matters, while the former is a way of sorting philosophical questions into two different categories in the context of a curriculum.
Subjects of theoretical philosophy
Epistemology
Logic
Philosophy of mathematics
Philosophy of science
Philosophy of language
Philosophy of mind
Metaphysics
Ontology
References
Indian philosophy | Indian philosophy consists of philosophical traditions of the Indian subcontinent. The philosophies are often called darśana meaning, "to see" or "looking at." Ānvīkṣikī means “critical inquiry” or “investigation." Unlike darśana, ānvīkṣikī was used to refer to Indian philosophies by classical Indian philosophers, such as Chanakya in the Arthaśāstra.
A traditional Hindu classification divides schools of philosophy into āstika and nāstika, depending on one of three alternative criteria: whether the school accepts the Vedas as a valid source of knowledge; whether it believes in the premises of Brahman and Atman; and whether it believes in an afterlife and Devas (though there are exceptions to the latter two criteria: Mimamsa and Samkhya, respectively).
There are six major (āstika) schools of Vedic philosophy—Nyaya, Vaisheshika, Samkhya, Yoga, Mīmāṃsā and Vedanta—and five major non-Vedic or heterodox (nāstika or sramanic) schools—Jain, Buddhist, Ajivika, Ajñana, and Charvaka. The āstika group embraces the Vedas as an essential source of its foundations, while the nāstika group does not. However, there are other methods of classification; Vidyaranya for instance identifies sixteen schools of Indian philosophy by including those that belong to the Śaiva and Raseśvara traditions.
The main schools of Indian philosophy were formalised and recognised chiefly between 500 BCE and the late centuries of the Common Era. Some schools like Jainism, Buddhism, Yoga, Śaiva and Vedanta survived, but others, like Ajñana, Charvaka and Ājīvika did not.
Ancient and medieval era texts of Indian philosophies include extensive discussions on ontology (metaphysics, Brahman-Atman, Sunyata-Anatta), reliable means of knowledge (epistemology, Pramanas), value system (axiology) and other topics.
Common themes
Indian philosophies share many concepts such as dharma, karma, samsara, dukkha, renunciation, and meditation, with almost all of them focusing on the ultimate goal of liberating the individual from dukkha and samsara through a diverse range of spiritual practices (moksha, nirvana). While many sutra texts explicitly state that the work leads to moksha, Indian philosophy is not exclusively concerned with moksha.
They differ in their assumptions about the nature of existence as well as the specifics of the path to the ultimate liberation, resulting in numerous schools that disagreed with each other. Their ancient doctrines span the diverse range of philosophies found in other ancient cultures.
Hindu traditions
Some of the earliest surviving Indian philosophical texts are the Upanishads of the later Vedic period (1000–500 BCE), which are considered to preserve the ideas of Brahmanism. Indian philosophical traditions are commonly grouped according to their relationship to the Vedas and the ideas contained in them. Jainism and Buddhism originated at the end of the Vedic period, while the various traditions grouped under Hinduism mostly emerged after the Vedic period as independent traditions.
Hindu philosophy classifies Indian philosophical traditions as either orthodox (āstika) or heterodox (nāstika), depending on whether they accept the authority of the Vedas and the theories of brahman and ātman found therein. Besides the orthodox schools, the "heterodox" schools that do not accept the authority of the Vedas include Buddhism, Jainism, Ajivika and Charvaka.
This orthodox-heterodox terminology is a scholarly construct found in later Indian sources (and in Western sources on Indian thought) and not all of these sources agree on which system should be considered "orthodox". As such there are various heresiological systems in Indian philosophy. Some traditions see "orthodox" as a synonym for "theism" and "heterodox" as a synonym for atheism. Other Hindu sources argue that certain systems of Shaiva tantra should be considered heterodox due to its deviations from the Vedic tradition.
One of the most common list of Hindu orthodox schools is the "six philosophies" (ṣaḍ-darśana), which are:
Sāṃkhya (school of "Enumeration"), a philosophical tradition which regards the universe as consisting of two independent realities: puruṣa (the perceiving consciousness) and prakṛti (perceived reality, including mind, perception, kleshas, and matter) and which describes a soteriology based on this duality, in which puruṣa is discerned and disentangled from the impurities of prakṛti. It has included atheistic authors as well as some theistic thinkers, and forms the basis of much of subsequent Indian philosophy.
Yoga, a school similar to Sāṃkhya (or perhaps even a branch of it) which accepts a personal god and focuses on yogic practice.
Nyāya (the "Logic" school), a philosophy which focuses on logic and epistemology. It accepts four kinds of pramāṇa (valid means of knowledge): (1) perception, (2) inference, (3) comparison or analogy, and (4) word or testimony. Nyāya defends a form of direct realism and a theory of substances (dravya).
Vaiśeṣika (the school of "Characteristics"), closely related to the Nyāya school, this tradition focused on the metaphysics of substance, and on defending a theory of atoms. Unlike Nyāya, they only accept two pramanas: perception and inference.
Pūrva-Mīmāṃsā (the school of "Prior Investigation" [of the Vedas]), a school which focuses on exegesis of the Vedas, philology and the interpretation of Vedic ritual.
Vedānta ("the end of the Vedas", also called Uttara Mīmāṃsā), focuses on interpreting the philosophy of the Upanishads, particularly the soteriological and metaphysical ideas relating to Atman and Brahman.
Sometimes these six are coupled into three groups for both historical and conceptual reasons: Nyāya-Vaiśeṣika, Sāṃkhya-Yoga, and Mīmāṃsā-Vedānta. Each tradition also included different currents and sub-schools. For example, Vedānta was divided among the sub-schools of Advaita (non-dualism), Visishtadvaita (qualified non-dualism), Dvaita (dualism), Dvaitadvaita (dualistic non-dualism), Suddhadvaita (pure non-dualism), and Achintya Bheda Abheda (inconceivable oneness and difference).
The doctrines of the Vedas and Upanishads were interpreted differently by these six schools, with varying degrees of overlap. They have been described as a "collection of philosophical views that share a textual connection". They also reflect a tolerance for a diversity of philosophical interpretations within Hinduism while sharing the same foundation.
Hindu philosophers of the orthodox schools developed systems of epistemology (pramana) and investigated topics such as metaphysics, ethics, psychology (guṇa), hermeneutics, and soteriology within the framework of the Vedic knowledge, while presenting a diverse collection of interpretations. The commonly named six orthodox schools were the competing philosophical traditions of what has been called the "Hindu synthesis" of classical Hinduism.
All these systems are not the only "orthodox" systems of philosophy, as numerous sub-schools developed throughout the history of Hindu thought. They are however the most well known Hindu philosophical traditions.
In addition to the six systems, the Hindu philosopher Vidyāraṇya (ca. 1374–1380) also includes several further Hindu philosophical systems in his Sarva-darśana-saṃgraha (A Compendium of all the Philosophical Systems):
Paśupata, a school of Shaivism founded by Nakulisa
Shaiva Siddhantha, a theistic and dualistic school of Shaivism, which is influenced by Samkhya, and expands the Samkhya system further.
Pratyabhijña (the school of "Recognition"), which defends an idealistic monism and part of the Kashmir Shaivism tradition of Tantric Shaivism
Pāṇini Darśana, a tradition focusing on Sanskrit linguistics and grammar which also developed the theory of sphoṭavāda under Bhartṛhari, a theory which places speech and sound at the center of its metaphysics.
Raseśvara, an alchemical school which advocated the use of mercury as a way to attain enlightenment.
Śramaṇic traditions
Several non-Vedic traditions of thought also flourished in ancient India and they developed their own philosophical systems. The Śramaṇa movement included various traditions which did not accept the Brahmanical religion of the Vedas. These non-Vedic schools gave rise to a diverse range of ideas about topics like the atman, atomism, ethics, materialism, atheism, agnosticism, free will, asceticism, family life, ahimsa (non-violence) and vegetarianism. Notable philosophies that arose from the Śramaṇa movement were Jainism, early Buddhism, Charvaka, Ajñana and Ājīvika.
Indian Śramaṇa movements became prominent in the 5th and 4th centuries BCE, and even more so during the Mauryan period (c. 322–184 BCE). Jainism and Buddhism were especially influential. These traditions influenced all later forms of Indian philosophy who either adopted some of their ideas or reacted against them.
Ajñana philosophy
Ajñana was one of the nāstika or "heterodox" schools of ancient Indian philosophy, and the ancient school of radical Indian skepticism. It was a Śramaṇa movement and a major rival of early Buddhism and Jainism. Their ideas are recorded in Buddhist and Jain texts. They held that it was impossible to obtain knowledge of a metaphysical nature or to ascertain the truth value of philosophical propositions; and even if knowledge was possible, it was useless and disadvantageous for final salvation. They were sophists who specialised in refutation without propagating any positive doctrine of their own.
Jain philosophy
Jain philosophy is the oldest Indian philosophy that completely separates body (matter) from soul (consciousness). Jainism was revived and re-established after Mahavira, the last and 24th tirthankara, synthesised and revived the philosophies and promulgations of the ancient Śramaṇic traditions, which Jain tradition holds were laid down by the first tirthankara, Rishabhanatha, millions of years ago. According to Dundas, historians outside the Jain tradition date Mahavira as roughly contemporaneous with the Buddha in the 5th century BCE and, based on the c. 250-year gap, place the historical Parshvanatha in the 8th or 7th century BCE.
Jainism is a Śramaṇic religion and rejected the authority of the Vedas. However, like all Indian religions, it shares the core concepts such as karma, ethical living, rebirth, samsara and moksha. Jainism places strong emphasis on asceticism, ahimsa (non-violence) and anekantavada (relativity of viewpoints) as a means of spiritual liberation, ideas that influenced other Indian traditions.
Jainism strongly upholds the individualistic nature of the soul and personal responsibility for one's decisions, holding that self-reliance and individual effort alone are responsible for one's liberation. According to Jain philosophy, the world (saṃsāra) is full of hiṃsā (violence). Therefore, one should direct all one's efforts toward attainment of the Ratnatraya: Samyak Darshana (right perception), Samyak Jnana (right knowledge) and Samyak Charitra (right conduct), the key requisites for attaining liberation.
Buddhist philosophy
Buddhist philosophy refers to several traditions which can be traced back to the teachings of Siddhartha Gautama, the Buddha ("awakened one"). Buddhism is a Śramaṇa religion, but it contains novel ideas not found or accepted by other Śramaṇa religions, such as the Buddhist doctrine of not-self (anatta). Buddhist thought is also influenced by the thought of the Upanishads.
Buddhism and Hinduism mutually influenced each other and shared many concepts, however it is now difficult to identify and describe these influences. Buddhism rejected the Vedic concepts of Brahman (ultimate reality) and Atman (soul, self) at the foundation of Hindu philosophies.
Buddhism shares many philosophical views with other Indian systems, such as belief in karma – a cause-and-effect relationship, samsara – ideas about cyclic afterlife and rebirth, dharma – ideas about ethics, duties and values, impermanence of all material things and of body, and possibility of spiritual liberation (nirvana or moksha). A major departure from Hindu and Jain philosophy is the Buddhist rejection of an eternal soul (atman) in favour of anatta (non-Self). After the death of the Buddha, several competing philosophical systems termed Abhidharma began to emerge as ways to systematize Buddhist philosophy.
Schools of thought
The main traditions of Buddhist philosophy in India (from 300 BCE to 1000 CE) can be divided into Mahayana schools and non-Mahayana schools (sometimes called Śrāvakayāna schools, Nikaya Buddhism, "Mainstream" Buddhism or Hinayana, "inferior" or "lesser" vehicle, a term used only in Mahayana to refer to non-Mahayana traditions). The Mahayana schools accepted the Mahayana sutras and studied the works of Mahayana philosophers like Nagarjuna. The non-Mahayana schools drew their philosophical doctrines from the Tripitaka and on the Abhidharma treatises.
Śrāvakayāna schools (non-Mahayana):
The Mahāsāṃghika ("Great Community") tradition (which included numerous sub-schools, all are now extinct). A key doctrine of this tradition was the supramundane and transcendent nature of the Buddha (lokottaravada).
The schools of the Sthavira ("Elders") tradition:
Vaibhāṣika ("Commentators") also known as the Sarvāstivāda-Vaibhāśika, was an Abhidharma tradition that composed the "Great Commentary" (Mahāvibhāṣa). They were known for their defense of the doctrine of "sarvāstitva" (all exists), which is a form of eternalism regarding the philosophy of time. They also supported direct realism and a theory of substances (svabhāva).
Sautrāntika ("Those who uphold the sutras"), a tradition which did not see the northern Abhidharma as authoritative, and instead focused on the Buddhist sutras. They disagreed with the Vaibhāṣika on several key points, including their eternalistic theory of time, their direct realism and their realist theory of nirvana.
Pudgalavāda ("Personalists"), which were known for their controversial theory of the "person" (pudgala) which is what undergoes rebirth and attain awakening. They are now extinct.
Vibhajyavāda ("The Analysts"), a widespread tradition which reached Kashmir, South India and Sri Lanka. A part of this school has survived into the modern era as the Southeast Asian Theravada tradition. Their orthodox positions can be found in the Kathavatthu. They rejected the views of the Pudgalavāda and of the Vaibhāṣika among others.
Mahāyāna traditions:
The Mahāyāna ("Great Vehicle") movement (c. 1st century BCE onwards) included new ideas and scriptures (Mahayana sutras). These philosophical traditions differ significantly from other schools of Buddhism, and include metaphysical doctrines which are not accepted by the other Buddhist traditions. Mahayana thought focuses on the universal altruistic ideal of the bodhisattva, a being who is on the path to Buddhahood for the sake of all living beings. It also defends the doctrine that there are limitless number of Buddhas throughout limitless numbers of universes. These Indian traditions are the main source of modern Tibetan Buddhism and of modern East Asian Buddhism.
The main Indian Mahayana schools of philosophy are:
Madhyamaka ("Middle way" or "Centrism") founded by Nagarjuna. Also known as Śūnyavāda (the emptiness doctrine) and Niḥsvabhāvavāda (the no svabhāva doctrine), this tradition focuses on the idea that all phenomena are empty of any essence or substance (svabhāva).
Yogācāra ("Yoga-praxis"), an idealistic school which held that only consciousness exists, and thus was also known as Vijñānavāda (the doctrine of consciousness).
The Dignāga-Dharmakīrti tradition is an influential school of thought which focused on epistemology, or pramāṇa ('means of knowledge'). They generally followed the doctrine of Vijñānavāda.
Some scholars see the Tathāgatagarbha ("Buddha womb/source") or "buddha-nature" texts as constituting a third "school" of Indian Mahāyāna thought.
Vajrayāna (also known as Mantrayāna, Tantrayāna, Secret Mantra, and Tantric Buddhism) is often placed in a separate category due to its unique tantric theories and practices.
Many of these philosophies were brought to other regions, like Central Asia and China. After the disappearance of Buddhism from India, some of these philosophical traditions continued to develop in the Tibetan Buddhist, East Asian Buddhist and Theravada Buddhist traditions.
Ājīvika philosophy
The philosophy of Ājīvika was founded by Makkhali Gosala, it was a Śramaṇa movement and a major rival of early Buddhism and Jainism. Ājīvikas were organised renunciates who formed discrete monastic communities prone to an ascetic and simple lifestyle.
Original scriptures of the Ājīvika school of philosophy may once have existed, but these are currently unavailable and probably lost. Their theories are extracted from mentions of Ajivikas in the secondary sources of ancient Indian literature, particularly those of Jainism and Buddhism which polemically criticized the Ajivikas. The Ājīvika school is known for its Niyati doctrine of absolute determinism (fate), the premise that there is no free will, that everything that has happened, is happening and will happen is entirely preordained and a function of cosmic principles. Ājīvika considered the karma doctrine as a fallacy. Ājīvikas were atheists and rejected the authority of the Vedas, but they believed that in every living being is an ātman – a central premise of Hinduism and Jainism.
Charvaka philosophy
Charvaka (; IAST: Cārvāka), also known as Lokāyata, is an ancient school of Indian materialism. Charvaka holds direct perception, empiricism, and conditional inference as proper sources of knowledge, embraces philosophical skepticism and rejects ritualism and supernaturalism. It was a popular belief system in ancient India.
The etymology of Charvaka (Sanskrit: चार्वाक) is uncertain. Bhattacharya quotes the grammarian Hemacandra, to the effect that the word cārvāka is derived from the root carv, 'to chew' : "A Cārvāka chews the self (carvatyātmānaṃ cārvākaḥ). Hemacandra refers to his own grammatical work, Uṇādisūtra 37, which runs as follows: mavāka-śyāmāka-vārtāka-jyontāka-gūvāka-bhadrākādayaḥ. Each of these words ends with the āka suffix and is formed irregularly". This may also allude to the philosophy's hedonistic precepts of "eat, drink, and be merry".
Brihaspati is traditionally referred to as the founder of Charvaka or Lokāyata philosophy, although some scholars dispute this. During the Hindu reformation period in the first millennium BCE, when Buddhism was established by Gautama Buddha and Jainism was re-organized by Parshvanatha, the Charvaka philosophy was well documented and opposed by both religions. Much of the primary literature of Charvaka, the Barhaspatya sutras, was lost either due to waning popularity or for other unknown reasons. Its teachings have been compiled from historic secondary literature such as the shastras, the sutras, and the Indian epic poetry, as well as from the dialogues of Gautama Buddha and from Jain literature. However, one text that may belong to the Charvaka tradition survives: the Tattvôpaplava-siṁha, written by the skeptic philosopher Jayarāśi Bhaṭṭa, which, although unorthodox, provides information about this school.
One of the widely studied principles of Charvaka philosophy was its rejection of inference as a means to establish valid, universal knowledge, and metaphysical truths. In other words, the Charvaka epistemology states that whenever one infers a truth from a set of observations or truths, one must acknowledge doubt; inferred knowledge is conditional.
Comparison of Indian philosophies
The Indian traditions subscribed to diverse philosophies, significantly disagreeing with each other as well as orthodox Indian philosophy and its six schools of Hindu philosophy. The differences ranged from a belief that every individual has a soul (self, atman) to asserting that there is no soul, from axiological merit in a frugal ascetic life to that of a hedonistic life, from a belief in rebirth to asserting that there is no rebirth.
Political philosophy
The Arthashastra, attributed to the Mauryan minister Chanakya, is one of the early Indian texts devoted to political philosophy. It is dated to the 4th century BCE and discusses ideas of statecraft and economic policy. The Kural text, attributed to Valluvar and dated to around the 5th century CE, deals with ahimsa and morality, extending them to political philosophy and love.
The political philosophy most closely associated with modern India is the one of ahimsa (non-violence) and Satyagraha, popularised by Mahatma Gandhi during the Indian struggle for independence. In turn it influenced the later independence and Civil Rights movements, especially those led by Martin Luther King Jr. and Nelson Mandela. Prabhat Ranjan Sarkar's Progressive Utilization Theory is also a major socio-economic and political philosophy.
Integral humanism was a set of concepts drafted by Deendayal Upadhyaya as a political program and adopted in 1965 as the official doctrine of the Jan Sangh.
Upadhyaya considered that it was of utmost importance for India to develop an indigenous economic model with a human being at center stage. This approach made this concept different from Socialism and Capitalism. Integral Humanism was adopted as Jan Sangh's political doctrine and its new openness to other opposition forces made it possible for the Hindu nationalist movement to have an alliance in the early 1970s with the prominent Gandhian Sarvodaya movement going on under the leadership of J. P. Narayan. This was considered to be the first major public breakthrough for the Hindu nationalist movement.
Influence
In appreciation of the subtlety and truth of Indian philosophy, T. S. Eliot wrote that the great philosophers of India "make most of the great European philosophers look like schoolboys". Arthur Schopenhauer used Indian philosophy to improve upon Kantian thought. In the preface to his book The World as Will and Representation, Schopenhauer writes that one who "has also received and assimilated the sacred primitive Indian wisdom, then he is the best of all prepared to hear what I have to say to him." The 19th-century American philosophical movement Transcendentalism was also influenced by Indian thought.
See also
List of Indian philosophers
Affectionism
Ancient Indian philosophy
Hindu philosophy
M. Hiriyanna
Indian art
Indian logic
Indian psychology
Svayam bhagavan
Trikaranasuddhi
Notes
References
Citations
Sources
Further reading
External links
Surendranath Dasgupta. A History of Indian Philosophy | HTML (vol. 1) | (vol. 2) | (vol. 3) | (vol. 4) | (vol. 5), ebook at Wisdomlib.org
Surendranath Dasgupta. Indian Idealism at archive.org
A recommended reading guide from the philosophy department of University College, London: London Philosophy Study Guide – Indian Philosophy
Articles at the Internet Encyclopedia of Philosophy
Indian Psychology Institute The application of Indian Philosophy to contemporary issues in Psychology
The Essentials of Indian Philosophy by Mysore Hiriyanna at archive.org
Outlines of Indian Philosophy by Mysore Hiriyanna at archive.org
Indian Philosophy by Sarvepalli Radhakrishnan (2 Volumes) at archive.org
History of Philosophy – Eastern and Western Edited by Sarvepalli Radhakrishnan (2 Volumes) at archive.org
Indian Schools of Philosophy and Theology (Jiva Institute)