Epistemology
Epistemology is the branch of philosophy that examines the nature, origin, and limits of knowledge. Also called theory of knowledge, it explores different types of knowledge, such as propositional knowledge about facts, practical knowledge in the form of skills, and knowledge by acquaintance as a familiarity through experience. Epistemologists study the concepts of belief, truth, and justification to understand the nature of knowledge. To discover how knowledge arises, they investigate sources of justification, such as perception, introspection, memory, reason, and testimony.
The school of skepticism questions the human ability to attain knowledge while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism disagree about whether justification is determined solely by mental states or also by external circumstances.
Separate branches of epistemology are dedicated to knowledge found in specific fields, like scientific, mathematical, moral, and religious knowledge. Naturalized epistemology relies on empirical methods and discoveries, whereas formal epistemology uses formal tools from logic. Social epistemology investigates the communal aspect of knowledge and historical epistemology examines its historical conditions. Epistemology is closely related to psychology, which describes the beliefs people hold, while epistemology studies the norms governing the evaluation of beliefs. It also intersects with fields such as decision theory, education, and anthropology.
Early reflections on the nature, sources, and scope of knowledge are found in ancient Greek, Indian, and Chinese philosophy. The relation between reason and faith was a central topic in the medieval period. The modern era was characterized by the contrasting perspectives of empiricism and rationalism. Epistemologists in the 20th century examined the components, structure, and value of knowledge while integrating insights from the natural sciences and linguistics.
Definition
Epistemology is the philosophical study of knowledge. Also called theory of knowledge, it examines what knowledge is and what types of knowledge there are. It further investigates the sources of knowledge, like perception, inference, and testimony, to determine how knowledge is created. Another topic is the extent and limits of knowledge, confronting questions about what people can and cannot know. Other central concepts include belief, truth, justification, evidence, and reason. Epistemology is one of the main branches of philosophy besides fields like ethics, logic, and metaphysics. The term is also used in a slightly different sense to refer not to the branch of philosophy but to a particular position within that branch, as in Plato's epistemology and Immanuel Kant's epistemology.
As a normative field of inquiry, epistemology explores how people should acquire beliefs. This way, it determines which beliefs fulfill the standards or epistemic goals of knowledge and which ones fail, thereby providing an evaluation of beliefs. Descriptive fields of inquiry, like psychology and cognitive sociology, are also interested in beliefs and related cognitive processes. Unlike epistemology, they study the beliefs people have and how people acquire them instead of examining the evaluative norms of these processes. Epistemology is relevant to many descriptive and normative disciplines, such as the other branches of philosophy and the sciences, by exploring the principles of how they may arrive at knowledge.
The word epistemology comes from the ancient Greek terms ἐπιστήμη (epistēmē, meaning knowledge or understanding) and λόγος (logos, meaning study of or reason), literally "the study of knowledge". The word was coined only in the 19th century to label this field and conceive it as a distinct branch of philosophy.
Central concepts
Knowledge
Knowledge is an awareness, familiarity, understanding, or skill. Its various forms all involve a cognitive success through which a person establishes epistemic contact with reality. Knowledge is typically understood as an aspect of individuals, generally as a cognitive mental state that helps them understand, interpret, and interact with the world. While this core sense is of particular interest to epistemologists, the term also has other meanings. Understood on a social level, knowledge is a characteristic of a group of people that share ideas, understanding, or culture in general. The term can also refer to information stored in documents, such as "knowledge housed in the library" or knowledge stored in computers in the form of the knowledge base of an expert system.
Knowledge contrasts with ignorance, which is often simply defined as the absence of knowledge. Knowledge is usually accompanied by ignorance since people rarely have complete knowledge of a field, forcing them to rely on incomplete or uncertain information when making decisions. Even though many forms of ignorance can be mitigated through education and research, there are certain limits to human understanding that are responsible for inevitable ignorance. Some limitations are inherent in the human cognitive faculties themselves, such as the inability to know facts too complex for the human mind to conceive. Others depend on external circumstances when no access to the relevant information exists.
Epistemologists disagree on how much people know, for example, whether fallible beliefs about everyday affairs can amount to knowledge or whether absolute certainty is required. The most stringent position is taken by radical skeptics, who argue that there is no knowledge at all.
Types
Epistemologists distinguish between different types of knowledge. Their primary interest is in knowledge of facts, called propositional knowledge. It is a theoretical knowledge that can be expressed in declarative sentences using a that-clause, like "Ravi knows that kangaroos hop". For this reason, it is also called knowledge-that. Epistemologists often understand it as a relation between a knower and a known proposition, in the case above between the person Ravi and the proposition "kangaroos hop". It is use-independent since it is not tied to one specific purpose. It is a mental representation that relies on concepts and ideas to depict reality. Because of its theoretical nature, it is often held that only relatively sophisticated creatures, such as humans, possess propositional knowledge.
Propositional knowledge contrasts with non-propositional knowledge in the form of knowledge-how and knowledge by acquaintance. Knowledge-how is a practical ability or skill, like knowing how to read or how to prepare lasagna. It is usually tied to a specific goal and not mastered in the abstract without concrete practice. To know something by acquaintance means to be familiar with it as a result of experiential contact. Examples are knowing the city of Perth, knowing the taste of tsampa, and knowing Marta Vieira da Silva personally.
Another influential distinction is between a posteriori and a priori knowledge. A posteriori knowledge is knowledge of empirical facts based on sensory experience, like seeing that the sun is shining and smelling that a piece of meat has gone bad. Knowledge belonging to the empirical sciences and knowledge of everyday affairs belong to a posteriori knowledge. A priori knowledge is knowledge of non-empirical facts and does not depend on evidence from sensory experience. It belongs to fields such as mathematics and logic, like knowing that 2 + 2 = 4. The contrast between a posteriori and a priori knowledge plays a central role in the debate between empiricists and rationalists on whether all knowledge depends on sensory experience.
A closely related contrast is between analytic and synthetic truths. A sentence is analytically true if its truth depends only on the meaning of the words it uses. For instance, the sentence "all bachelors are unmarried" is analytically true because the word "bachelor" already includes the meaning "unmarried". A sentence is synthetically true if its truth depends on additional facts. For example, the sentence "snow is white" is synthetically true because its truth depends on the color of snow in addition to the meanings of the words snow and white. A priori knowledge is primarily associated with analytic sentences while a posteriori knowledge is primarily associated with synthetic sentences. However, it is controversial whether this is true for all cases. Some philosophers, such as Willard Van Orman Quine, reject the distinction, saying that there are no analytic truths.
Analysis
The analysis of knowledge is the attempt to identify the essential components or conditions of all and only propositional knowledge states. According to the so-called traditional analysis, knowledge has three components: it is a belief that is justified and true. In the second half of the 20th century, this view was put into doubt by a series of thought experiments that aimed to show that some justified true beliefs do not amount to knowledge. In one of them, a person is unaware that their area is full of fake barns. By coincidence, they stop in front of the only real barn and form a justified true belief that it is a real barn. Many epistemologists agree that this is not knowledge because the justification is not directly relevant to the truth. More specifically, this and similar counterexamples involve some form of epistemic luck, that is, a cognitive success that results from fortuitous circumstances rather than competence.
Following these thought experiments, philosophers proposed various alternative definitions of knowledge by modifying or expanding the traditional analysis. According to one view, the known fact has to cause the belief in the right way. Another theory states that the belief is the product of a reliable belief formation process. Further approaches require that the person would not have the belief if it was false, that the belief is not inferred from a falsehood, that the justification cannot be undermined, or that the belief is infallible. There is no consensus on which of the proposed modifications and reconceptualizations is correct. Some philosophers, such as Timothy Williamson, reject the basic assumption underlying the analysis of knowledge by arguing that propositional knowledge is a unique state that cannot be dissected into simpler components.
Value
The value of knowledge is the worth it holds by expanding understanding and guiding action. Knowledge can have instrumental value by helping a person achieve their goals. For example, knowledge of a disease helps a doctor cure their patient, and knowledge of when a job interview starts helps a candidate arrive on time. The usefulness of a known fact depends on the circumstances. Knowledge of some facts may have little to no use, like memorizing random phone numbers from an outdated phone book. Being able to assess the value of knowledge matters in choosing what information to acquire and transmit to others. It affects decisions like which subjects to teach at school and how to allocate funds to research projects.
Of particular interest to epistemologists is the question of whether knowledge is more valuable than a mere opinion that is true. Knowledge and true opinion often have a similar usefulness since both are accurate representations of reality. For example, if a person wants to go to Larissa, a true opinion about how to get there may help them in the same way as knowledge does. Plato considered this problem early on, suggesting that knowledge is better because it is more stable. Another suggestion focuses on practical reasoning. It proposes that people put more trust in knowledge than in mere true beliefs when drawing conclusions and deciding what to do. A different response says that knowledge has intrinsic value, meaning that it is good in itself independent of its usefulness.
Belief and truth
Beliefs are mental states about what is the case, like believing that snow is white or that God exists. In epistemology, they are often understood as subjective attitudes that affirm or deny a proposition, which can be expressed in a declarative sentence. For instance, to believe that snow is white is to affirm the proposition "snow is white". According to this view, beliefs are representations of what the world is like. They are kept in memory and can be retrieved when actively thinking about reality or when deciding how to act. A different view understands beliefs as behavioral patterns or dispositions to act rather than as representational items stored in the mind. This view says that to believe that there is mineral water in the fridge is nothing more than a group of dispositions related to mineral water and the fridge. Examples are the dispositions to answer questions about the presence of mineral water affirmatively and to go to the fridge when thirsty. Some theorists deny the existence of beliefs, saying that this concept borrowed from folk psychology is an oversimplification of much more complex psychological processes. Beliefs play a central role in various epistemological debates, which cover their status as a component of propositional knowledge, the question of whether people have control over and are responsible for their beliefs, and the issue of whether there are degrees of beliefs, called credences.
As propositional attitudes, beliefs are true or false depending on whether they affirm a true or a false proposition. According to the correspondence theory of truth, to be true means to stand in the right relation to the world by accurately describing what it is like. This means that truth is objective: a belief is true if it corresponds to a fact. The coherence theory of truth says that a belief is true if it belongs to a coherent system of beliefs. A result of this view is that truth is relative since it depends on other beliefs. Further theories of truth include pragmatist, semantic, pluralist, and deflationary theories. Truth plays a central role in epistemology as a goal of cognitive processes and a component of propositional knowledge.
Justification
In epistemology, justification is a property of beliefs that fulfill certain norms about what a person should believe. According to a common view, this means that the person has sufficient reasons for holding this belief because they have information that supports it. Another view states that a belief is justified if it is formed by a reliable belief formation process, such as perception. The terms reasonable, warranted, and supported are closely related to the idea of justification and are sometimes used as synonyms. Justification is what distinguishes justified beliefs from superstition and lucky guesses. However, justification does not guarantee truth. For example, if a person has strong but misleading evidence, they may form a justified belief that is false.
Epistemologists often identify justification as one component of knowledge. Usually, they are not only interested in whether a person has a sufficient reason to hold a belief, known as propositional justification, but also in whether the person holds the belief because or based on this reason, known as doxastic justification. For example, if a person has sufficient reason to believe that a neighborhood is dangerous but forms this belief based on superstition then they have propositional justification but lack doxastic justification.
Sources
Sources of justification are ways or cognitive capacities through which people acquire justification. Often-discussed sources include perception, introspection, memory, reason, and testimony, but there is no universal agreement on the extent to which they all provide valid justification. Perception relies on sensory organs to gain empirical information. There are various forms of perception corresponding to different physical stimuli, such as visual, auditory, haptic, olfactory, and gustatory perception. Perception is not merely the reception of sense impressions but an active process that selects, organizes, and interprets sensory signals. Introspection is a closely related process focused not on external physical objects but on internal mental states. For example, seeing a bus at a bus station belongs to perception while feeling tired belongs to introspection.
Rationalists understand reason as a source of justification for non-empirical facts. It is often used to explain how people can know about mathematical, logical, and conceptual truths. Reason is also responsible for inferential knowledge, in which one or several beliefs are used as premises to support another belief. Memory depends on information provided by other sources, which it retains and recalls, like remembering a phone number perceived earlier. Justification by testimony relies on information one person communicates to another person. This can happen by talking to each other but can also occur in other forms, like a letter, a newspaper, or a blog.
Other concepts
Rationality is closely related to justification and the terms rational belief and justified belief are sometimes used as synonyms. However, rationality has a wider scope that encompasses both a theoretical side, covering beliefs, and a practical side, covering decisions, intentions, and actions. There are different conceptions of what it means for something to be rational. According to one view, a mental state is rational if it is based on or responsive to good reasons. Another view emphasizes the role of coherence, stating that rationality requires that the different mental states of a person are consistent and support each other. A slightly different approach holds that rationality is about achieving certain goals. Two goals of theoretical rationality are accuracy and comprehensiveness, meaning that a person has as few false beliefs and as many true beliefs as possible.
Epistemic norms are criteria to assess the cognitive quality of beliefs, like their justification and rationality. Epistemologists distinguish between deontic norms, which are prescriptions about what people should believe or which beliefs are correct, and axiological norms, which identify the goals and values of beliefs. Epistemic norms are closely related to intellectual or epistemic virtues, which are character traits like open-mindedness and conscientiousness. Epistemic virtues help individuals form true beliefs and acquire knowledge. They contrast with epistemic vices and act as foundational concepts of virtue epistemology.
Evidence for a belief is information that favors or supports it. Epistemologists understand evidence primarily in terms of mental states, for example, as sensory impressions or as other propositions that a person knows. But in a wider sense, it can also include physical objects, like bloodstains examined by forensic analysts or financial records studied by investigative journalists. Evidence is often understood in terms of probability: evidence for a belief makes it more likely that the belief is true. A defeater is evidence against a belief or evidence that undermines another piece of evidence. For instance, witness testimony connecting a suspect to a crime is evidence for their guilt while an alibi is a defeater. Evidentialists analyze justification in terms of evidence by saying that to be justified, a belief needs to rest on adequate evidence.
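The probability-raising view of evidence and defeaters can be sketched with Bayes' theorem. In the toy example below, the hypothesis, the prior, and all likelihood values are invented purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothesis H: "the suspect is guilty" (all numbers are made up).
prior = 0.10

# Witness testimony is evidence for H: it is more likely if H is true,
# so conditioning on it raises the probability of H.
after_testimony = posterior(prior, 0.80, 0.20)

# An alibi is a defeater: it is more likely if the suspect is innocent,
# so conditioning on it lowers the probability again.
after_alibi = posterior(after_testimony, 0.05, 0.60)

print(after_testimony > prior)        # True: the testimony supports H
print(after_alibi < after_testimony)  # True: the alibi undermines it
```

On this reading, a piece of information E is evidence for a belief H exactly when P(H given E) exceeds P(H), and a defeater is information that pushes that conditional probability back down.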
The presence of evidence usually affects doubt and certainty, which are subjective attitudes toward propositions that differ regarding their level of confidence. Doubt involves questioning the validity or truth of a proposition. Certainty, by contrast, is a strong affirmative conviction, meaning that the person is free of doubt that the proposition is true. In epistemology, doubt and certainty play central roles in attempts to find a secure foundation of all knowledge and in skeptical projects aiming to establish that no belief is immune to doubt.
While propositional knowledge is the main topic in epistemology, some theorists focus on understanding rather than knowledge. Understanding is a more holistic notion that involves a wider grasp of a subject. To understand something, a person requires awareness of how different things are connected and why they are the way they are. For example, knowledge of isolated facts memorized from a textbook does not amount to understanding. According to one view, understanding is a special epistemic good that, unlike knowledge, is always intrinsically valuable. Wisdom is similar in this regard and is sometimes considered the highest epistemic good. It encompasses a reflective understanding with practical applications. It helps people grasp and evaluate complex situations and lead a good life.
Schools of thought
Skepticism, fallibilism, and relativism
Philosophical skepticism questions the human ability to arrive at knowledge. Some skeptics limit their criticism to certain domains of knowledge. For example, religious skeptics say that it is impossible to have certain knowledge about the existence of deities or other religious doctrines. Similarly, moral skeptics challenge the existence of moral knowledge and metaphysical skeptics say that humans cannot know ultimate reality.
Global skepticism is the widest form of skepticism, asserting that there is no knowledge in any domain. In ancient philosophy, this view was accepted by academic skeptics while Pyrrhonian skeptics recommended the suspension of belief to achieve a state of tranquility. Overall, not many epistemologists have explicitly defended global skepticism. The influence of this position derives mainly from attempts by other philosophers to show that their theory overcomes the challenge of skepticism. For example, René Descartes used methodological doubt to find facts that cannot be doubted.
One consideration in favor of global skepticism is the dream argument. It starts from the observation that, while people are dreaming, they are usually unaware of this. This inability to distinguish between dream and regular experience is used to argue that there is no certain knowledge since a person can never be sure that they are not dreaming. Some critics assert that global skepticism is a self-refuting idea because denying the existence of knowledge is itself a knowledge claim. Another objection says that the abstract reasoning leading to skepticism is not convincing enough to overrule common sense.
Fallibilism is another response to skepticism. Fallibilists agree with skeptics that absolute certainty is impossible. Most fallibilists disagree with skeptics about the existence of knowledge, saying that there is knowledge since it does not require absolute certainty. They emphasize the need to keep an open and inquisitive mind since doubt can never be fully excluded, even for well-established knowledge claims like thoroughly tested scientific theories.
Epistemic relativism is a related view. It does not question the existence of knowledge in general but rejects the idea that there are universal epistemic standards or absolute principles that apply equally to everyone. This means that what a person knows depends on the subjective criteria or social conventions used to assess epistemic status.
Empiricism and rationalism
The debate between empiricism and rationalism centers on the origins of human knowledge. Empiricism emphasizes that sense experience is the primary source of all knowledge. Some empiricists express this view by stating that the mind is a blank slate that only develops ideas about the external world through the sense data it receives from the sensory organs. According to them, the mind can arrive at various additional insights by comparing impressions, combining them, generalizing to arrive at more abstract ideas, and deducing new conclusions from them. Empiricists say that all these mental operations depend on material from the senses and do not function on their own.
Even though rationalists usually accept sense experience as one source of knowledge, they also say that important forms of knowledge come directly from reason without sense experience, like knowledge of mathematical and logical truths. According to some rationalists, the mind possesses inborn ideas which it can access without the help of the senses. Others hold that there is an additional cognitive faculty, sometimes called rational intuition, through which people acquire nonempirical knowledge. Some rationalists limit their discussion to the origin of concepts, saying that the mind relies on inborn categories to understand the world and organize experience.
Foundationalism and coherentism
Foundationalists and coherentists disagree about the structure of knowledge. Foundationalism distinguishes between basic and non-basic beliefs. A belief is basic if it is justified directly, meaning that its validity does not depend on the support of other beliefs. A belief is non-basic if it is justified by another belief. For example, the belief that it rained last night is a non-basic belief if it is inferred from the observation that the street is wet. According to foundationalism, basic beliefs are the foundation on which all other knowledge is built while non-basic beliefs constitute the superstructure resting on this foundation.
Coherentists reject the distinction between basic and non-basic beliefs, saying that the justification of any belief depends on other beliefs. They assert that a belief must be in tune with other beliefs to amount to knowledge. This is the case if the beliefs are consistent and support each other. According to coherentism, justification is a holistic aspect determined by the whole system of beliefs, which resembles an interconnected web.
The view of foundherentism is an intermediary position combining elements of both foundationalism and coherentism. It accepts the distinction between basic and non-basic beliefs while asserting that the justification of non-basic beliefs depends on coherence with other beliefs.
Infinitism presents another approach to the structure of knowledge. It agrees with coherentism that there are no basic beliefs while rejecting the view that beliefs can support each other in a circular manner. Instead, it argues that beliefs form infinite justification chains, in which each link of the chain supports the belief following it and is supported by the belief preceding it.
Internalism and externalism
The disagreement between internalism and externalism is about the sources of justification. Internalists say that justification depends only on factors within the individual. Examples of such factors include perceptual experience, memories, and the possession of other beliefs. This view emphasizes the importance of the cognitive perspective of the individual in the form of their mental states. It is commonly associated with the idea that the relevant factors are accessible, meaning that the individual can become aware of their reasons for holding a justified belief through introspection and reflection.
Externalism rejects this view, saying that at least some relevant factors are external to the individual. This means that the cognitive perspective of the individual is less central while other factors, specifically the relation to truth, become more important. For instance, when considering the belief that a cup of coffee stands on the table, externalists are not only interested in the perceptual experience that led to this belief but also consider the quality of the person's eyesight, their ability to differentiate coffee from other beverages, and the circumstances under which they observed the cup.
Evidentialism is an influential internalist view. It says that justification depends on the possession of evidence. In this context, evidence for a belief is any information in the individual's mind that supports the belief. For example, the perceptual experience of rain is evidence for the belief that it is raining. Evidentialists have suggested various other forms of evidence, including memories, intuitions, and other beliefs. According to evidentialism, a belief is justified if the individual's evidence supports the belief and they hold the belief on the basis of this evidence.
Reliabilism is an externalist theory asserting that a reliable connection between belief and truth is required for justification. Some reliabilists explain this in terms of reliable processes. According to this view, a belief is justified if it is produced by a reliable belief-formation process, like perception. A belief-formation process is reliable if most of the beliefs it causes are true. A slightly different view focuses on beliefs rather than belief-formation processes, saying that a belief is justified if it is a reliable indicator of the fact it presents. This means that the belief tracks the fact: the person believes it because it is a fact but would not believe it otherwise.
Virtue epistemology is another type of externalism and is sometimes understood as a form of reliabilism. It says that a belief is justified if it manifests intellectual virtues. Intellectual virtues are capacities or traits that perform cognitive functions and help people form true beliefs. Suggested examples include faculties like vision, memory, and introspection.
Others
In the epistemology of perception, direct and indirect realists disagree about the connection between the perceiver and the perceived object. Direct realists say that this connection is direct, meaning that there is no difference between the object present in perceptual experience and the physical object causing this experience. According to indirect realism, the connection is indirect since there are mental entities, like ideas or sense data, that mediate between the perceiver and the external world. The contrast between direct and indirect realism is important for explaining the nature of illusions.
Constructivism in epistemology is the theory that how people view the world is not a simple reflection of external reality but an invention or a social construction. This view emphasizes the creative role of interpretation while undermining objectivity since social constructions may differ from society to society.
According to contrastivism, knowledge is a comparative term, meaning that to know something involves distinguishing it from relevant alternatives. For example, if a person spots a bird in the garden, they may know that it is a sparrow rather than an eagle but they may not know that it is a sparrow rather than an indistinguishable sparrow hologram.
Epistemic conservatism is a view about belief revision. It gives preference to the beliefs a person already has, asserting that a person should only change their beliefs if they have a good reason to. One motivation for adopting epistemic conservatism is that the cognitive resources of humans are limited, meaning that it is not feasible to constantly reexamine every belief.
Pragmatist epistemology is a form of fallibilism that emphasizes the close relation between knowing and acting. It sees the pursuit of knowledge as an ongoing process guided by common sense and experience while always open to revision.
Bayesian epistemology is a formal approach based on the idea that people have degrees of belief representing how certain they are. It uses probability theory to define norms of rationality that govern how certain people should be about their beliefs.
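One such norm, Bayesian conditionalization, can be sketched in a few lines of code. The possible worlds and the credence values below are invented for illustration:

```python
# Credences modeled as a probability distribution over possible worlds
# (a toy example; the worlds and numbers are made up).
credence = {
    ("rain", "wind"): 0.2,
    ("rain", "calm"): 0.3,
    ("dry",  "wind"): 0.1,
    ("dry",  "calm"): 0.4,
}

def conditionalize(credence, evidence):
    """Bayesian conditionalization: drop the worlds ruled out by the
    evidence and renormalize so the remaining credences sum to 1."""
    kept = {w: p for w, p in credence.items() if evidence(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# Learning "it rains" raises the credence in wind from 0.3 to 0.4.
updated = conditionalize(credence, lambda w: w[0] == "rain")
print(updated)  # -> {('rain', 'wind'): 0.4, ('rain', 'calm'): 0.6}
```

The norm says that, upon learning some evidence with certainty, a rational agent's new credences should equal their old credences conditional on that evidence, which is what the renormalization step computes.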
Phenomenological epistemology emphasizes the importance of first-person experience. It distinguishes between the natural and the phenomenological attitudes. The natural attitude focuses on objects belonging to common sense and natural science. The phenomenological attitude focuses on the experience of objects and aims to provide a presuppositionless description of how objects appear to the observer.
Particularism and generalism disagree about the right method of conducting epistemological research. Particularists start their inquiry by looking at specific cases. For example, to find a definition of knowledge, they rely on their intuitions about concrete instances of knowledge and particular thought experiments. They use these observations as methodological constraints that any theory of more general principles needs to follow. Generalists proceed in the opposite direction. They give preference to general epistemic principles, saying that it is not possible to accurately identify and describe specific cases without a grasp of these principles. Other methods in contemporary epistemology aim to extract philosophical insights from ordinary language or look at the role of knowledge in making assertions and guiding actions.
Postmodern epistemology criticizes the conditions of knowledge in advanced societies. This concerns in particular the metanarrative of a constant progress of scientific knowledge leading to a universal and foundational understanding of reality. Feminist epistemology critiques the effect of gender on knowledge. Among other topics, it explores how preconceptions about gender influence who has access to knowledge, how knowledge is produced, and which types of knowledge are valued in society. Decolonial scholarship criticizes the global influence of Western knowledge systems, often with the aim of decolonizing knowledge to undermine Western hegemony.
Various schools of epistemology are found in traditional Indian philosophy. Many of them focus on the different sources of knowledge, called pramāṇas. Perception, inference, and testimony are sources discussed by most schools. Other sources only considered by some schools are non-perception, which leads to knowledge of absences, and presumption. Buddhist epistemology tends to focus on immediate experience, understood as the presentation of unique particulars without the involvement of secondary cognitive processes, like thought and desire. Nyāya epistemology discusses the causal relation between the knower and the object of knowledge, which happens through reliable knowledge-formation processes. It sees perception as the primary source of knowledge, drawing a close connection between it and successful action. Mīmāṃsā epistemology understands the holy scriptures known as the Vedas as a key source of knowledge while discussing the problem of their right interpretation. Jain epistemology states that reality is many-sided, meaning that no single viewpoint can capture the entirety of truth.
Branches
Some branches of epistemology focus on the problems of knowledge within specific academic disciplines. The epistemology of science examines how scientific knowledge is generated and what problems arise in the process of validating, justifying, and interpreting scientific claims. A key issue concerns the problem of how individual observations can support universal scientific laws. Further topics include the nature of scientific evidence and the aims of science. The epistemology of mathematics studies the origin of mathematical knowledge. In exploring how mathematical theories are justified, it investigates the role of proofs and whether there are empirical sources of mathematical knowledge.
Epistemological problems are found in most areas of philosophy. The epistemology of logic examines how people know that an argument is valid. For example, it explores how logicians justify that modus ponens is a correct rule of inference or that all contradictions are false. Epistemologists of metaphysics investigate whether knowledge of ultimate reality is possible and what sources this knowledge could have. Knowledge of moral statements, like the claim that lying is wrong, belongs to the epistemology of ethics. It studies the role of ethical intuitions, coherence among moral beliefs, and the problem of moral disagreement. The ethics of belief is a closely related field covering the interrelation between epistemology and ethics. It examines the norms governing belief formation and asks whether violating them is morally wrong.
Religious epistemology studies the role of knowledge and justification for religious doctrines and practices. It evaluates the weight and reliability of evidence from religious experience and holy scriptures while also asking whether the norms of reason should be applied to religious faith. Social epistemology focuses on the social dimension of knowledge. While traditional epistemology is mainly interested in knowledge possessed by individuals, social epistemology covers knowledge acquisition, transmission, and evaluation within groups, with specific emphasis on how people rely on each other when seeking knowledge. Historical epistemology examines how the understanding of knowledge and related concepts has changed over time. It asks whether the main issues in epistemology are perennial and to what extent past epistemological theories are relevant to contemporary debates. It is particularly concerned with scientific knowledge and practices associated with it. It contrasts with the history of epistemology, which presents, reconstructs, and evaluates epistemological theories of philosophers in the past.
Naturalized epistemology is closely associated with the natural sciences, relying on their methods and theories to examine knowledge. Naturalistic epistemologists focus on empirical observation to formulate their theories and are often critical of approaches to epistemology that proceed by a priori reasoning. Evolutionary epistemology is a naturalistic approach that understands cognition as a product of evolution, examining knowledge and the cognitive faculties responsible for it from the perspective of natural selection. Epistemologists of language explore the nature of linguistic knowledge. One of their topics is the role of tacit knowledge, for example, when native speakers have mastered the rules of grammar but are unable to explicitly articulate those rules. Epistemologists of modality examine knowledge about what is possible and necessary. Epistemic problems that arise when two people have diverging opinions on a topic are covered by the epistemology of disagreement. Epistemologists of ignorance are interested in epistemic faults and gaps in knowledge.
There are distinct areas of epistemology dedicated to specific sources of knowledge. Examples are the epistemology of perception, the epistemology of memory, and the epistemology of testimony.
Some branches of epistemology are characterized by their research method. Formal epistemology employs formal tools found in logic and mathematics to investigate the nature of knowledge. Experimental epistemologists rely on empirical evidence about common knowledge practices in their research. Applied epistemology focuses on the practical application of epistemological principles to diverse real-world problems, like the reliability of knowledge claims on the internet, how to assess sexual assault allegations, and how racism may lead to epistemic injustice.
Metaepistemology examines the nature, goals, and research methods of epistemology. As a metatheory, it does not directly defend a position about which epistemological theories are correct but examines their fundamental concepts and background assumptions.
Related fields
Epistemology and psychology were not defined as distinct fields until the 19th century; earlier investigations about knowledge often do not fit neatly into today's academic categories. Both contemporary disciplines study beliefs and the mental processes responsible for their formation and change. One important contrast is that psychology describes what beliefs people have and how they acquire them, thereby explaining why someone has a specific belief. The focus of epistemology is on evaluating beliefs, leading to a judgment about whether a belief is justified and rational in a particular case. Epistemology has a similar intimate connection to cognitive science, which understands mental events as processes that transform information. Artificial intelligence relies on the insights of epistemology and cognitive science to implement concrete solutions to problems associated with knowledge representation and automatic reasoning.
Logic is the study of correct reasoning. For epistemology, it is relevant to inferential knowledge, which arises when a person reasons from one known fact to another. This is the case, for example, if a person does not know a conclusion directly but comes to infer it from other facts they already know. Whether an inferential belief amounts to knowledge depends on the form of reasoning used, in particular, that the process does not violate the laws of logic. Another overlap between the two fields is found in the epistemic approach to fallacy theory. Fallacies are faulty arguments based on incorrect reasoning. The epistemic approach to fallacies explains why they are faulty, stating that arguments aim to expand knowledge. According to this view, an argument is a fallacy if it fails to do so. A further intersection is found in epistemic logic, which uses formal logical devices to study epistemological concepts like knowledge and belief.
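The claim that inferential knowledge depends on not violating the laws of logic can be illustrated by checking an argument form against every assignment of truth values. This brute-force sketch, with helper names of my own choosing, contrasts modus ponens with the fallacy of affirming the consequent:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

def is_valid(premises, conclusion, num_vars):
    """An argument form is valid if no assignment of truth values makes
    every premise true while the conclusion is false."""
    for values in product([False, True], repeat=num_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: from "p" and "p -> q", infer "q".  A valid rule of inference.
modus_ponens = is_valid(
    premises=[lambda p, q: p, lambda p, q: implies(p, q)],
    conclusion=lambda p, q: q,
    num_vars=2,
)

# Affirming the consequent: from "q" and "p -> q", infer "p".  A fallacy:
# the assignment p=False, q=True satisfies both premises but not the conclusion.
affirming_consequent = is_valid(
    premises=[lambda p, q: q, lambda p, q: implies(p, q)],
    conclusion=lambda p, q: p,
    num_vars=2,
)

print(modus_ponens, affirming_consequent)  # True False
```

On the epistemic approach described above, the second form is a fallacy precisely because an inference of that shape cannot be relied on to expand knowledge.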
Both decision theory and epistemology are interested in the foundations of rational thought and the role of beliefs. Unlike many approaches in epistemology, the main focus of decision theory lies less in the theoretical and more in the practical side, exploring how beliefs are translated into action. Decision theorists examine the reasoning involved in decision-making and the standards of good decisions. They identify beliefs as a central aspect of decision-making. One of their innovations is to distinguish between weaker and stronger beliefs. This helps them take the effect of uncertainty on decisions into consideration.
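The distinction between weaker and stronger beliefs is standardly modeled by weighting each outcome's value by the agent's degree of belief, yielding an expected utility for each option. The probabilities and utilities below are made-up illustrative numbers, not part of the source text:

```python
def expected_utility(outcomes):
    """Expected utility of an act: the sum of probability-weighted utilities.
    `outcomes` is a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Illustrative choice: take an umbrella or not, given a 0.3 credence in rain.
# The utilities encode the agent's (hypothetical) preferences.
take_umbrella = expected_utility([(0.3, 5), (0.7, 3)])    # dry but encumbered
leave_it      = expected_utility([(0.3, -10), (0.7, 6)])  # soaked vs. unburdened

best = "take umbrella" if take_umbrella > leave_it else "leave it"
print(best)
```

Here the weak belief in rain (0.3) still makes taking the umbrella the rational choice, because the downside of being soaked weighs heavily; this is how decision theorists factor uncertainty into action.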
Epistemology and education have a shared interest in knowledge, with one difference being that education focuses on the transmission of knowledge, exploring the roles of both learner and teacher. Learning theory examines how people acquire knowledge. Behavioral learning theories explain the process in terms of behavior changes, for example, by associating a certain response with a particular stimulus. Cognitive learning theories study how the cognitive processes that affect knowledge acquisition transform information. Pedagogy looks at the transmission of knowledge from the teacher's side, exploring the teaching methods they may employ. In teacher-centered methods, the teacher takes the role of the main authority delivering knowledge and guiding the learning process. In student-centered methods, the teacher mainly supports and facilitates the learning process while the students take a more active role. The beliefs students have about knowledge, called personal epistemology, affect their intellectual development and learning success.
The anthropology of knowledge examines how knowledge is acquired, stored, retrieved, and communicated. It studies the social and cultural circumstances that affect how knowledge is reproduced and changes, covering the role of institutions like university departments and scientific journals as well as face-to-face discussions and online communications. It understands knowledge in a wide sense that encompasses various forms of understanding and culture, including practical skills. Unlike epistemology, it is not interested in whether a belief is true or justified but in how understanding is reproduced in society. The sociology of knowledge is a closely related field with a similar conception of knowledge. It explores how physical, demographic, economic, and sociocultural factors impact knowledge. It examines in what sociohistorical contexts knowledge emerges and the effects it has on people, for example, how socioeconomic conditions are related to the dominant ideology in a society.
History
Early reflections on the nature and sources of knowledge are found in ancient history. In ancient Greek philosophy, Plato (427–347 BCE) studied what knowledge is, examining how it differs from true opinion by being based on good reasons. According to him, the process of learning something is a form of recollection in which the soul remembers what it already knew before. Aristotle (384–322 BCE) was particularly interested in scientific knowledge, exploring the role of sensory experience and how to make inferences from general principles. The Hellenistic schools began to arise in the 4th century BCE. The Epicureans had an empiricist outlook, stating that sensations are always accurate and act as the supreme standard of judgments. The Stoics defended a similar position but limited themselves to lucid and specific sensations, which they regarded as true. The skeptics questioned whether knowledge is possible, recommending instead suspension of judgment to arrive at a state of tranquility.
The Upanishads, philosophical scriptures composed in ancient India between 700 and 300 BCE, examined how people acquire knowledge, including the role of introspection, comparison, and deduction. In the 6th century BCE, the school of Ajñana developed a radical skepticism questioning the possibility and usefulness of knowledge. The school of Nyaya emerged in the 2nd century BCE and provided a systematic treatment of how people acquire knowledge, distinguishing between valid and invalid sources. When Buddhist philosophers later became interested in epistemology, they relied on concepts developed in Nyaya and other traditions. Buddhist philosopher Dharmakirti (6th or 7th century CE) analyzed the process of knowing as a series of causally related events.
Ancient Chinese philosophers understood knowledge as an interconnected phenomenon fundamentally linked to ethical behavior and social involvement. Many saw wisdom as the goal of attaining knowledge. Mozi (470–391 BCE) proposed a pragmatic approach to knowledge using historical records, sensory evidence, and practical outcomes to validate beliefs. Mencius explored analogical reasoning as another source of knowledge. Xunzi aimed to combine empirical observation and rational inquiry. He emphasized the importance of clarity and standards of reasoning without excluding the role of feeling and emotion.
The relation between reason and faith was a central topic in the medieval period. In Arabic–Persian philosophy, al-Farabi and Averroes (1126–1198) discussed how philosophy and theology interact and which is the better vehicle to truth. Al-Ghazali criticized many of the core teachings of previous Islamic philosophers, saying that they rely on unproven assumptions that do not amount to knowledge. In Western philosophy, Anselm of Canterbury (1033–1109) proposed that theological teaching and philosophical inquiry are in harmony and complement each other. Peter Abelard (1079–1142) argued against unquestioned theological authorities and said that all things are open to rational doubt. Influenced by Aristotle, Thomas Aquinas (1225–1274) developed an empiricist theory, stating that "nothing is in the intellect unless it first appeared in the senses". According to an early form of direct realism proposed by William of Ockham, perception of mind-independent objects happens directly without intermediaries. Meanwhile, in 14th-century India, Gaṅgeśa developed a reliabilist theory of knowledge and considered the problems of testimony and fallacies. In China, Wang Yangming (1472–1529) explored the unity of knowledge and action, holding that moral knowledge is inborn and can be attained by overcoming self-interest.
The course of modern philosophy was shaped by René Descartes (1596–1650), who claimed that philosophy must begin from a position of indubitable knowledge of first principles. Inspired by skepticism, he aimed to find absolutely certain knowledge by encountering truths that cannot be doubted. He thought that this is the case for the assertion "I think, therefore I am", from which he constructed the rest of his philosophical system. Descartes, together with Baruch Spinoza (1632–1677) and Gottfried Wilhelm Leibniz (1646–1716), belonged to the school of rationalism, which asserts that the mind possesses innate ideas independent of experience. John Locke (1632–1704) rejected this view in favor of an empiricism according to which the mind is a blank slate. This means that all ideas depend on sense experience, either as "ideas of sense", which are directly presented through the senses, or as "ideas of reflection", which the mind creates by reflecting on ideas of sense. David Hume (1711–1776) used this idea to explore the limits of what people can know. He said that knowledge of facts is never certain, adding that knowledge of relations between ideas, like mathematical truths, can be certain but contains no information about the world. Immanuel Kant (1724–1804) tried to find a middle position between rationalism and empiricism by identifying a type of knowledge that Hume had missed. For Kant, this is knowledge about principles that underlie all experience and structure it, such as spatial and temporal relations and fundamental categories of understanding.
In the 19th century, Georg Wilhelm Friedrich Hegel (1770–1831) argued against empiricism, saying that sensory impressions on their own cannot amount to knowledge since all knowledge is actively structured by the knowing subject. John Stuart Mill (1806–1873) defended a wide-sweeping form of empiricism and explained knowledge of general truths through inductive reasoning. Charles Peirce (1839–1914) thought that all knowledge is fallible, emphasizing that knowledge seekers should always be ready to revise their beliefs if new evidence is encountered. He used this idea to argue against Cartesian foundationalism seeking absolutely certain truths.
In the 20th century, fallibilism was further explored by J. L. Austin (1911–1960) and Karl Popper (1902–1994). In continental philosophy, Edmund Husserl (1859–1938) applied the skeptical idea of suspending judgment to the study of experience. By not judging whether an experience is accurate or not, he tried to describe the internal structure of experience instead. Logical positivists, like A. J. Ayer (1910–1989), said that all knowledge is either empirical or analytic. Bertrand Russell (1872–1970) developed an empiricist sense-datum theory, distinguishing between direct knowledge by acquaintance of sense data and indirect knowledge by description, which is inferred from knowledge by acquaintance. Common sense had a central place in G. E. Moore's (1873–1958) epistemology. He used trivial observations, like the fact that he has two hands, to argue against abstract philosophical theories that deviate from common sense. Ordinary language philosophy, as practiced by the late Ludwig Wittgenstein (1889–1951), is a similar approach that tries to extract epistemological insights from how ordinary language is used.
Edmund Gettier (1927–2021) conceived counterexamples against the idea that knowledge is the same as justified true belief. These counterexamples prompted many philosophers to suggest alternative definitions of knowledge. One of the alternatives considered was reliabilism, which says that knowledge requires reliable sources, shifting the focus away from justification. Virtue epistemology, a closely related response, analyses belief formation in terms of the intellectual virtues or cognitive competencies involved in the process. Naturalized epistemology, as conceived by Willard Van Orman Quine (1908–2000), employs concepts and ideas from the natural sciences to formulate its theories. Other developments in late 20th-century epistemology were the emergence of social, feminist, and historical epistemology.
See also
Logology (science)
References
Notes
Citations
Bibliography
External links
Relativism

Relativism is a family of philosophical views which deny claims to objectivity within a particular domain and assert that valuations in that domain are relative to the perspective of an observer or the context in which they are assessed. There are many different forms of relativism, with a great deal of variation in scope and differing degrees of controversy among them. Moral relativism encompasses the differences in moral judgments among people and cultures. Epistemic relativism holds that there are no absolute principles regarding normative belief, justification, or rationality, and that there are only relative ones. Alethic relativism (also factual relativism) is the doctrine that there are no absolute truths, i.e., that truth is always relative to some particular frame of reference, such as a language or a culture (cultural relativism). Some forms of relativism also bear a resemblance to philosophical skepticism. Descriptive relativism seeks to describe the differences among cultures and people without evaluation, while normative relativism evaluates the truthfulness of views within a given framework.
Forms of relativism
Anthropological versus philosophical relativism
Anthropological relativism refers to a methodological stance, in which the researcher suspends (or brackets) their own cultural prejudice while trying to understand beliefs or behaviors in their contexts. This has become known as methodological relativism, and concerns itself specifically with avoiding ethnocentrism or the application of one's own cultural standards to the assessment of other cultures. This is also the basis of the so-called "emic" and "etic" distinction, in which:
An emic or insider account of behavior is a description of a society in terms that are meaningful to the participant or actor's own culture; an emic account is therefore culture-specific, and typically refers to what is considered "common sense" within the culture under observation.
An etic or outsider account is a description of a society by an observer, in terms that can be applied to other cultures; that is, an etic account is culturally neutral, and typically refers to the conceptual framework of the social scientist. (This is complicated when it is scientific research itself that is under study, or when there is theoretical or terminological disagreement within the social sciences.)
Philosophical relativism, in contrast, asserts that the truth of a proposition depends on the metaphysical, or theoretical frame, or the instrumental method, or the context in which the proposition is expressed, or on the person, groups, or culture who interpret the proposition.
Methodological relativism and philosophical relativism can exist independently from one another, but most anthropologists base their methodological relativism on that of the philosophical variety.
Descriptive versus normative relativism
The concept of relativism also has importance both for philosophers and for anthropologists in another way. In general, anthropologists engage in descriptive relativism ("how things are" or "how things seem"), whereas philosophers engage in normative relativism ("how things ought to be"), although there is some overlap (for example, descriptive relativism can pertain to concepts, normative relativism to truth).
Descriptive relativism assumes that certain cultural groups have different modes of thought, standards of reasoning, and so forth, and it is the anthropologist's task to describe, but not to evaluate the validity of these principles and practices of a cultural group. It is possible for an anthropologist in his or her fieldwork to be a descriptive relativist about some things that typically concern the philosopher (e.g., ethical principles) but not about others (e.g., logical principles). However, the descriptive relativist's empirical claims about epistemic principles, moral ideals and the like are often countered by anthropological arguments that such things are universal, and much of the recent literature on these matters is explicitly concerned with the extent of, and evidence for, cultural or moral or linguistic or human universals.
The fact that the various species of descriptive relativism are empirical claims may tempt the philosopher to conclude that they are of little philosophical interest, but there are several reasons why this is not so. First, some philosophers, notably Kant, argue that certain sorts of cognitive differences between human beings (or even all rational beings) are impossible, so such differences could never be found to obtain in fact, an argument that places a priori limits on what empirical inquiry could discover and on what versions of descriptive relativism could be true. Second, claims about actual differences between groups play a central role in some arguments for normative relativism (for example, arguments for normative ethical relativism often begin with claims that different groups in fact have different moral codes or ideals). Finally, the anthropologist's descriptive account of relativism helps to separate the fixed aspects of human nature from those that can vary, and so a descriptive claim that some important aspect of experience or thought does (or does not) vary across groups of human beings tells us something important about human nature and the human condition.
Normative relativism concerns normative or evaluative claims that modes of thought, standards of reasoning, or the like are only right or wrong relative to a framework. 'Normative' is meant in a general sense, applying to a wide range of views; in the case of beliefs, for example, normative correctness equals truth. This does not mean, of course, that framework-relative correctness or truth is always clear, the first challenge being to explain what it amounts to in any given case (e.g., with respect to concepts, truth, epistemic norms). Normative relativism (say, in regard to normative ethical relativism) therefore implies that things (say, ethical claims) are not simply true in themselves, but only have truth values relative to broader frameworks (say, moral codes). (Many normative ethical relativist arguments run from premises about ethics to conclusions that assert the relativity of truth values, bypassing general claims about the nature of truth, but it is often more illuminating to consider the type of relativism under question directly.)
Legal relativism
In English common law, two (perhaps three) separate standards of proof are recognized:
proof based on the balance of probabilities is the lesser standard, used in civil litigation; such cases mostly concern money or some other penalty that is reasonably reversible should further and better evidence emerge.
proof beyond reasonable doubt is used in criminal law cases where an accused's right to personal freedom or survival is in question, because such punishment is not reasonably reversible.
Absolute truth is sometimes held to be so complex as to be capable of being fully understood only by the omniscient.
Related and contrasting positions
Relationism is the theory that there are only relations between individual entities, and no intrinsic properties. Despite the similarity in name, it is held by some to be a position distinct from relativism—for instance, because "statements about relational properties [...] assert an absolute truth about things in the world".
On the other hand, others wish to equate relativism, relationism, and even relativity, which is a precise theory of relationships between physical objects. Nevertheless, "This confluence of relativity theory with relativism became a strong contributing factor in the increasing prominence of relativism".
Whereas previous investigations of science sought only sociological or psychological explanations of failed scientific theories or of pathological science, the 'strong programme' in the sociology of scientific knowledge is more relativistic, assessing scientific truth and falsehood equally in their historic and cultural context.
Criticisms
A common argument against relativism suggests that it inherently refutes itself: the statement "all is relative" classes either as a relative statement or as an absolute one. If it is relative, then this statement does not rule out absolutes. If the statement is absolute, on the other hand, then it provides an example of an absolute statement, proving that not all truths are relative. However, this argument against relativism only applies to relativism that positions truth as relative, i.e. epistemological or truth-value relativism. More specifically, it is only extreme forms of epistemological relativism that can come in for this criticism, as there are many epistemological relativists who posit that some aspects of what is regarded as factually "true" are not universal, yet still accept that other universal truths exist (e.g. gas laws or moral laws).
Another argument against relativism posits a Natural Law. Simply put, the physical universe works under basic principles: the "Laws of Nature". Some contend that a natural Moral Law may also exist, as argued for example by Immanuel Kant in the Critique of Practical Reason and Richard Dawkins in The God Delusion (2006), and as addressed by C. S. Lewis in Mere Christianity (1952). Dawkins said, "I think we face an equal but much more sinister challenge from the left, in the shape of cultural relativism - the view that scientific truth is only one kind of truth and it is not to be especially privileged".
Philosopher Hilary Putnam, among others, states that some forms of relativism make it impossible to believe one is in error. If there is no truth beyond an individual's belief that something is true, then an individual cannot hold their own beliefs to be false or mistaken. A related criticism is that relativizing truth to individuals destroys the distinction between truth and belief.
Views
Philosophical
Ancient
Sophism
Sophists are considered the founding fathers of relativism in Western philosophy. Elements of relativism emerged among the Sophists in the 5th century BC. Notably, it was Protagoras who coined the phrase, "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are not." The thinking of the Sophists is mainly known through their opponent, Plato. In a paraphrase from Plato's dialogue Theaetetus, Protagoras said: "What is true for you is true for you, and what is true for me is true for me."
Modern
Bernard Crick
Bernard Crick, a British political scientist and advocate of relativism, suggested in In Defence of Politics (1962) that moral conflict between people is inevitable. He thought that only ethics can resolve such conflict, and when that occurs in public it results in politics. Accordingly, Crick saw the process of dispute resolution, harms reduction, mediation or peacemaking as central to all of moral philosophy. He became an important influence on feminists and later on the Greens.
Paul Feyerabend
Philosopher of science Paul Feyerabend is often considered to be a relativist, although he denied being one.
Feyerabend argued that modern science suffers from being methodologically monistic (the belief that only a single methodology can produce scientific progress). Feyerabend summarises his case in Against Method with the phrase "anything goes".
In an aphorism [Feyerabend] often repeated, "potentially every culture is all cultures". This is intended to convey that world views are not hermetically closed, since their leading concepts have an "ambiguity" - better, an open-endedness - which enables people from other cultures to engage with them. [...] It follows that relativism, understood as the doctrine that truth is relative to closed systems, can get no purchase. [...] For Feyerabend, both hermetic relativism and its absolutist rival [realism] serve, in their different ways, to "devalue human existence". The former encourages that unsavoury brand of political correctness which takes the refusal to criticise "other cultures" to the extreme of condoning murderous dictatorship and barbaric practices. The latter, especially in its favoured contemporary form of "scientific realism", with the excessive prestige it affords to the abstractions of "the monster 'science'", is in bed with a politics which likewise disdains variety, richness and everyday individuality - a politics which likewise "hides" its norms behind allegedly neutral facts, "blunts choices and imposes laws".
Thomas Kuhn
Thomas Kuhn's philosophy of science, as expressed in The Structure of Scientific Revolutions, is often interpreted as relativistic. He claimed that, as well as progressing steadily and incrementally ("normal science"), science undergoes periodic revolutions or "paradigm shifts", leaving scientists working in different paradigms with difficulty in even communicating. Thus the truth of a claim, or the existence of a posited entity, is relative to the paradigm employed. However, this reading does not force relativism on Kuhn: each paradigm presupposes and builds on its predecessors through history, so scientific development retains a fundamental, incremental, and cumulative structure that is not itself paradigm-relative.
From these remarks, one thing is however certain: Kuhn is not saying that incommensurable theories cannot be compared - what they can't be is compared in terms of a system of common measure. He very plainly says that they can be compared, and he reiterates this repeatedly in later work, in a (mostly in vain) effort to avert the crude and sometimes catastrophic misinterpretations he suffered from mainstream philosophers and post-modern relativists alike.
But Kuhn rejected the accusation of being a relativist later in his postscript:
scientific development is ... a unidirectional and irreversible process. Later scientific theories are better than earlier ones for solving puzzles ... That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress.
Some have argued that one can also read Kuhn's work as essentially positivist in its ontology: the revolutions he posits are epistemological, lurching toward a presumably 'better' understanding of an objective reality through the lens presented by the new paradigm. However, a number of passages in Structure do indeed appear to be distinctly relativist, and to directly challenge the notion of an objective reality and the ability of science to progress towards an ever-greater grasp of it, particularly through the process of paradigm change.
In the sciences there need not be progress of another sort. We may, to be more precise, have to relinquish the notion, explicit or implicit, that changes of paradigm carry scientists and those who learn from them closer and closer to the truth.
We are all deeply accustomed to seeing science as the one enterprise that draws constantly nearer to some goal set by nature in advance. But need there be any such goal? Can we not account for both science's existence and its success in terms of evolution from the community's state of knowledge at any given time? Does it really help to imagine that there is some one full, objective, true account of nature and that the proper measure of scientific achievement is the extent to which it brings us closer to that ultimate goal?
George Lakoff and Mark Johnson
George Lakoff and Mark Johnson define relativism in Metaphors We Live By as the rejection of both subjectivism and metaphysical objectivism in order to focus on the relationship between them, i.e. the metaphor by which we relate our current experience to our previous experience. In particular, Lakoff and Johnson characterize "objectivism" as a "straw man", and, to a lesser degree, criticize the views of Karl Popper, Kant and Aristotle.
Robert Nozick
In his book Invariances, Robert Nozick expresses a complex set of theories about the absolute and the relative. He thinks the absolute/relative distinction should be recast in terms of an invariant/variant distinction, where there are many things a proposition can be invariant with regard to or vary with. He thinks it is coherent for truth to be relative, and speculates that it might vary with time. He thinks necessity is an unobtainable notion, but can be approximated by robust invariance across a variety of conditions—although we can never identify a proposition that is invariant with regard to everything. Finally, he is not particularly warm to one of the most famous forms of relativism, moral relativism, preferring an evolutionary account.
Joseph Margolis
Joseph Margolis advocates a view he calls "robust relativism" and defends it in his books Historied Thought, Constructed World, Chapter 4 (California, 1995) and The Truth about Relativism (Blackwell, 1991). He opens his account by stating that our logics should depend on what we take to be the nature of the sphere to which we wish to apply our logics. Holding that there can be no distinctions which are not "privileged" between the alethic, the ontic, and the epistemic, he maintains that a many-valued logic might be the most apt for aesthetics or history, since in these practices we are loath to hold to a simple binary logic; and he also holds that many-valued logic is relativistic. (This is perhaps an unusual definition of "relativistic". Compare with his comments on "relationism".) To say that "true" and "false" are mutually exclusive and exhaustive judgements on Hamlet, for instance, really does seem absurd. A many-valued logic, with its values "apt", "reasonable", "likely", and so on, seems intuitively more applicable to interpreting Hamlet. Where apparent contradictions arise between such interpretations, we might call the interpretations "incongruent", rather than dubbing either of them "false", because using many-valued logic implies that a measured value is a mixture of two extreme possibilities. Using fuzzy logic, a subset of many-valued logic, various interpretations can be represented by membership in more than one possible truth set simultaneously. Fuzzy logic is therefore probably the best mathematical structure for understanding "robust relativism" and has been interpreted by Bart Kosko as philosophically related to Zen Buddhism.
It was Aristotle who held that relativism implies that we should, sticking with appearances only, end up contradicting ourselves somewhere if we could apply all attributes to all ousiai (beings). Aristotle, however, made non-contradiction dependent upon his essentialism. If his essentialism is false, then so too is his ground for disallowing relativism. (Subsequent philosophers have found other reasons for supporting the principle of non-contradiction.)
Beginning with Protagoras and invoking Charles Sanders Peirce, Margolis shows that the historic struggle to discredit relativism is an attempt to impose an unexamined belief in the world's essentially rigid rule-like nature. Plato and Aristotle merely attacked "relationalism" (the doctrine of true-for-l or true-for-k, and the like, where l and k are different speakers or different worlds) or something similar; most philosophers would call this position "relativism". For Margolis, "true" means true; that is, the alethic use of "true" remains untouched. However, in real-world contexts, and context is ubiquitous in the real world, we must apply truth values. Here, in epistemic terms, we might tout court retire "true" as an evaluation and keep "false". The rest of our value-judgements could be graded from "extremely plausible" down to "false". Judgements which on a bivalent logic would be incompatible or contradictory are further seen as "incongruent", although one may well have more weight than the other. In short, relativistic logic is not, or need not be, the bugbear it is often presented to be. It may simply be the best type of logic to apply to certain very uncertain spheres of real experience in the world (although some sort of logic needs to be applied in order to make that judgement). Those who swear by bivalent logic might simply be the ultimate keepers of the great fear of the flux.
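The graded, many-valued scheme described above can be sketched in code. This is only an illustrative sketch: the grade names, numeric degrees, and threshold are hypothetical assumptions, not part of Margolis's account.

```python
# Illustrative sketch of a many-valued judgement scheme; the grade
# labels, numbers, and threshold are assumptions for demonstration.

# Ordered truth grades from "false" up to "extremely plausible".
GRADES = ["false", "unlikely", "likely", "reasonable", "apt", "extremely plausible"]

def grade(score: float) -> str:
    """Map a degree of support in [0, 1] to a discrete grade."""
    index = min(int(score * len(GRADES)), len(GRADES) - 1)
    return GRADES[index]

# Two readings of Hamlet with fuzzy membership in more than one
# "truth set" at once (the numbers are purely illustrative).
interpretations = {
    "Hamlet delays out of melancholy": 0.7,
    "Hamlet delays out of political caution": 0.55,
}

def incongruent(a: float, b: float, threshold: float = 0.5) -> bool:
    """Call two conflicting readings 'incongruent' rather than 'false'
    when each is supported above the threshold."""
    return a > threshold and b > threshold

a, b = interpretations.values()
print(grade(a))           # "apt"
print(grade(b))           # "reasonable"
print(incongruent(a, b))  # True: neither reading is dubbed false
```

On a bivalent logic the two readings would simply contradict each other; here each keeps a graded standing, which is the point of the many-valued approach.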
Richard Rorty
Philosopher Richard Rorty has a somewhat paradoxical role in the debate over relativism: he is criticized for his relativistic views by many commentators, but has always denied that relativism applies to much of anyone, the position being nothing more than a Platonic scarecrow. Rorty claims, rather, that he is a pragmatist, and that to construe pragmatism as relativism is to beg the question.
'"Relativism" is the traditional epithet applied to pragmatism by realists'
'"Relativism" is the view that every belief on a certain topic, or perhaps about any topic, is as good as every other. No one holds this view. Except for the occasional cooperative freshman, one cannot find anybody who says that two incompatible opinions on an important topic are equally good. The philosophers who get called 'relativists' are those who say that the grounds for choosing between such opinions are less algorithmic than had been thought.'
'In short, my strategy for escaping the self-referential difficulties into which "the Relativist" keeps getting himself is to move everything over from epistemology and metaphysics into cultural politics, from claims to knowledge and appeals to self-evidence to suggestions about what we should try.'
Rorty takes a deflationary attitude to truth, believing there is nothing of interest to be said about truth in general, including the contention that it is generally subjective. He also argues that the notion of warrant or justification can do most of the work traditionally assigned to the concept of truth, and that justification is relative; justification is justification to an audience, for Rorty.
In Contingency, Irony, and Solidarity he argues that the debate between so-called relativists and so-called objectivists is beside the point because they do not have enough premises in common for either side to prove anything to the other.
Nalin de Silva
In his book Mage Lokaya (My World) (1986), Nalin de Silva criticized the basis of the established Western system of knowledge and its propagation, which he refers to as "domination throughout the world". He explained in this book that a mind-independent reality is impossible and that knowledge is not found but constructed. Further, he introduced and developed the concept of "constructive relativism" as the basis on which knowledge is constructed relative to the sense organs, culture, and the mind, completely based on avidya.
Colin Murray Turbayne
In his final book Metaphors for the Mind: The Creative Mind and Its Origins (1991), Colin Murray Turbayne joins the debate about relativism and realism by providing an analysis of the manner in which Platonic metaphors which were first presented in the procreation model of the Timaeus dialogue have evolved over time to influence the philosophical works of both George Berkeley and Immanuel Kant. In addition, he illustrates the manner in which these ancient Greek metaphors have subsequently evolved to impact the development of the theories of "substance" and "attribute", which in turn have dominated the development of human thought and language in the 20th century.
In his The Myth of Metaphor (1962) Turbayne argues that it is perfectly possible to transcend the limitations which are inherent in such metaphors, including those incorporated within the framework of classical "objective" mechanistic Newtonian cosmology and scientific materialism in general. In Turbayne's view, one can strive to embrace a more satisfactory epistemology by first acknowledging the limitations imposed by such metaphorical systems. This can readily be accomplished by restoring Plato's metaphorical model to its original state in which both "male" and "female" aspects of the mind work in concert within the context of a harmonious balance during the process of creation.
Postmodernism
The term "relativism" often comes up in debates over postmodernism, poststructuralism and phenomenology. Critics of these perspectives often identify advocates with the label "relativism". For example, the Sapir–Whorf hypothesis is often considered a relativist view because it posits that linguistic categories and structures shape the way people view the world. Stanley Fish has defended postmodernism and relativism.
These perspectives do not strictly count as relativist in the philosophical sense, because they express agnosticism about the nature of reality and make epistemological rather than ontological claims. Nevertheless, the term is useful to differentiate them from realists who believe that the purpose of philosophy, science, or literary critique is to locate externally true meanings. Important philosophers and theorists such as Michel Foucault and Max Stirner, and political movements such as post-anarchism or post-Marxism, can also be considered relativist in this sense, though a better term might be "social constructivist".
The spread and popularity of this kind of "soft" relativism varies between academic disciplines. It has wide support in anthropology and has a majority following in cultural studies. It also has advocates in political theory and political science, sociology, and continental philosophy (as distinct from Anglo-American analytical philosophy). It has inspired empirical studies of the social construction of meaning such as those associated with labelling theory, which defenders can point to as evidence of the validity of their theories (albeit risking accusations of performative contradiction in the process). Advocates of this kind of relativism often also claim that recent developments in the natural sciences, such as Heisenberg's uncertainty principle, quantum mechanics, chaos theory and complexity theory show that science is now becoming relativistic. However, many scientists who use these methods continue to identify as realist or post-positivist, and some sharply criticize the association.
Religious
Buddhism
Madhyamaka Buddhism, which forms the basis for many Mahayana Buddhist schools, was founded by Nāgārjuna, who taught the idea of relativity. In the Ratnāvalī, he gives the example that shortness exists only in relation to the idea of length. The determination of a thing or object is only possible in relation to other things or objects, especially by way of contrast. He held that the relationship between the ideas of "short" and "long" is not due to intrinsic nature (svabhāva). This idea is also found in the Pali Nikāyas and Chinese Āgamas, in which the idea of relativity is expressed similarly: "That which is the element of light ... is seen to exist on account of [in relation to] darkness; that which is the element of good is seen to exist on account of bad; that which is the element of space is seen to exist on account of form."
Madhyamaka Buddhism discerns two levels of truth: relative and ultimate. The two truths doctrine states that there is a relative or conventional, common-sense truth, which describes our daily experience of a concrete world, and an ultimate truth, which describes the ultimate reality as sunyata, empty of concrete and inherent characteristics. Conventional truth may be understood, in contrast, as "obscurative truth" or "that which obscures the true nature". It is constituted by the appearances of mistaken awareness. Conventional truth would be the appearance that includes a duality of apprehender and apprehended, and objects perceived within that. Ultimate truth is the phenomenal world free from the duality of apprehender and apprehended.
Catholicism
The Catholic Church, especially under John Paul II and Pope Benedict XVI, has identified relativism as one of the most significant problems for faith and morals today.
According to the Church and to some theologians, relativism, as a denial of absolute truth, leads to moral license and a denial of the possibility of sin and of God. Whether moral or epistemological, relativism constitutes a denial of the capacity of the human mind and reason to arrive at truth. Truth, according to Catholic theologians and philosophers (following Aristotle), consists of adequatio rei et intellectus, the correspondence of the mind and reality. Another way of putting it is that the mind has the same form as reality. This means that when the form of the computer in front of someone (the type, color, shape, capacity, etc.) is also the form that is in their mind, then what they know is true because their mind corresponds to objective reality.
The denial of an absolute reference, of an axis mundi, denies God, who equates to Absolute Truth, according to these Christian theologians. They link relativism to secularism, an obstruction of religion in human life.
Leo XIII
Pope Leo XIII (1810–1903) was the first known Pope to use the word "relativism", in his encyclical Humanum genus (1884). Leo condemned Freemasonry and claimed that its philosophical and political system was largely based on relativism.
John Paul II
John Paul II wrote in Veritatis Splendor
As is immediately evident, the crisis of truth is not unconnected with this development. Once the idea of a universal truth about the good, knowable by human reason, is lost, inevitably the notion of conscience also changes. Conscience is no longer considered in its primordial reality as an act of a person's intelligence, the function of which is to apply the universal knowledge of the good in a specific situation and thus to express a judgment about the right conduct to be chosen here and now. Instead, there is a tendency to grant to the individual conscience the prerogative of independently determining the criteria of good and evil and then acting accordingly. Such an outlook is quite congenial to an individualist ethic, wherein each individual is faced with his own truth, different from the truth of others. Taken to its extreme consequences, this individualism leads to a denial of the very idea of human nature.
In Evangelium Vitae (The Gospel of Life), he says:
Freedom negates and destroys itself, and becomes a factor leading to the destruction of others, when it no longer recognizes and respects its essential link with the truth. When freedom, out of a desire to emancipate itself from all forms of tradition and authority, shuts out even the most obvious evidence of an objective and universal truth, which is the foundation of personal and social life, then the person ends up by no longer taking as the sole and indisputable point of reference for his own choices the truth about good and evil, but only his subjective and changeable opinion or, indeed, his selfish interest and whim.
Benedict XVI
In April 2005, in his homily during Mass prior to the conclave which would elect him as Pope, then Cardinal Joseph Ratzinger talked about the world "moving towards a dictatorship of relativism":
How many winds of doctrine we have known in recent decades, how many ideological currents, how many ways of thinking. The small boat of thought of many Christians has often been tossed about by these waves – thrown from one extreme to the other: from Marxism to liberalism, even to libertinism; from collectivism to radical individualism; from atheism to a vague religious mysticism; from agnosticism to syncretism, and so forth. Every day new sects are created and what Saint Paul says about human trickery comes true, with cunning which tries to draw those into error (cf Ephesians 4, 14). Having a clear Faith, based on the Creed of the Church, is often labeled today as a fundamentalism. Whereas, relativism, which is letting oneself be tossed and "swept along by every wind of teaching", looks like the only attitude acceptable to today's standards. We are moving towards a dictatorship of relativism which does not recognize anything as certain and which has as its highest goal one's own ego and one's own desires. However, we have a different goal: the Son of God, true man. He is the measure of true humanism. Being an "Adult" means having a faith which does not follow the waves of today's fashions or the latest novelties. A faith which is deeply rooted in friendship with Christ is adult and mature. It is this friendship which opens us up to all that is good and gives us the knowledge to judge true from false, and deceit from truth.
On June 6, 2005, Pope Benedict XVI told educators:
Today, a particularly insidious obstacle to the task of education is the massive presence in our society and culture of that relativism which, recognizing nothing as definitive, leaves as the ultimate criterion only the self with its desires. And under the semblance of freedom it becomes a prison for each one, for it separates people from one another, locking each person into his or her own 'ego'.
Then during World Youth Day in August 2005, he also traced to relativism the problems produced by the communist and sexual revolutions, and offered a counter-argument.
In the last century we experienced revolutions with a common programme–expecting nothing more from God, they assumed total responsibility for the cause of the world in order to change it. And this, as we saw, meant that a human and partial point of view was always taken as an absolute guiding principle. Absolutizing what is not absolute but relative is called totalitarianism. It does not liberate man, but takes away his dignity and enslaves him. It is not ideologies that save the world, but only a return to the living God, our Creator, the Guarantor of our freedom, the Guarantor of what is really good and true.
Pope Francis
Pope Francis refers in Evangelii gaudium to two forms of relativism, "doctrinal relativism" and a "practical relativism" typical of "our age". The latter is allied to "widespread indifference" to systems of belief.
Jainism
Mahavira (599-527 BC), the 24th Tirthankara of Jainism, developed a philosophy known as Anekantavada. John Koller describes anekāntavāda as "epistemological respect for view of others" about the nature of existence, whether it is "inherently enduring or constantly changing", but "not relativism; it does not mean conceding that all arguments and all views are equal".
Sikhism
In Sikhism the Gurus (spiritual teachers) have propagated the message of "many paths" leading to the one God and ultimate salvation for all souls who tread on the path of righteousness. They have supported the view that proponents of all faiths can, by doing good and virtuous deeds and by remembering the Lord, certainly achieve salvation. The students of the Sikh faith are told to accept all leading faiths as possible vehicles for attaining spiritual enlightenment, provided the faithful study, ponder and practice the teachings of their prophets and leaders. The holy book of the Sikhs, the Sri Guru Granth Sahib, says: "Do not say that the Vedas, the Bible and the Koran are false. Those who do not contemplate them are false." (Guru Granth Sahib, page 1350). It later states: "The seconds, minutes, and hours, days, weeks and months, and the various seasons originate from the one Sun; O Nanak, in just the same way, the many forms originate from the Creator." (Guru Granth Sahib, pages 12-13).
See also
References
Bibliography
Maria Baghramian, Relativism, London: Routledge, 2004.
Gad Barzilai, Communities and Law: Politics and Cultures of Legal Identities, Ann Arbor: University of Michigan Press, 2003.
Andrew Lionel Blais, On the Plurality of Actual Worlds, University of Massachusetts Press, 1997.
Benjamin Brown, Thoughts and Ways of Thinking: Source Theory and Its Applications, London: Ubiquity Press, 2017.
Ernest Gellner, Relativism and the Social Sciences, Cambridge: Cambridge University Press, 1985.
Rom Harré and Michael Krausz, Varieties of Relativism, Oxford; New York: Blackwell, 1996.
Robert H. Knight, The Age of Consent: The Rise of Relativism and the Corruption of Popular Culture, Dallas: Spence Publishing Co., 1998.
Michael Krausz, ed., Relativism: A Contemporary Anthology, New York: Columbia University Press, 2010.
Martin Hollis and Steven Lukes, Rationality and Relativism, Oxford: Basil Blackwell, 1982.
Joseph Margolis, Michael Krausz, and R. M. Burian, eds., Rationality, Relativism, and the Human Sciences, Dordrecht; Boston: M. Nijhoff, 1986.
Jack W. Meiland and Michael Krausz, eds., Relativism, Cognitive and Moral, Notre Dame: University of Notre Dame Press, 1982.
Markus Seidel, Epistemic Relativism: A Constructive Critique, Basingstoke: Palgrave Macmillan, 2014.
External links
Westacott, E. Relativism, 2005, Internet Encyclopedia of Philosophy
Westacott, E. Cognitive Relativism, 2006, Internet Encyclopedia of Philosophy
Professor Ronald Jones on relativism
What 'Being Relative' Means, a passage from Pierre Lecomte du Nouy's "Human Destiny" (1947)
BBC Radio 4 series "In Our Time", on Relativism - the battle against transcendent knowledge, 19 January 2006
Against Relativism, by Christopher Norris
The Catholic Encyclopedia
Harvey Siegel reviews Paul Boghossian's Fear of Knowledge
Ontology
Ontology is the philosophical study of being. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines what all entities have in common and how they are divided into fundamental classes, known as categories. An influential distinction is between particular and universal entities. Particulars are unique, non-repeatable entities, like the person Socrates. Universals are general, repeatable entities, like the color green. Another contrast is between concrete objects existing in space and time, like a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality, employing categories such as substance, property, relation, state of affairs, and event.
Ontologists disagree about which entities exist on the most basic level. Platonic realism asserts that universals have objective existence. Conceptualism says that universals only exist in the mind while nominalism denies their existence. There are similar disputes about mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism says that, fundamentally, there is only matter while dualism asserts that mind and matter are independent principles. According to some ontologists, there are no objective answers to ontological questions but only perspectives shaped by different linguistic practices.
Ontology uses diverse methods of inquiry. They include the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Applied ontology employs ontological theories and principles to study entities belonging to a specific area. It is of particular relevance to information and computer science, which develop conceptual frameworks of limited domains. These frameworks are used to store information in a structured way, such as a college database tracking academic activities. Ontology is closely related to metaphysics and relevant to the fields of logic, theology, and anthropology.
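The college-database example above can be sketched as a tiny conceptual framework in code. The class names, fields, and sample data below are hypothetical illustrations chosen for this sketch, not a standard ontology vocabulary.

```python
# A minimal sketch of an applied ontology for a hypothetical college
# database; the classes and the enrollment relation are illustrative
# assumptions, not an established schema.
from dataclasses import dataclass, field

@dataclass
class Student:
    """A particular entity in the 'Student' category."""
    name: str

@dataclass
class Course:
    """A particular entity in the 'Course' category, with an
    enrollment relation linking it to students."""
    title: str
    enrolled: list = field(default_factory=list)

    def enroll(self, student: Student) -> None:
        # Record the relation "student is enrolled in course".
        self.enrolled.append(student)

# Populate the tiny domain with sample (made-up) data.
alice = Student("Alice")
logic = Course("Introduction to Logic")
logic.enroll(alice)

print([s.name for s in logic.enrolled])  # ['Alice']
```

The point of such a framework is that the categories (Student, Course) and relations (enrollment) are fixed in advance, so information about the domain can be stored and queried in a structured way.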
The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name.
Definition
Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects. In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean a conceptual scheme or inventory of a particular domain.
Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. According to a traditionally influential characterization, metaphysics is the study of fundamental reality in the widest sense while ontology is the subdiscipline of metaphysics that restricts itself to the most general features of reality. This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms.
The word ontology has its roots in the ancient Greek terms ὄν (on, meaning "being") and λογία (logia, meaning "study of"), literally "the study of being". The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century.
Basic concepts
Being
Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing the whole of reality and every entity within it. In its widest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics of this view argue that an entity without being cannot have any properties, meaning that being cannot be a property since properties presuppose being. A different suggestion says that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with a causal influence truly exist. According to a controversial proposal by philosopher George Berkeley, all existence is mental, expressed in his slogan "to be is to be perceived".
Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent and is distinguished from becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what merely appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like.
Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or not with no intermediary states or degrees.
The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing.
Particulars and universals
A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars. Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain.
Universals can take the form of properties or relations. Properties express what entities are like. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle while being red is an accidental property. Relations are ways in which two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations.
Substances play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red.
States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts. Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts.
Events are particular entities that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events.
Concrete and abstract objects
Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes. It is controversial whether or in what sense abstract objects exist and how people can know about them.
Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. According to a different view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it.
Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are one type of abstract object, existing outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects, making it difficult to assess the ontological status of intentional objects.
Other concepts
Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and the facts it explains.
An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers.
Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds.
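The truth conditions of possible world semantics can be illustrated with a small computational sketch. This is a toy model under simplifying assumptions, not a standard tool: each world is represented as the set of atomic sentences true in it, and the example sentences are taken from the paragraph above.

```python
# Toy model of possible world semantics: each world is the set of
# atomic sentences that are true in it (an illustrative sketch only).
worlds = [
    {"Murakami was born in 1949", "Doha is the capital of Qatar"},  # the actual world
    {"Murakami was born in 1950", "Doha is the capital of Qatar"},  # an alternative world
]

def possibly(sentence, worlds):
    """A sentence is possibly true if it is true in at least one possible world."""
    return any(sentence in world for world in worlds)

def necessarily(sentence, worlds):
    """A sentence is necessarily true if it is true in all possible worlds."""
    return all(sentence in world for world in worlds)

print(possibly("Murakami was born in 1950", worlds))        # True
print(necessarily("Doha is the capital of Qatar", worlds))  # True
print(necessarily("Murakami was born in 1949", worlds))     # False
```

The last line shows the contrast between actuality and necessity: the sentence is true in the actual world but not in every possible world.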
In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfectly identical twins. This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year".
Branches
There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts.
Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases.
Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner. Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology.
Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization.
Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion.
Metaontology studies the underlying concepts, assumptions, and methods of ontology. Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being.
Schools of thought
Realism and anti-realism
The term realism is used for various theories that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework.
In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects.
Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation.
Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects.
Scientific realists say that the scientific description of the world is an accurate representation of reality. It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments.
Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism.
By number of categories
Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything.
The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality. Pluralism is more commonly accepted and says that several distinct entities exist.
By fundamental categories
The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties.
Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. This stuff may take various forms and is often conceived as infinitely divisible. According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle.
Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level. Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts.
In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe.
Others
The dispute between constituent and relational ontologies concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties.
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists.
The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves.
Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things.
Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception.
Methods
Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology.
Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential. The transcendental method begins with a simple observation that a certain entity exists. In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist.
Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness.
Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them.
Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's Razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue.
In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.
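The steps described above can be sketched in miniature: delimiting a domain, defining concepts and the relations between them, and a review phase that validates test data. The class names, relation names, and sample data below are hypothetical, and real ontology engineering would typically use a formal language such as OWL rather than plain Python structures.

```python
# Miniature domain ontology for an organization (illustrative sketch):
# concepts are declared as classes, relations connect instances of classes.
ontology = {
    "concepts": {"Person", "Company"},
    "relations": {"worksFor": ("Person", "Company")},  # (domain, range) of each relation
}

# Test data for the review phase (hypothetical instances and assertions).
instances = {"alice": "Person", "acme": "Company"}
assertions = [("worksFor", "alice", "acme")]

def validate(ontology, instances, assertions):
    """Review phase: check each assertion against the declared domain and range."""
    for relation, subject, obj in assertions:
        domain_class, range_class = ontology["relations"][relation]
        if instances.get(subject) != domain_class or instances.get(obj) != range_class:
            return False
    return True

print(validate(ontology, instances, assertions))  # True: the test data fits the model
```

A malformed assertion, such as a company working for a person, would fail this check, which mirrors how formal definitions enable automatic validation of an engineered ontology.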
Related fields
Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier, which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being. Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality.
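The model-theoretic reading of the existential quantifier can be shown with a toy example. The domain and predicate extension here are hypothetical; the sketch simply checks whether some member of a finite domain falls under the predicate, which is how ∃x Dog(x) is evaluated in a finite model.

```python
# Evaluating the existential formula "∃x Dog(x)" over a finite domain
# (a minimal sketch of model-theoretic evaluation; names are illustrative).
domain = ["Rex", "Felix", "Tweety"]
dog = {"Rex"}  # the extension of the predicate Dog in this model

def exists(predicate_extension, domain):
    """∃x P(x) is true if at least one member of the domain falls under P."""
    return any(x in predicate_extension for x in domain)

print(exists(dog, domain))   # True: the model contains at least one dog
print(exists(set(), domain)) # False: an empty extension makes the formula false
```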
Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it. For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology.
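Integration through an upper ontology, as described above, can be sketched as a mapping from domain-specific categories to shared general ones. The database names, categories, and mapping below are hypothetical and are not drawn from Suggested Upper Merged Ontology or Basic Formal Ontology.

```python
# Sketch of integrating two databases with distinct domain ontologies by
# mapping their categories to a shared upper ontology (names illustrative).
upper_mapping = {
    "sales_db": {"Client": "Person", "Firm": "Organization"},
    "hr_db": {"Employee": "Person", "Employer": "Organization"},
}

def integrate(record, source):
    """Relabel a record's domain category with its upper-ontology counterpart."""
    category, data = record
    return (upper_mapping[source][category], data)

# Records from different databases become comparable after the mapping:
print(integrate(("Client", "Ana"), "sales_db"))  # ('Person', 'Ana')
print(integrate(("Employee", "Ana"), "hr_db"))   # ('Person', 'Ana')
```

After mapping, both records fall under the same general category, which is what makes exchange between the two databases possible.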
Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities.
The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various Indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature.
Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence. Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology.
History
The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many. Samkhya, the first orthodox school of Indian philosophy, formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school proposed a comprehensive system of categories. In ancient China, Laozi's (6th century BCE) Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being.
Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself.
The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor.
In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence. In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation. 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos.
René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds.
Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations.
At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual.
Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world. Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains.
Philosophy of education | The philosophy of education is the branch of applied philosophy that investigates the nature of education as well as its aims and problems. It also examines the concepts and presuppositions of education theories. It is an interdisciplinary field that draws inspiration from various disciplines both within and outside philosophy, like ethics, political philosophy, psychology, and sociology. Many of its theories focus specifically on education in schools but it also encompasses other forms of education. Its theories are often divided into descriptive theories, which provide a value-neutral description of what education is, and normative theories, which investigate how education should be practiced.
A great variety of topics is discussed in the philosophy of education. Some studies provide a conceptual analysis of the fundamental concepts of education. Others center around the aims or purpose of education, such as passing on knowledge and developing the abilities to reason, judge, and act well. An influential discussion concerning the epistemic aims of education is whether education should focus mainly on the transmission of true beliefs or rather on the abilities to reason and arrive at new knowledge. In this context, many theorists emphasize the importance of critical thinking in contrast to indoctrination. Another debate about the aims of education is whether the primary beneficiary is the student or the society to which the student belongs.
Many of the more specific discussions in the philosophy of education concern the contents of the curriculum. This involves the questions of whether, when, and in what detail a certain topic, like sex education or religion, should be taught. Other debates focus on the specific contents and methods used in moral, art, and science education. Some philosophers investigate the relation between education and power, often specifically regarding the power used by modern states to compel children to attend school. A different issue is the problem of the equality of education and factors threatening it, like discrimination and unequal distribution of wealth. Some philosophers of education promote a quantitative approach to educational research, which follows the example of the natural sciences by using wide experimental studies. Others prefer a qualitative approach, which is closer to the methodology of the social sciences and tends to give more prominence to individual case studies.
Various schools of philosophy have developed their own perspective on the main issues of education. Existentialists emphasize the role of authenticity while pragmatists give particular prominence to active learning and discovery. Feminists and postmodernists often try to uncover and challenge biases and forms of discrimination present in current educational practices. Other philosophical movements include perennialism, classical education, essentialism, critical pedagogy, and progressivism. The history of the philosophy of education started in ancient philosophy but only emerged as a systematic branch of philosophy in the latter half of the 20th century.
Definition
The philosophy of education is the branch of philosophy that examines the nature, aims, and problems of education. As the philosophical study of education, it investigates its topic in much the same way that other discipline-specific branches of philosophy, like the philosophy of science or the philosophy of law, study theirs. A central task for the philosophy of education is to make explicit the various fundamental assumptions and disagreements at work in its field and to evaluate the arguments raised for and against the different positions. The issue of education has a great many manifestations in various fields. Because of this, both the breadth and the influence of the philosophy of education are significant and wide-ranging, touching many other branches of philosophy, such as ethics, political philosophy, epistemology, metaphysics, and philosophy of mind. Its theories are often formulated from the perspective of these other philosophical disciplines. But due to its interdisciplinary nature, it also attracts contributions from scholars belonging to fields outside the domain of philosophy.
While there is wide agreement on the general topics discussed in the philosophy of education, it has proven difficult to give a precise definition of it. The philosophy of education belongs mainly to applied philosophy. According to some definitions, it can be characterized as an offshoot of ethics. But not everyone agrees with this characterization since the philosophy of education has a more theoretical side as well, which includes the examination of the fundamental concepts and theories of education as well as their philosophical implications. These two sides are sometimes referred to as the outward-looking and the inward-looking sides of the philosophy of education. Its topics can range from very general questions, like the nature of the knowledge worth teaching, to more specific issues, like how to teach art or whether public schools should implement standardized curricula and testing.
The problem of education was already an important topic in ancient philosophy and has remained so to the present day. But it only emerged as a distinct branch of philosophy in the latter half of the 20th century, when it became the subject of a systematic study and analysis. The term "education" can refer either to the process of educating or to the field of study investigating education as this process. This ambiguity is also reflected on the level of the philosophy of education, which encompasses the study of the philosophical presuppositions and issues both of education as a process and as a discipline. Many works in the philosophy of education focus explicitly or implicitly on the education happening in schools. But in its widest sense, education takes place in various other fields as well, such as at home, in libraries, in museums, or in the public media. Different types of education can be distinguished, such as formal and informal education or private and public education.
Subdivisions
Different subdivisions of the philosophy of education have been suggested. One categorization distinguishes between descriptive and normative issues. Descriptive theories aim to describe what education is and how to understand its related concepts. This also includes epistemological questions, which ask not whether a theory about education is true or false, but how one can arrive at the knowledge to answer such questions. Normative theories, on the other hand, try to give an account of how education should be practiced or what the right form of education is. Some normative theories are built on a wider ethical framework of what is right or good and then arrive at their educational normative theories by applying this framework to the practice of education. But the descriptive and the normative approaches are intertwined and cannot always be clearly separated since descriptive findings often directly imply various normative attitudes.
Another categorization divides topics in the philosophy of education into the nature and aims of education on the one hand, and the methods and circumstances of education on the other hand. The latter section may again be divided into concrete normative theories and the study of the conceptual and methodological presuppositions of these theories. Other classifications additionally include areas for topics such as the role of reasoning and morality as well as issues pertaining to social and political topics and the curriculum.
The theories within the philosophy of education can also be subdivided based on the school of philosophy they belong to. Various schools of philosophy, such as existentialism, pragmatism, Marxism, postmodernism, and feminism, have developed their own perspective on the main issues of education. They often include normative theories about how education should or should not be practiced and are in most cases controversial.
Another approach is to simply list all topics discussed in the philosophy of education. Among them are the issues and presuppositions concerning sex education, science education, aesthetic education, religious education, moral education, multicultural education, professional education, theories of teaching and learning, the measurement of learning, knowledge and its value, cultivating reason, epistemic and moral aims of education, authority, fallibilism, and fallibility.
Finally, yet another way that philosophy of education is often tacitly divided is in terms of western versus non-western and “global south” perspectives. For many generations, philosophy of education has maintained a relatively ethnocentric orientation, with little attention paid to ideas from outside Europe and North America, but this is starting to change in the 21st century due to decolonization and related movements.
Main topics
Fundamental concepts of education
The starting point of many philosophical inquiries into a field is the examination and clarification of the fundamental concepts used in this field, often in the form of conceptual analysis. This approach is particularly prominent in the analytic tradition. It aims to make ambiguities explicit and to uncover various implicit and potentially false assumptions associated with these terms.
Theorists in this field often emphasize the importance of this form of investigation since all subsequent work on more specific issues already has to assume at least implicitly what their central terms mean to demarcate their field. For example, in order to study what constitutes good education, one has to have a notion of what the term "education" means and how to achieve, measure, and evaluate it. Definitions of education can be divided into thin and thick definitions. Thin definitions are neutral and descriptive. They usually emphasize the role of the transmission of knowledge and understanding in education. Thick definitions include additional normative components, for example, by stating that the process in question has to have certain positive results to be called education. According to one thick definition, education means that the person educated has acquired knowledge and intellectual skills, values these factors, and has thus changed for the better. These characteristics can then be used to distinguish education from other closely related terms, such as "indoctrination". Other fundamental notions in the philosophy of education include the concepts of teaching, learning, student, schooling, and rearing.
Aims of education
A central question in the philosophy of education concerns the aims of education, i.e. the question of why people should be educated and what goals should be pursued in the process of education. This issue is highly relevant for evaluating educational practices and products by assessing how well they manage to realize these goals. There is a lot of disagreement and various theories have been proposed concerning the aims of education. Prominent suggestions include that education should foster knowledge, curiosity, creativity, rationality, and critical thinking while also promoting the tendency to think, feel, and act morally. The individual should thereby develop as a person, and achieve self-actualization by realizing their potential. Some theorists emphasize the cultivation of liberal ideals, such as freedom, autonomy, and open-mindedness, while others stress the importance of docility, obedience to authority, and ideological purity, sometimes also with a focus on piety and religious faith. Many suggestions concern the social domain, such as fostering a sense of community and solidarity and thus turning the individual into a productive member of society while protecting them from the potentially negative influences of society. The discussion of these positions and the arguments cited for and against them often include references to various disciplines in their justifications, such as ethics, psychology, anthropology, and sociology.
There is wide consensus concerning certain general aims of education, like that it should support all students, help them develop their ability to reason, and guide them in how to judge and act. But these general characteristics are usually too vague to be of much help and there are many disagreements about the more specific suggestions of what education should aim for. Some attempts have been made to provide an overarching framework of these different aims. According to one approach, education should at its core help the individual lead a good life. All the different more specific goals are aims of education to the extent that they serve this ultimate purpose. On this view, it may be argued that fostering rationality and autonomy in the students are aims of education to the extent that increased rationality and autonomy will result in the student leading a better life.
The different theories of the aims of education are sometimes divided into goods-based, skills-based, and character-based accounts. Goods-based accounts hold that the ultimate aim of education is to produce some form of epistemic good, such as truth, knowledge, and understanding. Skills-based accounts, on the other hand, see the development of certain skills, like rationality as well as critical and independent thinking as the goal of education. For character-based accounts, the character traits or virtues of the learner play the central role, often with an emphasis on moral and civic traits like kindness, justice, and honesty.
Epistemic
Many theories emphasize the epistemic aims of education. According to the epistemic approach, the central aim of education has to do with knowledge, for example, to pass on the knowledge accumulated by society from one generation to the next. This process may be seen both as the development of the student's mind as well as the transmission of a valuable heritage. Such an approach is sometimes rejected by pragmatists, who emphasize experimentation and critical thinking over the transmission of knowledge. Others have argued that this constitutes a false dichotomy: that the transmission of knowledge and the development of a rational and critical mind are intertwined aims of education that depend on and support each other. In this sense, education aims also at fostering the ability to acquire new knowledge. This includes both instilling true beliefs in the students as well as teaching the methods and forms of evidence responsible for verifying existing beliefs and arriving at new knowledge. It promotes the epistemic autonomy of students and may help them challenge unwarranted claims by epistemic authorities. In its widest sense, the epistemic approach includes various related goals, such as imparting true beliefs or knowledge to the students as well as teaching dispositions and abilities, such as rationality, critical thinking, understanding, and other intellectual virtues.
Critical thinking and indoctrination
Critical thinking is often cited as one of the central aims of education. There is no generally accepted definition of critical thinking. But there is wide agreement that it is reasonable, reflective, careful, and focused on determining what to believe or how to act. It has clarity and rationality as its standards and includes a metacognitive component that not only monitors the attempt to solve the problem at hand but also ensures that the reasoning complies with its own standards in the process. In this sense, education is not just about conveying many true beliefs to the students. Instead, the students' ability to arrive at conclusions by themselves and the disposition to question pre-existing beliefs should also be fostered, often with the goal of benefitting not just the student but society at large. But not everyone agrees with the positive role ascribed to critical thinking in education. Objections are often based on disagreements about what it means to reason well. Some critics argue that there is no universally correct form of reasoning. According to them, education should focus more on teaching subject-specific skills and less on imparting a universal method of thinking. Other objections focus on the allegation that critical thinking is not as neutral, universal, and presuppositionless as some of its proponents claim. On this view, it involves various implicit biases, like egocentrism or distanced objectivity, and culture-specific values arising from its roots in the philosophical movement of the European Enlightenment.
The problem of critical thinking is closely connected to that of indoctrination. Many theorists hold that indoctrination is in important ways different from education and should be avoided in education. But others contend that indoctrination should be part of education or even that there is no difference between the two. These different positions depend a lot on how "indoctrination" is to be defined. Most definitions of indoctrination agree that its goal is to get the student to accept and embrace certain beliefs. It has this in common with most forms of education but differs from it in other ways. According to one definition, the belief acquisition in indoctrination happens without regard for the evidential support of these beliefs, i.e. without presenting proper arguments and reasons for adopting them. According to another, the beliefs are instilled in such a way as to discourage the student from questioning or assessing the believed contents for themselves. In this sense, the goals of indoctrination are exactly opposite to other aims of education, such as rationality and critical thinking. Education, by contrast, tries not just to impart beliefs but also to make the students more open-minded and conscious of human fallibility. An intimately related issue is whether the aim of education is to mold the mind of the pupil or to liberate it by strengthening its capacity for critical and independent inquiry.
An important consequence of this debate concerns the problem of testimony, i.e. to what extent students should trust the claims of teachers and books. It has been argued that this issue depends a lot on the age and the intellectual development of the student. In the earlier stages of education, a high level of trust on the side of the students may be necessary. But the more their intellectual capacities develop, the more they should use them when trying to assess the plausibility of claims and the reasons for and against them. In this regard, it has been argued that, especially for young children, weaker forms of indoctrination may be necessary while they still lack the intellectual capacities to evaluate the reasons for and against certain claims and thus to critically assess them. In this sense, one can distinguish unavoidable or acceptable forms of indoctrination from their avoidable or unacceptable counterparts. But this distinction is not always affirmed and some theorists contend that all forms of indoctrination are bad or unacceptable.
Individual and society
A recurrent source of disagreement about the aims of education concerns the question of who is the primary beneficiary of education: the individual educated or the society having this individual as its member. In many cases, the interests of both are aligned. On the one hand, many new opportunities in life open to the individual through education, especially concerning their career. On the other hand, education makes it more likely that the person becomes a good, law-abiding, and productive member of society. But this issue becomes more problematic in cases where the interests of the individual and society conflict with each other. This poses the question of whether individual autonomy should take precedence over communal welfare. According to comprehensive liberals, for example, education should emphasize the self-directedness of the students. On this view, it is up to the student to choose their own path in life. The role of education is to provide them with the necessary resources but it does not direct the student with respect to what constitutes an ethically good path in life. This position is usually rejected by communitarians, who stress the importance of social cohesion by being part of the community and sharing a common good.
Curriculum
An important and controversial issue in the philosophy of education concerns the contents of the curriculum, i.e. the question of what should be taught to students. This includes both the selection of subjects to be taught and the consideration of arguments for and against the inclusion of a particular topic. This issue is intimately tied to the aims of education: one may argue that a certain subject should be included in the curriculum because it serves one of the aims of education.
While many positions about what subjects to include in the curriculum are controversial, some particular issues stand out where these controversies go beyond the academic discourse to a wide public discourse, like questions about sexual and religious education. Controversies in sex education involve both biological aspects, such as the functioning of sex organs, and social aspects, such as sexual practices and gender identities. Disagreements in this area concern which aspects are taught and in what detail as well as to which age groups these teachings should be directed. Debates on religious education include questions like whether religion should be taught as a distinct subject and, if so, whether it should be compulsory. Other questions include which religion or religions should be taught and to what degree religious views should influence other topics, such as ethics or sex education.
Another prominent topic in this field concerns the subject of moral education. This field is sometimes referred to as "educational ethics". Disagreements in this field concern which moral beliefs and values should be taught to the students. This way, many of the disagreements in moral philosophy are reflected in the field of moral education. Some theorists in the Kantian tradition emphasize the importance of moral reasoning and enabling children to become morally autonomous agents who can tell right from wrong. Theorists in the Aristotelian tradition, on the other hand, focus more on moral habituation through the development of virtues that concern perception, affect, and judgment in regard to moral situations. A related issue, heavily discussed in ancient philosophy, is the extent to which morality can be taught at all instead of just being an inborn disposition.
Various discussions also concern the role of art and aesthetics in public education. It has been argued that the creativity learned in these areas can be applied to various other fields and may thereby benefit the student in various ways. Aesthetic education is also said to have indirect effects on various other issues, such as shaping the student's sensibilities in the fields of morality and politics as well as heightening their awareness of self and others.
Some researchers reject the possibility of objectivity in general. They use this claim to argue against universal forms of education, which they see as hiding particular worldviews, beliefs, and interests under a false cover. This is sometimes utilized to advance an approach focused on more diversity, for example, by giving more prominence in education to the great variety of cultures, customs, languages, and lifestyles without giving preference to any of them.
Different approaches to solving these disputes are employed. In some cases, psychology in the field of child development, learning, and motivation can provide important general insights. More specific questions about the curriculum of a particular subject, such as mathematics, are often strongly influenced by the philosophy of this specific discipline, such as the philosophy of mathematics.
Power
The problem of power is another issue in the philosophy of education. Of specific interest on this topic is that the modern states compel children to attend school, so-called compulsory education. The children and their parents usually have few to no ways of opting out or changing the established curriculum. An important question in this respect is whether, and if so why, modern states are justified in using this form of power. For example, various liberationist movements belonging to the fields of deschooling and unschooling reject this power and argue that the children's welfare is best served in the absence of compulsory schooling in general. This is sometimes based on the idea that the best form of learning does not happen while studying but instead occurs as a side-effect while doing something else. This position is often rejected by pointing out that it is based on overly optimistic presuppositions about the children's natural and unguided development of rationality. While some objections focus on compulsory education in general, a less radical and more common criticism concerns specific compulsory topics in the curriculum, for example, in relation to sexuality or religion. Another contemporary debate in the United States concerns the practice of standardized testing: it has been argued that this discriminates against certain racial, cultural, or religious minorities since the standardized test may implicitly assume various presuppositions not shared by these minorities. Other issues in relation to power concern the authority and responsibility teachers have towards their students.
Postmodern theorists often see established educational practices as instruments of power used by elites in society to further their own interests. Important aspects in this regard are the unequal power relation between the state and its institutions in contrast to the individual as well as the control that can thus be employed due to the close connection between power and knowledge, specifically the knowledge passed on through education.
Equality
A recurrent demand on public education is that all students should be treated equally and in a fair manner. One reason for this demand is that education plays a central role for the child's path and prospects in life, which should not be limited by unfair or arbitrary external circumstances. But there are various disagreements about how this demand is best understood and whether it is applicable in all cases. An initial problem concerns what is meant by "equality". In the field of education, it is often understood as equality of opportunity. In this sense, the demand for equality implies that education should open the same opportunities to everyone. This means, among other things, that students from higher social classes should not enjoy a competitive advantage over others. One difficulty with this demand, when understood in a wide sense, is that there are many sources of educational inequality and it is not always desirable to eliminate all of them. For example, parents who are concerned with their young children's education may read them bedtime stories early on and thereby provide them with a certain advantage over other children who do not enjoy this privilege. But disallowing such practices to level the field would have serious negative side-effects. A weaker position on this issue does not demand full equality but holds instead that educational policies should ensure that certain factors, like race, native language, and disabilities, do not pose obstacles to the equality of opportunity.
A closely related topic is whether all students, both high and low performers, should be treated equally. According to some, more resources should be dedicated to low performers, to help them get to an average level, while others recommend a preferential treatment for high performers in order to help them fully develop their exceptional abilities and thereby benefit society at large. A similar problem is the issue of specialization. It concerns the question of whether all students should follow the same curriculum or to what extent they should specialize early on in specific fields according to their interests and skills.
Marxist critiques of the school systems in capitalist societies often focus on the inequality they cause by sorting students for different economic positions. While overtly this process happens based on individual effort and desert, they argue that this just masks and reinforces the underlying influence of the preexisting social class structure. This is sometimes integrated into a wider Marxist perspective on society which holds that education in capitalist societies plays the role of upholding this inequality and thereby reproduces the capitalist relations of production.
Other criticisms of the dominant paradigms in education are often voiced by feminist and postmodern theorists. They usually point to alleged biases and forms of discrimination present in current practices that should be eliminated. Feminists often hold that traditional education is overly man-oriented and thereby oppresses women in some form. This bias was present to severe degrees in earlier forms of education and a lot of progress has been made towards more gender-equal forms of education. Nonetheless, feminists often contend that certain problems still persist in contemporary education. Some argue, for example, that this manifests itself in the prominence given to cognitive development in education, which is said to be associated primarily with masculinity in contrast to a more feminine approach based on emotion and intuition. A related criticism holds that there is an overemphasis on abilities belonging to the public sphere, like reason and objectivity, in contrast to equally important characteristics belonging to the private sphere, like compassion and empathy.
Epistemology
The philosophy of education is also interested in the epistemology of education. This term is often used to talk about the epistemic aims of education, i.e. questions like whether educators should aim at transmitting justified true beliefs rather than merely true beliefs or should additionally foster other epistemic virtues like critical thinking. In a different sense, the epistemology of education concerns the issue of how we arrive at knowledge on educational matters. This is especially relevant in the field of educational research, which is an active field of investigation with many studies being published on a regular basis. It is also quite influential in regard to educational policy and practice. Epistemological questions in this field concern the objectivity of its insights.
An important methodological divide in this area, often referred to as the "paradigm wars", is between the quantitative or statistical approach in contrast to the qualitative or ethnographical approach. The quantitative approach usually focuses on wide experimental studies and employs statistical methods to uncover the general causal factors responsible for educational phenomena. It has been criticized based on the claim that its method, which is inspired by the natural sciences, is inappropriate for understanding the complex cultural and motivational patterns investigated by the social sciences. The qualitative approach, on the other hand, gives more weight to particular case studies for reaching its conclusions. Its opponents hold that this approach lacks the methodological rigor to arrive at well-warranted knowledge. Mixed-methods research is a more recent approach in which the methods of both camps are combined. The question of the most promising approach is relevant to how funding budgets are spent on research, which in turn has important implications for policymaking.
Others
One question concerns how the learners are to be conceptualized. John Locke sees the mind as a blank slate or a tabula rasa that passively absorbs information and is filled with contents through experience. This view contrasts with a more pragmatist perspective, which in its emphasis on practice sees students not as passive absorbers but as active learners that should be encouraged to discover and learn things by themselves.
Another disputed topic is the role of testing in public education. Some theorists have argued that it is counterproductive since it puts undue pressure on students. But testing also plays various critical roles, such as providing feedback on learning progress to students, their parents, and their teachers. Concrete discussions on the role of testing often focus less on whether it should be done at all and more on how much importance should be ascribed to the test results. This also includes questions about the form of testing, for example, whether it should be standardized. Standardized tests present the same questions and scoring system to all students taking the test and are often motivated by a desire for objective and fair evaluations of both students and schools. Opponents have argued that this approach tends to favor certain social groups over others and severely limits the creativity and effectiveness of teachers.
Philosophical movements
Existentialist
The existentialist sees the world as one's personal subjectivity, where goodness, truth, and reality are individually defined. Reality is a world of existing, truth is subjectively chosen, and goodness is a matter of freedom. The subject matter of existentialist classrooms should be a matter of personal choice. Teachers view the individual as an entity within a social context in which the learner must confront others' views to clarify his or her own. Character development emphasizes individual responsibility for decisions. Real answers come from within the individual, not from outside authority. Examining life through authentic thinking involves students in genuine learning experiences. Existentialists are opposed to thinking about students as objects to be measured, tracked, or standardized. Such educators want the educational experience to focus on creating opportunities for self-direction and self-actualization. They start with the student rather than with curriculum content.
Perennialism
Perennialists believe that one should teach the things that one deems to be of everlasting importance to all people everywhere. They believe that the most important topics develop a person. Since details of fact change constantly, these cannot be the most important. Therefore, one should teach principles, not facts. Since people are human, one should teach first about humans, not machines or techniques. Since people are people first, and workers second if at all, one should teach liberal topics first, not vocational topics. The focus is primarily on teaching reasoning and wisdom rather than facts, the liberal arts rather than vocational training.
Classical education
The Classical education movement advocates a form of education based in the traditions of Western culture, with a particular focus on education as understood and taught in the Middle Ages. The term "classical education" has been used in English for several centuries, with each era modifying the definition and adding its own selection of topics. By the end of the 18th century, in addition to the trivium and quadrivium of the Middle Ages, the definition of a classical education embraced study of literature, poetry, drama, philosophy, history, art, and languages. In the 20th and 21st centuries it is used to refer to a broad-based study of the liberal arts and sciences, as opposed to a practical or pre-professional program. Classical education can be described as rigorous and systematic, separating children and their learning into three rigid categories: Grammar, Dialectic, and Rhetoric.
Essentialism
According to educational essentialism, there are certain essential facts about the world that every student needs to learn and master. It is a form of traditional education that relies on long-standing and established subjects and teaching methods. Essentialists usually focus on subjects like reading, writing, mathematics, and science, usually starting with very basic skills while progressively increasing complexity. They prefer a teacher-centered approach, meaning that the teacher acts as the authority figure guiding the learning activity while students are expected to follow their lead.
Social reconstructionism and critical pedagogy
Critical pedagogy is an "educational movement, guided by passion and principle, to help students develop consciousness of freedom, recognize authoritarian tendencies, and connect knowledge to power and the ability to take constructive action." Based in Marxist theory, critical pedagogy draws on radical democracy, anarchism, feminism, and other movements for social justice.
Democratic education
Democratic education is a theory of learning and school governance in which students and staff participate freely and equally in a school democracy. In a democratic school, there is typically shared decision-making among students and staff on matters concerning living, working, and learning together.
Progressivism
Educational progressivism is the belief that education must be based on the principle that humans are social animals who learn best in real-life activities with other people. Progressivists, like proponents of most educational theories, claim to rely on the best available scientific theories of learning. Most progressive educators believe that children learn as if they were scientists, following a process similar to John Dewey's model of learning known as "the pattern of inquiry": 1) Become aware of the problem. 2) Define the problem. 3) Propose hypotheses to solve it. 4) Evaluate the consequences of the hypotheses from one's past experience. 5) Test the likeliest solution.
Unschooling
Unschooling is a range of educational philosophies and practices centered on allowing children to learn through their natural life experiences, including child-directed play, game play, household responsibilities, work experience, and social interaction, rather than through a more traditional school curriculum. Unschooling encourages exploration of activities led by the children themselves, facilitated by the adults. Unschooling differs from conventional schooling principally in the thesis that standard curricula and conventional grading methods, as well as other features of traditional schooling, are counterproductive to the goal of maximizing the education of each child.
Contemplative education
Contemplative education focuses on bringing introspective practices such as mindfulness and yoga into curricular and pedagogical processes for diverse aims grounded in secular, spiritual, religious and post-secular perspectives. Contemplative approaches may be used in the classroom, especially in tertiary or (often in modified form) in secondary education. Parker Palmer is a recent pioneer in contemplative methods. The Center for Contemplative Mind in Society founded a branch focusing on education, The Association for Contemplative Mind in Higher Education.
Contemplative methods may also be used by teachers in their preparation; Waldorf education was one of the pioneers of the latter approach. In this case, inspiration for enriching the content, format, or teaching methods may be sought through various practices, such as consciously reviewing the previous day's activities; actively holding the students in consciousness; and contemplating inspiring pedagogical texts. Zigler suggested that only through focusing on their own spiritual development could teachers positively impact the spiritual development of students.
History
Ancient
Plato
Plato's educational philosophy was grounded in a vision of an ideal Republic wherein the individual was best served by being subordinated to a just society, a shift in emphasis that departed from his predecessors. The mind and body were to be considered separate entities. In the dialogue Phaedo, written in his "middle period" (360 BCE), Plato expressed his distinctive views about the nature of knowledge, reality, and the soul:

When the soul and body are united, then nature orders the soul to rule and govern, and the body to obey and serve. Now which of these two functions is akin to the divine? and which to the mortal? Does not the divine appear ... to be that which naturally orders and rules, and the mortal to be that which is subject and servant?

On this premise, Plato advocated removing children from their mothers' care and raising them as wards of the state, with great care being taken to differentiate children suitable to the various castes, the highest receiving the most education, so that they could act as guardians of the city and care for the less able. Education would be holistic, including facts, skills, physical discipline, and music and art, which he considered the highest form of endeavor.
Plato believed that talent was distributed non-genetically and thus must be found in children born in any social class. He built on this by insisting that those suitably gifted were to be trained by the state so that they might be qualified to assume the role of a ruling class. What this established was essentially a system of selective public education premised on the assumption that an educated minority of the population were, by virtue of their education (and inborn educability), sufficient for healthy governance.
Plato's writings contain some of the following ideas:
Elementary education would be confined to the guardian class till the age of 18, followed by two years of compulsory military training and then by higher education for those who qualified. While elementary education made the soul responsive to the environment, higher education helped the soul to search for truth which illuminated it. Both boys and girls would receive the same kind of education. Elementary education consisted of music and gymnastics, designed to train and blend gentle and fierce qualities in the individual and create a harmonious person.
At the age of 20, a selection was made. The best students would take an advanced course in mathematics, geometry, astronomy and harmonics. The first course in the scheme of higher education would last for ten years. It would be for those who had a flair for science. At the age of 30 there would be another selection; those who qualified would study dialectics and metaphysics, logic and philosophy for the next five years. After accepting junior positions in the army for 15 years, a man would have completed his theoretical and practical education by the age of 50.
Aristotle
Only fragments of Aristotle's treatise On Education are still in existence. We thus know of his philosophy of education primarily through brief passages in other works. Aristotle considered human nature, habit and reason to be equally important forces to be cultivated in education. Thus, for example, he considered repetition to be a key tool to develop good habits. The teacher was to lead the student systematically; this differs, for example, from Socrates' emphasis on questioning his listeners to bring out their own ideas (though the comparison is perhaps incongruous since Socrates was dealing with adults).
Aristotle placed great emphasis on balancing the theoretical and practical aspects of subjects taught. Subjects he explicitly mentions as being important included reading, writing and mathematics; music; physical education; literature and history; and a wide range of sciences. He also mentioned the importance of play.
One of education's primary missions for Aristotle, perhaps its most important, was to produce good and virtuous citizens for the polis. All who have meditated on the art of governing mankind have been convinced that the fate of empires depends on the education of youth.
Medieval
Ibn Sina
In the medieval Islamic world, an elementary school was known as a maktab, which dates back to at least the 10th century. Like madrasahs (which referred to higher education), a maktab was often attached to a mosque. In the 11th century, Ibn Sina (known as Avicenna in the West) wrote a chapter dealing with the maktab entitled "The Role of the Teacher in the Training and Upbringing of Children", as a guide to teachers working at maktab schools. He wrote that children can learn better if taught in classes instead of receiving individual tuition from private tutors, and he gave a number of reasons why this is the case, citing the value of competition and emulation among pupils as well as the usefulness of group discussions and debates. Ibn Sina described the maktab curriculum in some detail, outlining the curricula for two stages of education.
Ibn Sina wrote that children should be sent to a maktab school from the age of 6 and be taught primary education until they reach the age of 14. During this time, he wrote, they should be taught the Qur'an, Islamic metaphysics, language, literature, Islamic ethics, and manual skills (which could refer to a variety of practical skills).
Ibn Sina refers to the secondary education stage of maktab schooling as the period of specialization, when pupils should begin to acquire manual skills, regardless of their social status. He writes that children after the age of 14 should be allowed to choose and specialize in subjects they have an interest in, whether it was reading, manual skills, literature, preaching, medicine, geometry, trade and commerce, craftsmanship, or any other subject or profession they would be interested in pursuing as a future career. He wrote that this was a transitional stage and that there needs to be flexibility regarding the age at which pupils graduate, as the student's emotional development and chosen subjects need to be taken into account.
The empiricist theory of 'tabula rasa' was also developed by Ibn Sina. He argued that the "human intellect at birth is rather like a tabula rasa, a pure potentiality that is actualized through education and comes to know" and that knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" which is developed through a "syllogistic method of reasoning; observations lead to propositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the material intellect (al-‘aql al-hayulani), that potentiality that can acquire knowledge to the active intellect (al-‘aql al-fa‘il), the state of the human intellect in conjunction with the perfect source of knowledge."
Ibn Tufail
In the 12th century, the Andalusian-Arabian philosopher and novelist Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) demonstrated the empiricist theory of 'tabula rasa' as a thought experiment through his Arabic philosophical novel, Hayy ibn Yaqzan, in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. Some scholars have argued that the Latin translation of his philosophical novel, Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in "An Essay Concerning Human Understanding".
Modern
Michel de Montaigne
Child education was among the psychological topics that Michel de Montaigne wrote about. His essays On the Education of Children, On Pedantry, and On Experience explain the views he had on child education. Some of his views on child education are still relevant today.
Montaigne's views on the education of children were opposed to the common educational practices of his day. He found fault both with what was taught and how it was taught. Much of the education during Montaigne's time was focused on the reading of the classics and learning through books. Montaigne disagreed with learning strictly through books. He believed it was necessary to educate children in a variety of ways. He also disagreed with the way information was being presented to students. It was being presented in a way that encouraged students to take the information that was taught to them as absolute truth. Students were denied the chance to question the information. Therefore, students could not truly learn. Montaigne believed that, to learn truly, a student had to take the information and make it their own.
At the foundation, Montaigne believed that the selection of a good tutor was important for the student to become well educated. Education by a tutor was to be conducted at the pace of the student. He believed that a tutor should be in dialogue with the student, letting the student speak first. The tutor also should allow discussions and debates. Such a dialogue was intended to create an environment in which students would teach themselves. They would be able to realize their mistakes and make corrections to them as necessary.
Individualized learning was integral to his theory of child education. He argued that the student combines information already known with what is learned and forms a unique perspective on the newly learned information. Montaigne also thought that tutors should encourage the natural curiosity of students and allow them to question things. He postulated that successful students were those who were encouraged to question new information and study it for themselves, rather than simply accepting what they had heard from the authorities on any given topic. Montaigne believed that a child's curiosity could serve as an important teaching tool when the child is allowed to explore the things that the child is curious about.
Experience also was a key element of learning for Montaigne. Tutors needed to teach students through experience rather than through the mere memorization of information often practised in book learning. Otherwise, he argued, students would become passive adults, blindly obeying and lacking the ability to think on their own. Nothing of importance would be retained and no abilities would be learned. He believed that learning through experience was superior to learning through the use of books. For this reason he encouraged tutors to educate their students through practice, travel, and human interaction. In doing so, he argued, students would become active learners who could claim knowledge for themselves.
Montaigne's views on child education continue to have an influence in the present. Variations of Montaigne's ideas on education are incorporated into modern learning in some ways. He argued against the popular way of teaching in his day, encouraging individualized learning. He believed in the importance of experience, over book learning and memorization. Ultimately, Montaigne postulated that the point of education was to teach a student how to have a successful life by practicing an active and socially interactive lifestyle.
John Locke
In Some Thoughts Concerning Education and Of the Conduct of the Understanding, John Locke composed an outline on how to educate the mind in order to increase its powers and activity:
"The business of education is not, as I think, to make them perfect in any one of the sciences, but so to open and dispose their minds as may best make them capable of any, when they shall apply themselves to it."
"If men are for a long time accustomed only to one sort or method of thoughts, their minds grow stiff in it, and do not readily turn to another. It is therefore to give them this freedom, that I think they should be made to look into all sorts of knowledge, and exercise their understandings in so wide a variety and stock of knowledge. But I do not propose it as a variety and stock of knowledge, but a variety and freedom of thinking, as an increase of the powers and activity of the mind, not as an enlargement of its possessions."
Locke expressed the belief that education maketh the man, or, more fundamentally, that the mind is an "empty cabinet", with the statement, "I think I may say that of all the men we meet with, nine parts of ten are what they are, good or evil, useful or not, by their education."
Locke also wrote that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." He argued that the "associations of ideas" that one makes when young are more important than those made later because they are the foundation of the self: they are, put differently, what first mark the tabula rasa. In his Essay, in which both of these concepts are introduced, Locke warns against, for example, letting "a foolish maid" convince a child that "goblins and sprites" are associated with the night, for "darkness shall ever afterwards bring with it those frightful ideas, and they shall be so joined, that he can no more bear the one than the other."
"Associationism", as this theory would come to be called, exerted a powerful influence over eighteenth-century thought, particularly educational theory, as nearly every educational writer warned parents not to allow their children to develop negative associations. It also led to the development of psychology and other new disciplines with David Hartley's attempt to discover a biological mechanism for associationism in his Observations on Man (1749).
Jean-Jacques Rousseau
Jean-Jacques Rousseau, though he paid his respects to Plato's philosophy, rejected it as impractical due to the decayed state of society. Rousseau also had a different theory of human development; where Plato held that people are born with skills appropriate to different castes (though he did not regard these skills as being inherited), Rousseau held that there was one developmental process common to all humans. This was an intrinsic, natural process, of which the primary behavioral manifestation was curiosity. This differed from Locke's 'tabula rasa' in that it was an active process deriving from the child's nature, which drove the child to learn and adapt to its surroundings.
Rousseau wrote in his book Emile that all children are perfectly designed organisms, ready to learn from their surroundings so as to grow into virtuous adults, but due to the malign influence of corrupt society, they often fail to do so. Rousseau advocated an educational method which consisted of removing the child from society—for example, to a country home—and alternately conditioning him through changes to his environment and setting traps and puzzles for him to solve or overcome.
Rousseau was unusual in that he recognized and addressed the potential of a problem of legitimation for teaching. He advocated that adults always be truthful with children, and in particular that they never hide the fact that the basis for their authority in teaching was purely one of physical coercion: "I'm bigger than you." Once children reached the age of reason, at about 12, they would be engaged as free individuals in the ongoing process of their own education.
He once said that a child should grow up without adult interference and that the child must be guided to experience the natural consequences of his own acts or behaviour; by experiencing those consequences, the child learns to advise himself.
"Rousseau divides development into five stages (a book is devoted to each). Education in the first two stages seeks to [train] the senses: only when Émile is about 12 does the tutor begin to work to develop his mind. Later, in Book 5, Rousseau examines the education of Sophie (whom Émile is to marry). Here he sets out what he sees as the essential differences that flow from sex. 'The man should be strong and active; the woman should be weak and passive' (Everyman edn: 322). From this difference comes a contrasting education. They are not to be brought up in ignorance and kept to housework: 'Nature means them to think, to will, to love, to cultivate their minds as well as their persons; she puts these weapons in their hands to make up for their lack of strength and to enable them to direct the strength of men. They should learn many things, but only such things as [are] suitable' (Everyman edn.: 327)."
Immanuel Kant
Immanuel Kant believed that education differs from training in that the former involves thinking whereas the latter does not. In addition to educating reason, of central importance to him was the development of character and teaching of moral maxims. Kant was a proponent of public education and of learning by doing.
Charlotte Mason
Charlotte Mason was a British educator who invested her life in improving the quality of children's education. Her ideas led to a method used by some homeschoolers. Mason's philosophy of education is probably best summarized by the principles given at the beginning of each of her books. Two key mottos taken from those principles are "Education is an atmosphere, a discipline, a life" and "Education is the science of relations." She believed that children were born persons and should be respected as such; they should also be taught the Way of the Will and the Way of Reason. Her motto for students was "I am, I can, I ought, I will." Charlotte Mason believed that children should be introduced to subjects through living books, not through the use of "compendiums, abstracts, or selections." She used abridged books only when the content was deemed inappropriate for children. She preferred that parents or teachers read aloud those texts (such as Plutarch and the Old Testament), making omissions only where necessary.
20th and 21st century
Rudolf Steiner (Waldorf education)
Waldorf education (also known as Steiner or Steiner-Waldorf education) is a humanistic approach to pedagogy based upon the educational philosophy of the Austrian philosopher Rudolf Steiner, the founder of anthroposophy. Now known as Waldorf or Steiner education, his pedagogy emphasizes a balanced development of cognitive, affective/artistic, and practical skills (head, heart, and hands). Schools are normally self-administered by faculty; emphasis is placed upon giving individual teachers the freedom to develop creative methods.
Steiner's theory of child development divides education into three discrete developmental stages that predate, but closely resemble, the stages of development described by Piaget. Early childhood education occurs through imitation; teachers provide practical activities and a healthy environment. Steiner believed that young children should meet only goodness. Elementary education is strongly arts-based, centered on the teacher's creative authority; the elementary school-age child should meet beauty. Secondary education seeks to develop judgment, intellect, and practical idealism; the adolescent should meet truth.
Learning is interdisciplinary, integrating practical, artistic, and conceptual elements. The approach emphasizes the role of the imagination in learning, developing thinking that includes a creative as well as an analytic component. The educational philosophy's overarching goals are to provide young people the basis on which to develop into free, morally responsible and integrated individuals, and to help every child fulfill his or her unique destiny, the existence of which anthroposophy posits. Schools and teachers are given considerable freedom to define curricula within collegial structures.
John Dewey
In Democracy and Education: An Introduction to the Philosophy of Education, John Dewey stated that education, in its broadest sense, is the means of the "social continuity of life" given the "primary ineluctable facts of the birth and death of each one of the constituent members in a social group". Education is therefore a necessity, for "the life of the group goes on." Dewey was a proponent of Educational Progressivism and was a relentless campaigner for reform of education, pointing out that the authoritarian, strict, pre-ordained knowledge approach of modern traditional education was too concerned with delivering knowledge, and not enough with understanding students' actual experiences.
In 1896, Dewey opened the Laboratory School at the University of Chicago in an institutional effort to pursue together rather than apart "utility and culture, absorption and expression, theory and practice, [which] are [indispensable] elements in any educational scheme". As the unified head of the departments of Philosophy, Psychology and Pedagogy, Dewey articulated a desire to organize an educational experience where children could be more creative than the best progressive models of his day. Transactionalism as a pragmatic philosophy grew out of the work he did in the Laboratory School. The two most influential works that stemmed from his research and study were The Child and the Curriculum (1902) and Democracy and Education (1916). Dewey wrote of the dualisms that plagued educational philosophy in the latter book: "Instead of seeing the educative process steadily and as a whole, we see conflicting terms. We get the case of the child vs. the curriculum; of the individual nature vs. social culture." Dewey found that the preoccupation with facts as knowledge in the educative process led students to memorize "ill-understood rules and principles". While second-hand knowledge learned in mere words is a beginning in study, he argued, mere words can never replace the ability to organize knowledge into both useful and valuable experience.
Maria Montessori
The Montessori method arose from Dr. Maria Montessori's discovery of what she referred to as "the child's true normal nature" in 1907, which happened in the process of her experimental observation of young children given freedom in an environment prepared with materials designed for their self-directed learning activity. The method itself aims to duplicate this experimental observation of children to bring about, sustain and support their true natural way of being.
William Heard Kilpatrick
William Heard Kilpatrick was an American philosopher of education and a colleague and successor of John Dewey. He was a major figure in the progressive education movement of the early 20th century. Kilpatrick developed the Project Method for early childhood education, a form of progressive education that organized curriculum and classroom activities around a subject's central theme. He believed that the role of a teacher should be that of a "guide" rather than an authoritarian figure. Kilpatrick believed that children should direct their own learning according to their interests and should be allowed to explore their environment, experiencing their learning through the natural senses. Proponents of progressive education and the Project Method reject traditional schooling that focuses on memorization, rote learning, strictly organized classrooms (desks in rows; students always seated), and typical forms of assessment.
William Chandler Bagley
William Chandler Bagley taught in elementary schools before becoming a professor of education at the University of Illinois, where he served as the Director of the School of Education from 1908 until 1917. He was a professor of education at Teachers College, Columbia, from 1917 to 1940. An opponent of pragmatism and progressive education, Bagley insisted on the value of knowledge for its own sake, not merely as an instrument, and he criticized his colleagues for their failure to emphasize systematic study of academic subjects. Bagley was a proponent of educational essentialism.
A. S. Neill
A. S. Neill founded Summerhill School in Suffolk, England, in 1921; it is the oldest existing democratic school. He wrote a number of books that now define much of contemporary democratic education philosophy. Neill believed that the happiness of the child should be the paramount consideration in decisions about the child's upbringing, and that this happiness grew from a sense of personal freedom. He felt that deprivation of this sense of freedom during childhood, and the consequent unhappiness experienced by the repressed child, was responsible for many of the psychological disorders of adulthood.
Martin Heidegger
Martin Heidegger's philosophizing about education was primarily related to higher education. He believed that teaching and research in the university should be unified and that students should be taught "to focus on and explicitly investigate the ontological presuppositions which implicitly guide research in each domain of knowledge," an approach he believed would "encourage revolutionary transformation in the sciences and humanities."
Jean Piaget
Jean Piaget was a Swiss developmental psychologist known for his epistemological studies with children. His theory of cognitive development and epistemological view are together called "genetic epistemology". Piaget placed great importance on the education of children. As the Director of the International Bureau of Education, he declared in 1934 that "only education is capable of saving our societies from possible collapse, whether violent, or gradual." Piaget created the International Centre for Genetic Epistemology in Geneva in 1955 and directed it until 1980. According to Ernst von Glasersfeld, Jean Piaget is "the great pioneer of the constructivist theory of knowing."
Jean Piaget described himself as an epistemologist, interested in the process of the qualitative development of knowledge. As he says in the introduction of his book Genetic Epistemology: "What the genetic epistemology proposes is discovering the roots of the different varieties of knowledge, since its elementary forms, following to the next levels, including also the scientific knowledge."
Mortimer Jerome Adler
Mortimer Jerome Adler was an American philosopher, educator, and popular author. As a philosopher he worked within the Aristotelian and Thomistic traditions. He lived for the longest stretches in New York City, Chicago, San Francisco, and San Mateo, California. He worked for Columbia University, the University of Chicago, Encyclopædia Britannica, and Adler's own Institute for Philosophical Research. Adler was married twice and had four children. Adler was a proponent of educational perennialism.
Harry S. Broudy
Harry S. Broudy's philosophical views were based on the tradition of classical realism, dealing with truth, goodness, and beauty. However, he was also influenced by the modern philosophies of existentialism and instrumentalism. His textbook Building a Philosophy of Education advances two major ideas central to his philosophical outlook: the first is truth, and the second is the universal structures to be found in humanity's struggle for education and the good life. Broudy also studied society's demands on schools. He thought education could serve as a link to unify a diverse society, and he urged society to place more trust in, and commitment to, its schools and a good education.
Jerome Bruner
Another important contributor to the inquiry method in education is Jerome Bruner. His books The Process of Education and Toward a Theory of Instruction are landmarks in conceptualizing learning and curriculum development. He argued that any subject can be taught in some intellectually honest form to any child at any stage of development. This notion was an underpinning for his concept of the "spiral" (helical) curriculum which posited the idea that a curriculum should revisit basic ideas, building on them until the student had grasped the full formal concept. He emphasized intuition as a neglected but essential feature of productive thinking. He felt that interest in the material being learned was the best stimulus for learning rather than external motivation such as grades. Bruner developed the concept of discovery learning which promoted learning as a process of constructing new ideas based on current or past knowledge. Students are encouraged to discover facts and relationships and continually build on what they already know.
Paulo Freire
A Brazilian philosopher and educator committed to the cause of educating the impoverished peasants of his nation and collaborating with them in the pursuit of their liberation from what he regarded as "oppression", Paulo Freire is best known for his attack on what he called the "banking concept of education", in which the student was viewed as an empty account to be filled by the teacher. Freire also suggests that a deep reciprocity be inserted into our notions of teacher and student; he comes close to suggesting that the teacher-student dichotomy be completely abolished, instead promoting the roles of the participants in the classroom as the teacher-student (a teacher who learns) and the student-teacher (a learner who teaches). In its early, strong form this kind of classroom has sometimes been criticized on the grounds that it can mask rather than overcome the teacher's authority.
Aspects of the Freirian philosophy have been highly influential in academic debates over "participatory development" and development more generally. Freire's emphasis on what he describes as "emancipation" through interactive participation has been used as a rationale for the participatory focus of development, as it is held that 'participation' in any form can lead to empowerment of poor or marginalised groups. Freire was a proponent of critical pedagogy.
"He participated in the import of European doctrines and ideas into Brazil, assimilated them to the needs of a specific socio-economic situation, and thus expanded and refocused them in a thought-provoking way."
John Holt
In 1964, John Holt published his first book, How Children Fail, asserting that the academic failure of schoolchildren was not despite the efforts of the schools but actually because of the schools. How Children Fail ignited a firestorm of controversy. Holt was catapulted into the American national consciousness to the extent that he made appearances on major TV talk shows, wrote book reviews for Life magazine, and was a guest on the To Tell The Truth TV game show. In his follow-up work, How Children Learn, published in 1967, Holt tried to elucidate the learning process of children and why he believed school short-circuits that process.
Nel Noddings
Nel Noddings' first sole-authored book Caring: A Feminine Approach to Ethics and Moral Education (1984) followed close on the 1982 publication of Carol Gilligan's ground-breaking work in the ethics of care In a Different Voice. While her work on ethics continued, with the publication of Women and Evil (1989) and later works on moral education, most of her later publications have been on the philosophy of education and educational theory. Her most significant works in these areas have been Educating for Intelligent Belief or Unbelief (1993) and Philosophy of Education (1995).
Noddings' contribution to education philosophy centers on the ethic of care. She believed that a caring teacher-student relationship will result in the teacher designing a differentiated curriculum for each student, and that this curriculum would be based around the students' particular interests and needs. The teacher's claim to care must be based not on a one-time virtuous decision but on an ongoing interest in the students' welfare.
See also
Education sciences
Methodology
Learning theory (education)
Outline of educational aims
Pedagogy
Philosophy education
Further reading
Classic and Contemporary Readings in the Philosophy of Education, by Steven M. Cahn, 1997.
A Companion to the Philosophy of Education (Blackwell Companions to Philosophy), ed. by Randall Curren, paperback edition, 2006.
The Blackwell Guide to the Philosophy of Education, ed. by Nigel Blake, Paul Smeyers, Richard Smith, and Paul Standish, paperback edition, 2003.
Philosophy of Education (Westview Press, Dimension of Philosophy Series), by Nel Noddings, paperback edition, 1995.
Andre Kraak and Michael Young, Education in Retrospect: Policy and Implementation Since 1990.
Daan Thoomes, "The necessity of education", in: The History of Education and Childhood, Radboud University, Nijmegen, 2000.
External links
"Philosophy of Education". In Stanford Encyclopedia of Philosophy
Encyclopedia of Philosophy of Education
Thinkers of Education. UNESCO-International Bureau of Education website
Education studies
Philosophy
Philosophy ('love of wisdom' in Ancient Greek) is a systematic study of general and fundamental questions concerning topics like existence, reason, knowledge, value, mind, and language. It is a rational and critical inquiry that reflects on its own methods and assumptions.
Historically, many of the individual sciences, such as physics and psychology, formed part of philosophy. However, they are considered separate academic disciplines in the modern sense of the term. Influential traditions in the history of philosophy include Western, Arabic–Persian, Indian, and Chinese philosophy. Western philosophy originated in Ancient Greece and covers a wide area of philosophical subfields. A central topic in Arabic–Persian philosophy is the relation between reason and revelation. Indian philosophy combines the spiritual problem of how to reach enlightenment with the exploration of the nature of reality and the ways of arriving at knowledge. Chinese philosophy focuses principally on practical issues in relation to right social conduct, government, and self-cultivation.
Major branches of philosophy are epistemology, ethics, logic, and metaphysics. Epistemology studies what knowledge is and how to acquire it. Ethics investigates moral principles and what constitutes right conduct. Logic is the study of correct reasoning and explores how good arguments can be distinguished from bad ones. Metaphysics examines the most general features of reality, existence, objects, and properties. Other subfields are aesthetics, philosophy of language, philosophy of mind, philosophy of religion, philosophy of science, philosophy of mathematics, philosophy of history, and political philosophy. Within each branch, there are competing schools of philosophy that promote different principles, theories, or methods.
Philosophers use a great variety of methods to arrive at philosophical knowledge. They include conceptual analysis, reliance on common sense and intuitions, use of thought experiments, analysis of ordinary language, description of experience, and critical questioning. Philosophy is related to many other fields, including the sciences, mathematics, business, law, and journalism. It provides an interdisciplinary perspective and studies the scope and fundamental concepts of these fields. It also investigates their methods and ethical implications.
Etymology
The word philosophy comes from the Ancient Greek words φίλος (philos) 'love' and σοφία (sophia) 'wisdom'. Some sources say that the term was coined by the pre-Socratic philosopher Pythagoras, but this is not certain.
The word entered the English language primarily from Old French and Anglo-Norman starting around 1175 CE. The French philosophie is itself a borrowing from the Latin philosophia. The term philosophy acquired the meanings of "advanced study of the speculative subjects (logic, ethics, physics, and metaphysics)", "deep wisdom consisting of love of truth and virtuous living", "profound learning as transmitted by the ancient writers", and "the study of the fundamental nature of knowledge, reality, and existence, and the basic limits of human understanding".
Before the modern age, the term philosophy was used in a wide sense. It included most forms of rational inquiry, such as the individual sciences, as its subdisciplines. For instance, natural philosophy was a major branch of philosophy. This branch of philosophy encompassed a wide range of fields, including disciplines like physics, chemistry, and biology. An example of this usage is the 1687 book Philosophiæ Naturalis Principia Mathematica by Isaac Newton. This book referred to natural philosophy in its title, but it is today considered a book of physics.
The meaning of philosophy changed toward the end of the modern period when it acquired the more narrow meaning common today. In this new sense, the term is mainly associated with philosophical disciplines like metaphysics, epistemology, and ethics. Among other topics, it covers the rational study of reality, knowledge, and values. It is distinguished from other disciplines of rational inquiry such as the empirical sciences and mathematics.
Conceptions of philosophy
General conception
The practice of philosophy is characterized by several general features: it is a form of rational inquiry, it aims to be systematic, and it tends to critically reflect on its own methods and presuppositions. It requires attentively thinking long and carefully about the provocative, vexing, and enduring problems central to the human condition.
The philosophical pursuit of wisdom involves asking general and fundamental questions. It often does not result in straightforward answers but may help a person to better understand the topic, examine their life, dispel confusion, and overcome prejudices and self-deceptive ideas associated with common sense. For example, Socrates stated that "the unexamined life is not worth living" to highlight the role of philosophical inquiry in understanding one's own existence. And according to Bertrand Russell, "the man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the cooperation or consent of his deliberate reason."
Academic definitions
Attempts to provide more precise definitions of philosophy are controversial and are studied in metaphilosophy. Some approaches argue that there is a set of essential features shared by all parts of philosophy. Others see only weaker family resemblances or contend that it is merely an empty blanket term. Precise definitions are often only accepted by theorists belonging to a certain philosophical movement and are revisionistic according to Søren Overgaard et al. in that many presumed parts of philosophy would not deserve the title "philosophy" if they were true.
Some definitions characterize philosophy in relation to its method, like pure reasoning. Others focus on its topic, for example, as the study of the biggest patterns of the world as a whole or as the attempt to answer the big questions. Such an approach is pursued by Immanuel Kant, who holds that the task of philosophy is united by four questions: "What can I know?"; "What should I do?"; "What may I hope?"; and "What is the human being?" Both approaches have the problem that they are usually either too wide, by including non-philosophical disciplines, or too narrow, by excluding some philosophical sub-disciplines.
Many definitions of philosophy emphasize its intimate relation to science. In this sense, philosophy is sometimes understood as a proper science in its own right. According to some naturalistic philosophers, such as W. V. O. Quine, philosophy is an empirical yet abstract science that is concerned with wide-ranging empirical patterns instead of particular observations. Science-based definitions usually face the problem of explaining why philosophy in its long history has not progressed to the same extent or in the same way as the sciences. This problem is avoided by seeing philosophy as an immature or provisional science whose subdisciplines cease to be philosophy once they have fully developed. In this sense, philosophy is sometimes described as "the midwife of the sciences".
Other definitions focus on the contrast between science and philosophy. A common theme among many such conceptions is that philosophy is concerned with meaning, understanding, or the clarification of language. According to one view, philosophy is conceptual analysis, which involves finding the necessary and sufficient conditions for the application of concepts. Another definition characterizes philosophy as thinking about thinking to emphasize its self-critical, reflective nature. A further approach presents philosophy as a linguistic therapy. According to Ludwig Wittgenstein, for instance, philosophy aims at dispelling misunderstandings to which humans are susceptible due to the confusing structure of ordinary language.
Phenomenologists, such as Edmund Husserl, characterize philosophy as a "rigorous science" investigating essences. They practice a radical suspension of theoretical assumptions about reality to get back to the "things themselves", that is, as originally given in experience. They contend that this base-level of experience provides the foundation for higher-order theoretical knowledge, and that one needs to understand the former to understand the latter.
An early approach found in ancient Greek and Roman philosophy is that philosophy is the spiritual practice of developing one's rational capacities. This practice is an expression of the philosopher's love of wisdom and has the aim of improving one's well-being by leading a reflective life. For example, the Stoics saw philosophy as an exercise to train the mind and thereby achieve eudaimonia and flourish in life.
History
As a discipline, the history of philosophy aims to provide a systematic and chronological exposition of philosophical concepts and doctrines. Some theorists see it as a part of intellectual history, but it also investigates questions not covered by intellectual history such as whether the theories of past philosophers are true and have remained philosophically relevant. The history of philosophy is primarily concerned with theories based on rational inquiry and argumentation; some historians understand it in a looser sense that includes myths, religious teachings, and proverbial lore.
Influential traditions in the history of philosophy include Western, Arabic–Persian, Indian, and Chinese philosophy. Other philosophical traditions are Japanese philosophy, Latin American philosophy, and African philosophy.
Western
Western philosophy originated in Ancient Greece in the 6th century BCE with the pre-Socratics. They attempted to provide rational explanations of the cosmos as a whole. The philosophy following them was shaped by Socrates (469–399 BCE), Plato (427–347 BCE), and Aristotle (384–322 BCE). They expanded the range of topics to questions like how people should act, how to arrive at knowledge, and what the nature of reality and mind is. The later part of the ancient period was marked by the emergence of philosophical movements, for example, Epicureanism, Stoicism, Skepticism, and Neoplatonism. The medieval period started in the 5th century CE. Its focus was on religious topics and many thinkers used ancient philosophy to explain and further elaborate Christian doctrines.
The Renaissance period started in the 14th century and saw a renewed interest in schools of ancient philosophy, in particular Platonism. Humanism also emerged in this period. The modern period started in the 17th century. One of its central concerns was how philosophical and scientific knowledge are created. Specific importance was given to the role of reason and sensory experience. Many of these innovations were used in the Enlightenment movement to challenge traditional authorities. Several attempts to develop comprehensive systems of philosophy were made in the 19th century, for instance, by German idealism and Marxism. Influential developments in 20th-century philosophy were the emergence and application of formal logic, the focus on the role of language as well as pragmatism, and movements in continental philosophy like phenomenology, existentialism, and post-structuralism. The 20th century saw a rapid expansion of academic philosophy in terms of the number of philosophical publications and philosophers working at academic institutions. There was also a noticeable growth in the number of female philosophers, but they still remained underrepresented.
Arabic–Persian
Arabic–Persian philosophy arose in the early 9th century CE as a response to discussions in the Islamic theological tradition. Its classical period lasted until the 12th century CE and was strongly influenced by ancient Greek philosophers. It employed their ideas to elaborate and interpret the teachings of the Quran.
Al-Kindi (801–873 CE) is usually regarded as the first philosopher of this tradition. He translated and interpreted many works of Aristotle and Neoplatonists in his attempt to show that there is a harmony between reason and faith. Avicenna (980–1037 CE) also followed this goal and developed a comprehensive philosophical system to provide a rational understanding of reality encompassing science, religion, and mysticism. Al-Ghazali (1058–1111 CE) was a strong critic of the idea that reason can arrive at a true understanding of reality and God. He formulated a detailed critique of philosophy and tried to assign philosophy a more limited place besides the teachings of the Quran and mystical insight. Following Al-Ghazali and the end of the classical period, the influence of philosophical inquiry waned. Mulla Sadra (1571–1636 CE) is often regarded as one of the most influential philosophers of the subsequent period. The increasing influence of Western thought and institutions in the 19th and 20th centuries gave rise to the intellectual movement of Islamic modernism, which aims to understand the relation between traditional Islamic beliefs and modernity.
Indian
One of the distinguishing features of Indian philosophy is that it integrates the exploration of the nature of reality, the ways of arriving at knowledge, and the spiritual question of how to reach enlightenment. It started around 900 BCE when the Vedas were written. They are the foundational scriptures of Hinduism and contemplate issues concerning the relation between the self and ultimate reality as well as the question of how souls are reborn based on their past actions. This period also saw the emergence of non-Vedic teachings, like Buddhism and Jainism. Buddhism was founded by Gautama Siddhartha (563–483 BCE), who challenged the Vedic idea of a permanent self and proposed a path to liberate oneself from suffering. Jainism was founded by Mahavira (599–527 BCE), who emphasized non-violence as well as respect toward all forms of life.
The subsequent classical period started roughly 200 BCE and was characterized by the emergence of the six orthodox schools of Hinduism: Nyāyá, Vaiśeṣika, Sāṃkhya, Yoga, Mīmāṃsā, and Vedanta. The school of Advaita Vedanta developed later in this period. It was systematized by Adi Shankara (c. 700–750 CE), who held that everything is one and that the impression of a universe consisting of many distinct entities is an illusion. A slightly different perspective was defended by Ramanuja (1017–1137 CE), who founded the school of Vishishtadvaita Vedanta and argued that individual entities are real as aspects or parts of the underlying unity. He also helped to popularize the Bhakti movement, which taught devotion toward the divine as a spiritual path and lasted until the 17th to 18th centuries CE. The modern period began roughly 1800 CE and was shaped by encounters with Western thought. Philosophers tried to formulate comprehensive systems to harmonize diverse philosophical and religious teachings. For example, Swami Vivekananda (1863–1902 CE) used the teachings of Advaita Vedanta to argue that all the different religions are valid paths toward the one divine.
Chinese
Chinese philosophy is particularly interested in practical questions associated with right social conduct, government, and self-cultivation. Many schools of thought emerged in the 6th century BCE in competing attempts to resolve the political turbulence of that period. The most prominent among them were Confucianism and Daoism. Confucianism was founded by Confucius (551–479 BCE). It focused on different forms of moral virtues and explored how they lead to harmony in society. Daoism was founded by Laozi (6th century BCE) and examined how humans can live in harmony with nature by following the Dao or the natural order of the universe. Other influential early schools of thought were Mohism, which developed an early form of altruistic consequentialism, and Legalism, which emphasized the importance of a strong state and strict laws.
Buddhism was introduced to China in the 1st century CE and diversified into new forms of Buddhism. Starting in the 3rd century CE, the school of Xuanxue emerged. It interpreted earlier Daoist works with a specific emphasis on metaphysical explanations. Neo-Confucianism developed in the 11th century CE. It systematized previous Confucian teachings and sought a metaphysical foundation of ethics. The modern period in Chinese philosophy began in the early 20th century and was shaped by the influence of and reactions to Western philosophy. The emergence of Chinese Marxism—which focused on class struggle, socialism, and communism—resulted in a significant transformation of the political landscape. Another development was the emergence of New Confucianism, which aims to modernize and rethink Confucian teachings to explore their compatibility with democratic ideals and modern science.
Other traditions
Traditional Japanese philosophy assimilated and synthesized ideas from different traditions, including the indigenous Shinto religion and Chinese and Indian thought in the forms of Confucianism and Buddhism, both of which entered Japan in the 6th and 7th centuries. Its practice is characterized by active interaction with reality rather than disengaged examination. Neo-Confucianism became an influential school of thought in the 16th century and the following Edo period and prompted a greater focus on language and the natural world. The Kyoto School emerged in the 20th century and integrated Eastern spirituality with Western philosophy in its exploration of concepts like absolute nothingness (zettai-mu), place (basho), and the self.
Latin American philosophy in the pre-colonial period was practiced by indigenous civilizations and explored questions concerning the nature of reality and the role of humans. It has similarities to indigenous North American philosophy, which covered themes such as the interconnectedness of all things. Latin American philosophy during the colonial period, starting around 1550, was dominated by religious philosophy in the form of scholasticism. Influential topics in the post-colonial period were positivism, the philosophy of liberation, and the exploration of identity and culture.
Early African philosophy, like Ubuntu philosophy, was focused on community, morality, and ancestral ideas. Systematic African philosophy emerged at the beginning of the 20th century. It discusses topics such as ethnophilosophy, négritude, pan-Africanism, Marxism, postcolonialism, the role of cultural identity, and the critique of Eurocentrism.
Core branches
Philosophical questions can be grouped into several branches. These groupings allow philosophers to focus on a set of similar topics and interact with other thinkers who are interested in the same questions. Epistemology, ethics, logic, and metaphysics are sometimes listed as the main branches. There are many other subfields besides them and the different divisions are neither exhaustive nor mutually exclusive. For example, political philosophy, ethics, and aesthetics are sometimes linked under the general heading of value theory as they investigate normative or evaluative aspects. Furthermore, philosophical inquiry sometimes overlaps with other disciplines in the natural and social sciences, religion, and mathematics.
Epistemology
Epistemology is the branch of philosophy that studies knowledge. It is also known as theory of knowledge and aims to understand what knowledge is, how it arises, what its limits are, and what value it has. It further examines the nature of truth, belief, justification, and rationality. Some of the questions addressed by epistemologists include "By what method(s) can one acquire knowledge?"; "How is truth established?"; and "Can we prove causal relations?"
Epistemology is primarily interested in declarative knowledge or knowledge of facts, like knowing that Princess Diana died in 1997. But it also investigates practical knowledge, such as knowing how to ride a bicycle, and knowledge by acquaintance, for example, knowing a celebrity personally.
One area in epistemology is the analysis of knowledge. It assumes that declarative knowledge is a combination of different parts and attempts to identify what those parts are. An influential theory in this area claims that knowledge has three components: it is a belief that is justified and true. This theory is controversial and the difficulties associated with it are known as the Gettier problem. Alternative views state that knowledge requires additional components, like the absence of luck; different components, like the manifestation of cognitive virtues instead of justification; or they deny that knowledge can be analyzed in terms of other phenomena.
Another area in epistemology asks how people acquire knowledge. Often-discussed sources of knowledge are perception, introspection, memory, inference, and testimony. According to empiricists, all knowledge is based on some form of experience. Rationalists reject this view and hold that some forms of knowledge, like innate knowledge, are not acquired through experience. The regress problem is a common issue in relation to the sources of knowledge and the justification they offer. It is based on the idea that beliefs require some kind of reason or evidence to be justified. The problem is that the source of justification may itself be in need of another source of justification. This leads to an infinite regress or circular reasoning. Foundationalists avoid this conclusion by arguing that some sources can provide justification without requiring justification themselves. Another solution is presented by coherentists, who state that a belief is justified if it coheres with other beliefs of the person.
Many discussions in epistemology touch on the topic of philosophical skepticism, which raises doubts about some or all claims to knowledge. These doubts are often based on the idea that knowledge requires absolute certainty and that humans are unable to acquire it.
Ethics
Ethics, also known as moral philosophy, studies what constitutes right conduct. It is also concerned with the moral evaluation of character traits and institutions. It explores what the standards of morality are and how to live a good life. Philosophical ethics addresses such basic questions as "Are moral obligations relative?"; "Which has priority: well-being or obligation?"; and "What gives life meaning?"
The main branches of ethics are meta-ethics, normative ethics, and applied ethics. Meta-ethics asks abstract questions about the nature and sources of morality. It analyzes the meaning of ethical concepts, like right action and obligation. It also investigates whether ethical theories can be true in an absolute sense and how to acquire knowledge of them. Normative ethics encompasses general theories of how to distinguish between right and wrong conduct. It helps guide moral decisions by examining what moral obligations and rights people have. Applied ethics studies the consequences of the general theories developed by normative ethics in specific situations, for example, in the workplace or for medical treatments.
Within contemporary normative ethics, consequentialism, deontology, and virtue ethics are influential schools of thought. Consequentialists judge actions based on their consequences. One such view is utilitarianism, which argues that actions should increase overall happiness while minimizing suffering. Deontologists judge actions based on whether they follow moral duties, such as abstaining from lying or killing. According to them, what matters is that actions are in tune with those duties and not what consequences they have. Virtue theorists judge actions based on how the moral character of the agent is expressed. According to this view, actions should conform to what an ideally virtuous agent would do by manifesting virtues like generosity and honesty.
Logic
Logic is the study of correct reasoning. It aims to understand how to distinguish good from bad arguments. It is usually divided into formal and informal logic. Formal logic uses artificial languages with a precise symbolic representation to investigate arguments. In its search for exact criteria, it examines the structure of arguments to determine whether they are correct or incorrect. Informal logic uses non-formal criteria and standards to assess the correctness of arguments. It relies on additional factors such as content and context.
Logic examines a variety of arguments. Deductive arguments are mainly studied by formal logic. An argument is deductively valid if the truth of its premises ensures the truth of its conclusion. Deductively valid arguments follow a rule of inference, like modus ponens, which has the following logical form: "p; if p then q; therefore q". An example is the argument "today is Sunday; if today is Sunday then I don't have to go to work today; therefore I don't have to go to work today".
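The notion of deductive validity described above can be illustrated computationally. As a sketch (the function names and setup here are chosen for illustration, not drawn from the logical literature), the following Python snippet enumerates every truth-value assignment and confirms that modus ponens admits no counterexample, that is, no assignment on which both premises are true while the conclusion is false:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, num_vars):
    # An argument form is deductively valid if no assignment of truth values
    # makes every premise true while the conclusion is false.
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # counterexample found
    return True

# Modus ponens: p; if p then q; therefore q.
modus_ponens_valid = is_valid(
    premises=[lambda p, q: p, lambda p, q: implies(p, q)],
    conclusion=lambda p, q: q,
    num_vars=2,
)
print(modus_ponens_valid)  # True
```

Because the check exhausts all assignments, it demonstrates validity in the truth-functional sense: the truth of the premises guarantees the truth of the conclusion under every interpretation.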
The premises of non-deductive arguments also support their conclusion, although this support does not guarantee that the conclusion is true. One form is inductive reasoning. It starts from a set of individual cases and uses generalization to arrive at a universal law governing all cases. An example is the inference that "all ravens are black" based on observations of many individual black ravens. Another form is abductive reasoning. It starts from an observation and concludes that the best explanation of this observation must be true. This happens, for example, when a doctor diagnoses a disease based on the observed symptoms.
Logic also investigates incorrect forms of reasoning. They are called fallacies and are divided into formal and informal fallacies based on whether the source of the error lies only in the form of the argument or also in its content and context.
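A formal fallacy can be exposed by the same kind of truth-table search. The illustrative Python sketch below looks for counterexamples to "affirming the consequent", the invalid form "if p then q; q; therefore p":

```python
from itertools import product

def implies(a, b):
    # Material conditional used for "if a then b".
    return (not a) or b

# Affirming the consequent: "if p then q; q; therefore p".
# Any assignment making both premises true and the conclusion false
# shows that the form is invalid.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)]
```

The single counterexample, with p false and q true, shows that the premises can hold while the conclusion fails, which is exactly what makes this a formal fallacy rather than a valid argument form.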
Metaphysics
Metaphysics is the study of the most general features of reality, such as existence, objects and their properties, wholes and their parts, space and time, events, and causation. There are disagreements about the precise definition of the term and its meaning has changed throughout the ages. Metaphysicians attempt to answer basic questions including "Why is there something rather than nothing?"; "Of what does reality ultimately consist?"; and "Are humans free?"
Metaphysics is sometimes divided into general metaphysics and specific or special metaphysics. General metaphysics investigates being as such. It examines the features that all entities have in common. Specific metaphysics is interested in different kinds of being, the features they have, and how they differ from one another.
An important area in metaphysics is ontology. Some theorists identify it with general metaphysics. Ontology investigates concepts like being, becoming, and reality. It studies the categories of being and asks what exists on the most fundamental level. Another subfield of metaphysics is philosophical cosmology. It is interested in the essence of the world as a whole. It asks questions including whether the universe has a beginning and an end and whether it was created by something else.
A key topic in metaphysics concerns the question of whether reality only consists of physical things like matter and energy. Alternative suggestions are that mental entities (such as souls and experiences) and abstract entities (such as numbers) exist apart from physical things. Another topic in metaphysics concerns the problem of identity. One question is how much an entity can change while still remaining the same entity. According to one view, entities have essential and accidental features. They can change their accidental features but they cease to be the same entity if they lose an essential feature. A central distinction in metaphysics is between particulars and universals. Universals, like the color red, can exist at different locations at the same time. This is not the case for particulars including individual persons or specific objects. Other metaphysical questions are whether the past fully determines the present and what implications this would have for the existence of free will.
Other major branches
There are many other subfields of philosophy besides its core branches. Some of the most prominent are aesthetics, philosophy of language, philosophy of mind, philosophy of religion, philosophy of science, and political philosophy.
Aesthetics in the philosophical sense is the field that studies the nature and appreciation of beauty and other aesthetic properties, like the sublime. Although it is often treated together with the philosophy of art, aesthetics is a broader category that encompasses other aspects of experience, such as natural beauty. In a more general sense, aesthetics is "critical reflection on art, culture, and nature". A key question in aesthetics is whether beauty is an objective feature of entities or a subjective aspect of experience. Aesthetic philosophers also investigate the nature of aesthetic experiences and judgments. Further topics include the essence of works of art and the processes involved in creating them.
The philosophy of language studies the nature and function of language. It examines the concepts of meaning, reference, and truth. It aims to answer questions such as how words are related to things and how language affects human thought and understanding. It is closely related to the disciplines of logic and linguistics. The philosophy of language rose to particular prominence in the early 20th century in analytic philosophy due to the works of Frege and Russell. One of its central topics is to understand how sentences get their meaning. There are two broad theoretical camps: those emphasizing the formal truth conditions of sentences and those investigating circumstances that determine when it is suitable to use a sentence, the latter of which is associated with speech act theory.
The philosophy of mind studies the nature of mental phenomena and how they are related to the physical world. It aims to understand different types of conscious and unconscious mental states, like beliefs, desires, intentions, feelings, sensations, and free will. An influential intuition in the philosophy of mind is that there is a distinction between the inner experience of objects and their existence in the external world. The mind-body problem is the problem of explaining how these two types of thing—mind and matter—are related. The main traditional responses are materialism, which assumes that matter is more fundamental; idealism, which assumes that mind is more fundamental; and dualism, which assumes that mind and matter are distinct types of entities. In contemporary philosophy, another common view is functionalism, which understands mental states in terms of the functional or causal roles they play. The mind-body problem is closely related to the hard problem of consciousness, which asks how the physical brain can produce qualitatively subjective experiences.
The philosophy of religion investigates the basic concepts, assumptions, and arguments associated with religion. It critically reflects on what religion is, how to define the divine, and whether one or more gods exist. It also includes the discussion of worldviews that reject religious doctrines. Further questions addressed by the philosophy of religion are: "How are we to interpret religious language, if not literally?"; "Is divine omniscience compatible with free will?"; and, "Are the great variety of world religions in some way compatible in spite of their apparently contradictory theological claims?" It includes topics from nearly all branches of philosophy. It differs from theology since theological debates typically take place within one religious tradition, whereas debates in the philosophy of religion transcend any particular set of theological assumptions.
The philosophy of science examines the fundamental concepts, assumptions, and problems associated with science. It reflects on what science is and how to distinguish it from pseudoscience. It investigates the methods employed by scientists, how their application can result in knowledge, and on what assumptions they are based. It also studies the purpose and implications of science. Some of its questions are "What counts as an adequate explanation?"; "Is a scientific law anything more than a description of a regularity?"; and "Can some special sciences be explained entirely in terms of a more general science?" It is a vast field that is commonly divided into the philosophy of the natural sciences and the philosophy of the social sciences, with further subdivisions for each of the individual sciences under these headings. How these branches are related to one another is also a question in the philosophy of science. Many of its philosophical issues overlap with the fields of metaphysics or epistemology.
Political philosophy is the philosophical inquiry into the fundamental principles and ideas governing political systems and societies. It examines the basic concepts, assumptions, and arguments in the field of politics. It investigates the nature and purpose of government and compares its different forms. It further asks under what circumstances the use of political power is legitimate, rather than a form of simple violence. In this regard, it is concerned with the distribution of political power, social and material goods, and legal rights. Other topics are justice, liberty, equality, sovereignty, and nationalism. Political philosophy involves a general inquiry into normative matters and differs in this respect from political science, which aims to provide empirical descriptions of actually existing states. Political philosophy is often treated as a subfield of ethics. Influential schools of thought in political philosophy are liberalism, conservatism, socialism, and anarchism.
Methods
Methods of philosophy are ways of conducting philosophical inquiry. They include techniques for arriving at philosophical knowledge and justifying philosophical claims as well as principles used for choosing between competing theories. A great variety of methods have been employed throughout the history of philosophy. Many of them differ significantly from the methods used in the natural sciences in that they do not use experimental data obtained through measuring equipment. The choice of one's method usually has important implications both for how philosophical theories are constructed and for the arguments cited for or against them. This choice is often guided by epistemological considerations about what constitutes philosophical evidence.
Methodological disagreements can cause conflicts among philosophical theories or about the answers to philosophical questions. The discovery of new methods has often had important consequences both for how philosophers conduct their research and for what claims they defend. Some philosophers engage in most of their theorizing using one particular method while others employ a wider range of methods based on which one fits the specific problem investigated best.
Conceptual analysis is a common method in analytic philosophy. It aims to clarify the meaning of concepts by analyzing them into their component parts. Another method often employed in analytic philosophy is based on common sense. It starts with commonly accepted beliefs and tries to draw unexpected conclusions from them. This approach is often employed negatively, to criticize philosophical theories that are too far removed from how the average person sees the issue. It is similar to how ordinary language philosophy approaches philosophical questions by investigating how ordinary language is used.
Various methods in philosophy give particular importance to intuitions, that is, non-inferential impressions about the correctness of specific claims or general principles. For example, they play an important role in thought experiments, which employ counterfactual thinking to evaluate the possible consequences of an imagined situation. These anticipated consequences can then be used to confirm or refute philosophical theories. The method of reflective equilibrium also employs intuitions. It seeks to form a coherent position on a certain issue by examining all the relevant beliefs and intuitions, some of which often have to be deemphasized or reformulated to arrive at a coherent perspective.
Pragmatists stress the significance of concrete practical consequences for assessing whether a philosophical theory is true. According to the pragmatic maxim as formulated by Charles Sanders Peirce, the idea a person has of an object is nothing more than the totality of practical consequences they associate with this object. Pragmatists have also used this method to expose disagreements as merely verbal, that is, to show they make no genuine difference on the level of consequences.
Phenomenologists seek knowledge of the realm of appearance and the structure of human experience. They insist upon the first-personal character of all experience and proceed by suspending theoretical judgments about the external world. This technique of phenomenological reduction is known as "bracketing" or epoché. The goal is to give an unbiased description of the appearances of things.
Methodological naturalism places great emphasis on the empirical approach and the resulting theories found in the natural sciences. In this way, it contrasts with methodologies that give more weight to pure reasoning and introspection.
Relation to other fields
Philosophy is closely related to many other fields. It is sometimes understood as a metadiscipline that clarifies their nature and limits. It does this by critically examining their basic concepts, background assumptions, and methods. In this regard, it plays a key role in providing an interdisciplinary perspective. It bridges the gap between different disciplines by analyzing which concepts and problems they have in common. It shows how they overlap while also delimiting their scope. Historically, most of the individual sciences originated from philosophy.
The influence of philosophy is felt in several fields that require difficult practical decisions. In medicine, philosophical considerations related to bioethics affect issues like whether an embryo is already a person and under what conditions abortion is morally permissible. A closely related philosophical problem is how humans should treat other animals, for instance, whether it is acceptable to use non-human animals as food or for research experiments. In relation to business and professional life, philosophy has contributed by providing ethical frameworks. They contain guidelines on which business practices are morally acceptable and cover the issue of corporate social responsibility.
Philosophical inquiry is relevant to many fields that are concerned with what to believe and how to arrive at evidence for one's beliefs. This is a key issue for the sciences, which have as one of their prime objectives the creation of scientific knowledge. Scientific knowledge is based on empirical evidence but it is often not clear whether empirical observations are neutral or already include theoretical assumptions. A closely connected problem is whether the available evidence is sufficient to decide between competing theories. Epistemological problems in relation to the law include what counts as evidence and how much evidence is required to find a person guilty of a crime. A related issue in journalism is how to ensure truth and objectivity when reporting on events.
In the fields of theology and religion, there are many doctrines associated with the existence and nature of God as well as rules governing correct behavior. A key issue is whether a rational person should believe these doctrines, for example, whether revelation in the form of holy books and religious experiences of the divine are sufficient evidence for these beliefs.
Philosophy in the form of logic has been influential in the fields of mathematics and computer science. Further fields influenced by philosophy include psychology, sociology, linguistics, education, and the arts. The close relation between philosophy and other fields in the contemporary period is reflected in the fact that many philosophy graduates go on to work in related fields rather than in philosophy itself.
In the field of politics, philosophy addresses issues such as how to assess whether a government policy is just. Philosophical ideas have prepared and shaped various political developments. For example, ideals formulated in Enlightenment philosophy laid the foundation for constitutional democracy and played a role in the American Revolution and the French Revolution. Marxist philosophy and its exposition of communism was one of the factors in the Russian Revolution and the Chinese Communist Revolution. In India, Mahatma Gandhi's philosophy of non-violence shaped the Indian independence movement.
An example of the cultural and critical role of philosophy is found in its influence on the feminist movement through philosophers such as Mary Wollstonecraft, Simone de Beauvoir, and Judith Butler. It has shaped the understanding of key concepts in feminism, for instance, the meaning of gender, how it differs from biological sex, and what role it plays in the formation of personal identity. Philosophers have also investigated the concepts of justice and equality and their implications with respect to the prejudicial treatment of women in male-dominated societies.
The idea that philosophy is useful for many aspects of life and society is sometimes rejected. According to one such view, philosophy is mainly undertaken for its own sake and does not make significant contributions to existing practices or external goals.
See also
List of important publications in philosophy
List of philosophical problems
List of philosophy awards
List of philosophy journals
List of years in philosophy
Lists of philosophers
External links
Internet Encyclopedia of Philosophy – a peer-reviewed online encyclopedia of philosophy
Stanford Encyclopedia of Philosophy – an online encyclopedia of philosophy maintained by Stanford University
PhilPapers – a comprehensive directory of online philosophical articles and books by academic philosophers
Internet Philosophy Ontology Project – a model of relationships between philosophical ideas, thinkers, and journals
Philosophical methodology | Philosophical methodology encompasses the methods used to philosophize and the study of these methods. Methods of philosophy are procedures for conducting research, creating new theories, and selecting between competing theories. In addition to the description of methods, philosophical methodology also compares and evaluates them.
Philosophers have employed a great variety of methods. Methodological skepticism tries to find principles that cannot be doubted. The geometrical method deduces theorems from self-evident axioms. The phenomenological method describes first-person experience. Verificationists study the conditions of empirical verification of sentences to determine their meaning. Conceptual analysis decomposes concepts into fundamental constituents. Common-sense philosophers use widely held beliefs as their starting point of inquiry, whereas ordinary language philosophers extract philosophical insights from ordinary language. Intuition-based methods, like thought experiments, rely on non-inferential impressions. The method of reflective equilibrium seeks coherence among beliefs, while the pragmatist method assesses theories by their practical consequences. The transcendental method studies the conditions without which an entity could not exist. Experimental philosophers use empirical methods.
The choice of method can significantly impact how theories are constructed and the arguments used to support them. As a result, methodological disagreements can lead to philosophical disagreements.
Definition
The term "philosophical methodology" refers either to the methods used to philosophize or to the branch of metaphilosophy studying these methods. A method is a way of doing things, such as a set of actions or decisions that, when used under the right conditions, achieves a certain goal. In the context of inquiry, a method is a way of conducting one's research and theorizing, like inductive or axiomatic methods in logic or experimental methods in the sciences. Philosophical methodology studies the methods of philosophy. It is not primarily concerned with whether a philosophical position, such as metaphysical dualism or utilitarianism, is true or false. Instead, it asks how one can determine which position should be adopted.
In the widest sense, any principle for choosing between competing theories may be considered as part of the methodology of philosophy. In this sense, philosophical methodology is "the general study of criteria for theory selection". For example, Occam's Razor is a methodological principle of theory selection favoring simple over complex theories. A closely related aspect of philosophical methodology concerns the question of which conventions one must necessarily adopt to succeed at theory making. But in a narrower sense, only guidelines that help philosophers learn about facts studied by philosophy qualify as philosophical methods. This is the more common sense, which applies to most of the methods listed in this article. In this sense, philosophical methodology is closely related to epistemology in that it consists in epistemological methods that enable philosophers to arrive at knowledge. Because of this, the problem of the methods of philosophy is central to how philosophical claims are to be justified.
An important difference in philosophical methodology concerns the distinction between descriptive and normative questions. Descriptive questions ask what methods philosophers actually use or used in the past, while normative questions ask what methods they should use. The normative aspect of philosophical methodology expresses the idea that there is a difference between good and bad philosophy. In this sense, philosophical methods either articulate the standards of evaluation themselves or the practices that ensure that these standards are met. Philosophical methods can be understood as tools that help the theorist do good philosophy and arrive at knowledge. The normative question of philosophical methodology is quite controversial since different schools of philosophy often have very different views on what constitutes good philosophy and how to achieve it.
Methods
A great variety of philosophical methods has been proposed. Some of these methods were developed as a reaction to other methods, for example, to counter skepticism by providing a secure path to knowledge. In other cases, one method may be understood as a development or a specific application of another method. Some philosophers or philosophical movements give primacy to one specific method, while others use a variety of methods depending on the problem they are trying to solve. It has been argued that many of the philosophical methods are also commonly used implicitly in more crude forms by regular people and are only given a more careful, critical, and systematic exposition in philosophical methodology.
Methodological skepticism
Methodological skepticism, also referred to as Cartesian doubt, uses systematic doubt as a method of philosophy. It is motivated by the search for an absolutely certain foundation of knowledge. The method for finding these foundations is doubt: only that which is indubitable can serve this role. While this approach has been influential, it has also received various criticisms. One problem is that it has proven very difficult to find such absolutely certain claims if the doubt is applied in its most radical form. Another is that while absolute certainty may be desirable, it is by no means necessary for knowledge. In this sense, it excludes too much and seems to be unwarranted and arbitrary, since it is not clear why very certain theorems justified by strong arguments should be abandoned just because they are not absolutely certain. This can be seen in relation to the insights discovered by the empirical sciences, which have proven very useful even though they are not indubitable.
Geometrical method
The geometrical method came to particular prominence through rationalists like Baruch Spinoza. It starts from a small set of self-evident axioms together with relevant definitions and tries to deduce a great variety of theorems from this basis, thereby mirroring the methods found in geometry. Historically, it can be understood as a response to methodological skepticism: it consists in trying to find a foundation of certain knowledge and then expanding this foundation through deductive inferences. The theorems arrived at this way may be challenged in two ways. On the one hand, they may be derived from axioms that are not as self-evident as their defenders proclaim and thereby fail to inherit the status of absolute certainty. For example, many philosophers have rejected the claim of self-evidence concerning one of René Descartes's first principles stating that "he can know that whatever he perceives clearly and distinctly is true only if he first knows that God exists and is not a deceiver". Another example is the causal axiom of Spinoza's system that "the knowledge of an effect depends on and involves knowledge of its cause", which has been criticized in various ways. In this sense, philosophical systems built using the geometrical method are open to criticisms that reject their basic axioms. A different form of objection holds that the inference from the axioms to the theorems may be faulty, for example, because it does not follow a rule of inference or because it includes implicitly assumed premises that are not themselves self-evident.
Phenomenological method
Phenomenology is the science of appearances, that is, broadly speaking, the science of phenomena as they are perceived. The phenomenological method aims to study the appearances themselves and the relations found between them. This is achieved through the so-called phenomenological reduction, also known as epoché or bracketing: the researcher suspends their judgments about the natural external world in order to focus exclusively on the experience of how things appear to be, independent of whether these appearances are true or false. One idea behind this approach is that our presuppositions of what things are like can get in the way of studying how they appear to be and thereby mislead the researcher into thinking they know the answer instead of looking for themselves. The phenomenological method can also be seen as a reaction to methodological skepticism since its defenders traditionally claimed that it could lead to absolute certainty and thereby help philosophy achieve the status of a rigorous science. But phenomenology has been heavily criticized because of this overly optimistic outlook concerning the certainty of its insights. A different objection to the method of phenomenological reduction holds that it involves an artificial stance that gives too much emphasis on the theoretical attitude at the expense of feeling and practical concerns.
Another phenomenological method is called "eidetic variation". It is used to study the essences of things. This is done by imagining an object of the kind under investigation. The features of this object are then varied in order to see whether the resulting object still belongs to the investigated kind. If the object can survive the change of a certain feature then this feature is inessential to this kind. Otherwise, it belongs to the kind's essence. For example, when imagining a triangle, one can vary its features, like the length of its sides or its color. These features are inessential since the changed object is still a triangle, but it ceases to be a triangle if a fourth side is added.
Verificationism
The method of verificationism consists in understanding sentences by analyzing their characteristic conditions of verification, i.e. by determining which empirical observations would prove them to be true. A central motivation behind this method has been to distinguish meaningful from meaningless sentences. This is sometimes expressed through the claim that "[the] meaning of a statement is the method of its verification". Meaningful sentences, like the ones found in the natural sciences, have clear conditions of empirical verification. But since most metaphysical sentences cannot be verified by empirical observations, they are deemed to be nonsensical by verificationists. Verificationism has been criticized on various grounds. For one thing, it has proved very difficult to give a precise formulation that includes all scientific claims, including the ones about unobservables. This is connected to the problem of underdetermination in the philosophy of science: the problem that the observational evidence is often insufficient to determine which theory is true. This would lead to the implausible conclusion that even for the empirical sciences, many of their claims would be meaningless. But on a deeper level, the basic claim underlying verificationism seems itself to be meaningless by its own standards: it is not clear what empirical observations could verify the claim that the meaning of a sentence is the method of its verification. In this sense, verificationism would be contradictory by directly refuting itself. These and other problems have led some theorists, especially from the sciences, to adopt falsificationism instead. It is a less radical approach that holds that serious theories or hypotheses should at least be falsifiable, i.e. there should be some empirical observations that could prove them wrong.
Conceptual analysis
The goal of conceptual analysis is to decompose or analyze a given concept into its fundamental constituents. It consists in considering a philosophically interesting concept, like knowledge, and determining the necessary and sufficient conditions under which this concept applies. The resulting claim about the relation between the concept and its constituents is normally seen as knowable a priori since it is true only in virtue of the involved concepts and thereby constitutes an analytic truth. Usually, philosophers test their analyses by using their own intuitions to determine whether a concept is applicable to a specific situation. But other approaches rely instead on the intuitions of regular people rather than those of philosophers, an approach often defended by experimental philosophers.
G. E. Moore proposed that the correctness of a conceptual analysis can be tested using the open-question argument. According to this view, asking whether the decomposition fits the concept should result in a closed or pointless question. If it results in an open or intelligible question, then the analysis does not exactly correspond to what we have in mind when we use the term. This can be used, for example, to reject the utilitarian claim that "goodness" is "whatever maximizes happiness". The underlying argument is that the question "Is what is good what maximizes happiness?" is an open question, unlike the question "Is what is good what is good?", which is a closed question. One problem with this approach is that it results in a very strict conception of what constitutes a correct conceptual analysis, leading to the conclusion that many concepts, like "goodness", are simple or indefinable.
Willard Van Orman Quine criticized conceptual analysis as part of his criticism of the analytic-synthetic distinction. This objection is based on the idea that all claims, including how concepts are to be decomposed, are ultimately based on empirical evidence. Another problem with conceptual analysis is that it is often very difficult to find an analysis of a concept that really covers all its cases. For this reason, Rudolf Carnap has suggested a modified version that aims to cover only the most paradigmatic cases while excluding problematic or controversial cases. While this approach has become more popular in recent years, it has also been criticized based on the argument that it tends to change the subject rather than resolve the original problem. In this sense, it is closely related to the method of conceptual engineering, which consists in redefining concepts in fruitful ways or developing new interesting concepts. This method has been applied, for example, to the concepts of gender and race.
Common sense
The method of common sense is based on the fact that we already have a great variety of beliefs that seem very certain to us, even if we do not believe them based on explicit arguments. Common sense philosophers use these beliefs as the starting point of their philosophizing. This often takes the form of criticism directed against theories whose premises or conclusions are very far removed from how the average person thinks about the issue in question. G. E. Moore, for example, rejects J. M. E. McTaggart's sophisticated argumentation for the unreality of time based on his common-sense impression that time exists. He holds that this simple common-sense impression is much more certain than the soundness of McTaggart's arguments, even though Moore was unable to pinpoint where those arguments went wrong. According to his method, common sense constitutes an evidence base. This base may be used to eliminate philosophical theories that stray too far away from it, that is, theories that are abstruse from its perspective. This can happen because either the theory itself or consequences that can be drawn from it violate common sense. For common sense philosophers, it is not the task of philosophy to question common sense. Instead, they should analyze it to formulate theories in accordance with it.
One important argument against this method is that common sense has often been wrong in the past, as is exemplified by various scientific discoveries. This suggests that common sense is in such cases just an antiquated theory that is eventually eliminated by the progress of science. For example, Albert Einstein's theory of relativity constitutes a radical departure from the common-sense conception of space and time, and quantum physics poses equally serious problems to how we tend to think about how elementary particles behave. This calls into question whether common sense is a reliable source of knowledge. Another problem is that for many issues, there is no one universally accepted common-sense opinion. In such cases, common sense only amounts to the majority opinion, which should not be blindly accepted by researchers. This problem can be approached by articulating a weaker version of the common-sense method. One such version is defended by Roderick Chisholm, who allows that theories violating common sense may still be true. He contends that, in such cases, the theory in question is prima facie suspect and the burden of proof is always on its side. But such a shift in the burden of proof does not constitute a blind belief in common sense since it leaves open the possibility that, for various issues, there is decisive evidence against the common-sense opinion.
Ordinary language philosophy
The method of ordinary language philosophy consists in tackling philosophical questions based on how the related terms are used in ordinary language. In this sense, it is related to the method of common sense but focuses more on linguistic aspects. Some types of ordinary language philosophy only take a negative form in that they try to show how philosophical problems are not real problems at all. Instead, they aim to show that false assumptions, to which humans are susceptible due to the confusing structure of natural language, are responsible for this false impression. Other types take more positive approaches by defending and justifying philosophical claims, for example, based on what sounds insightful or odd to the average English speaker.
One problem for ordinary language philosophy is that regular speakers may have many different reasons for using a certain expression. Sometimes they intend to express what they believe, but other times they may be motivated by politeness or other conversational norms independent of the truth conditions of the expressed sentences. This significantly complicates ordinary language philosophy, since philosophers have to take the specific context of the expression into account, which may considerably alter its meaning. This criticism is partially mitigated by J. L. Austin's approach to ordinary language philosophy. According to him, ordinary language already has encoded many important distinctions and is our point of departure in theorizing. But "ordinary language is not the last word: in principle, it can everywhere be supplemented and improved upon and superseded". However, it also falls prey to another criticism: that it is often not clear how to distinguish ordinary from non-ordinary language. This makes it difficult in all but the paradigmatic cases to decide whether a philosophical claim is or is not supported by ordinary language.
Intuition and thought experiments
Methods based on intuition, like ethical intuitionism, use intuitions to evaluate whether a philosophical claim is true or false. In this context, intuitions are seen as a non-inferential source of knowledge: they consist in the impression of correctness one has when considering a certain claim. They are intellectual seemings that make it appear to the thinker that the considered proposition is true or false without the need to consider arguments for or against the proposition. This is sometimes expressed by saying that the proposition in question is self-evident. Examples of such propositions include "torturing a sentient being for fun is wrong" or "it is irrational to believe both something and its opposite". But not all defenders of intuitionism restrict intuitions to self-evident propositions. Instead, often weaker non-inferential impressions are also included as intuitions, such as a mother's intuition that her child is innocent of a certain crime.
Intuitions can be used in various ways as a philosophical method. On the one hand, philosophers may consult their intuitions in relation to very general principles, which may then be used to deduce further theorems. Another technique, which is often applied in ethics, consists in considering concrete scenarios instead of general principles. This often takes the form of thought experiments, in which certain situations are imagined with the goal of determining the possible consequences of the imagined scenario. These consequences are assessed using intuition and counterfactual thinking. For this reason, thought experiments are sometimes referred to as intuition pumps: they activate the intuitions concerning the specific situation, which may then be generalized to arrive at universal principles. In some cases, the imagined scenario is physically possible but it would not be feasible to make an actual experiment due to the costs, negative consequences, or technological limitations. But other thought experiments even work with scenarios that defy what is physically possible. It is controversial to what extent thought experiments deserve to be characterized as real experiments and whether the insights they provide are reliable.
One problem with intuitions in general and thought experiments in particular consists in assessing their epistemological status, i.e. whether, how much, and in which circumstances they provide justification in comparison to other sources of knowledge. Some of its defenders claim that intuition is a reliable source of knowledge just like perception, with the difference being that it happens without the sensory organs. Others compare it not to perception but to the cognitive ability to evaluate counterfactual conditionals, which may be understood as the capacity to answer what-if questions. But the reliability of intuitions has been contested by its opponents. For example, wishful thinking may be the reason why it intuitively seems to a person that a proposition is true without providing any epistemological support for this proposition. Another objection, often raised in the empirical and naturalist tradition, is that intuitions do not constitute a reliable source of knowledge since the practitioner restricts themselves to an inquiry from their armchair instead of looking at the world to make empirical observations.
Reflective equilibrium
Reflective equilibrium is a state in which a thinker has the impression that they have considered all the relevant evidence for and against a theory and have made up their mind on this issue. It is a state of coherent balance among one's beliefs. This does not imply that all the evidence has really been considered, but it is tied to the impression that engaging in further inquiry is unlikely to make one change one's mind, i.e. that one has reached a stable equilibrium. In this sense, it is the endpoint of the deliberative process on the issue in question. The philosophical method of reflective equilibrium aims at reaching this type of state by mentally going back and forth between all relevant beliefs and intuitions. In this process, the thinker may have to let go of some beliefs or deemphasize certain intuitions that do not fit into the overall picture in order to progress.
In this wide sense, reflective equilibrium is connected to a form of coherentism about epistemological justification and is thereby opposed to foundationalist attempts at finding a small set of fixed and unrevisable beliefs from which to build one's philosophical theory. One problem with this wide conception of the reflective equilibrium is that it seems trivial: it is a truism that the rational thing to do is to consider all the evidence before making up one's mind and to strive towards building a coherent perspective. But as a method to guide philosophizing, this is usually too vague to provide specific guidance.
When understood in a more narrow sense, the method aims at finding an equilibrium between particular intuitions and general principles. On this view, the thinker starts with intuitions about particular cases and formulates general principles that roughly reflect these intuitions. The next step is to deal with the conflicts between the two by adjusting both the intuitions and the principles to reconcile them until an equilibrium is reached. One problem with this narrow interpretation is that it depends very much on the intuitions one started with. This means that different philosophers may start with very different intuitions and may therefore be unable to find a shared equilibrium. For example, the narrow method of reflective equilibrium may lead some moral philosophers towards utilitarianism and others towards Kantianism.
Pragmatic method
The pragmatic method assesses the truth or falsity of theories by looking at the consequences of accepting them. In this sense, "[t]he test of truth is utility: it's true if it works". Pragmatists approach intractable philosophical disputes in a down-to-earth fashion by asking about the concrete consequences associated, for example, with whether an abstract metaphysical theory is true or false. This is also intended to clarify the underlying issues by spelling out what would follow from them. Another goal of this approach is to expose pseudo-problems, which involve a merely verbal disagreement without any genuine difference on the level of the consequences between the competing standpoints.
Succinct summaries of the pragmatic method base it on the pragmatic maxim, of which various versions exist. An important version is due to Charles Sanders Peirce: "Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object." Another formulation is due to William James: "To develop perfect clearness in our thoughts of an object, then, we need only consider what effects of a conceivable practical kind the object may involve – what sensations we are to expect from it and what reactions we must prepare". Various criticisms of the pragmatic method have been raised. For example, it is commonly rejected that the terms "true" and "useful" mean the same thing. A closely related problem is that believing in a certain theory may be useful to one person and useless to another, which would mean the same theory is both true and false.
Transcendental method
The transcendental method is used to study phenomena by reflecting on the conditions of possibility of these phenomena. This method usually starts out with an obvious fact, often about our mental life, such as what we know or experience. It then goes on to argue that for this fact to obtain, other facts also have to obtain: they are its conditions of possibility. This type of argument is called "transcendental argument": it argues that these additional assumptions also have to be true because otherwise, the initial fact would not be the case. For example, it has been used to argue for the existence of an external world based on the premise that the experience of the temporal order of our mental states would not be possible otherwise. Another example argues in favor of a description of nature in terms of concepts such as motion, force, and causal interaction based on the claim that an objective account of nature would not be possible otherwise.
Transcendental arguments have faced various challenges. On the one hand, the claim that the belief in a certain assumption is necessary for the experience of a certain entity is often not obvious. So in the example above, critics can argue against the transcendental argument by denying the claim that an external world is necessary for the experience of the temporal order of our mental states. But even if this point is granted, it does not guarantee that the assumption itself is true. So even if the belief in a given proposition is a psychological necessity for a certain experience, it does not automatically follow that this belief itself is true. Instead, it could be the case that humans are just wired in such a way that they have to believe in certain false assumptions.
Experimental philosophy
Experimental philosophy is the most recent development of the methods discussed in this article: it began only in the early years of the 21st century. Experimental philosophers try to answer philosophical questions by gathering empirical data. It is an interdisciplinary approach that applies the methods of psychology and the cognitive sciences to topics studied by philosophy. This usually takes the form of surveys probing the intuitions of ordinary people and then drawing conclusions from the findings. For example, one such inquiry came to the conclusion that justified true belief may be sufficient for knowledge despite various Gettier cases claiming to show otherwise. The method of experimental philosophy can be used in both a negative and a positive program. As a negative program, it aims to challenge traditional philosophical movements and positions. This can be done, for example, by showing how the intuitions used to defend certain claims vary a lot depending on factors such as culture, gender, or ethnicity. This variation casts doubt on the reliability of the intuitions and thereby also on theories supported by them. As a positive program, it uses empirical data to support its own philosophical claims. It differs from other philosophical methods in that it usually studies the intuitions of ordinary people and uses them, and not the experts' intuitions, as philosophical evidence.
One problem for both the positive and the negative approaches is that the data obtained from surveys do not constitute hard empirical evidence since they do not directly express the intuitions of the participants. The participants may react to subtle pragmatic cues in giving their answers, which brings with it the need for further interpretation in order to get from the given answers to the intuitions responsible for these answers. Another problem concerns the question of how reliable the intuitions of ordinary people on the often very technical issues are. The core of this objection is that, for many topics, the opinions of ordinary people are not very reliable since they have little familiarity with the issues themselves and the underlying problems they may pose. For this reason, it has been argued that they cannot replace the expert intuitions found in trained philosophers. Some critics have even argued that experimental philosophy does not really form part of philosophy. This objection does not reject that the method of experimental philosophy has value, it just rejects that this method belongs to philosophical methodology.
Others
Various other philosophical methods have been proposed. The Socratic method or Socratic debate is a form of cooperative philosophizing in which one philosopher usually first states a claim, which is then scrutinized by their interlocutor by asking them questions about various related claims, often with the implicit goal of putting the initial claim into doubt. It continues to be a popular method for teaching philosophy. Plato and Aristotle emphasize the role of wonder in the practice of philosophy. On this view, "philosophy begins in wonder" and "[i]t was their wonder, astonishment, that first led men to philosophize and still leads them". This position is also adopted in the more recent philosophy of Nicolai Hartmann. Various other types of methods were discussed in ancient Greek philosophy, like analysis, synthesis, dialectics, demonstration, definition, and reduction to absurdity. The medieval philosopher Thomas Aquinas identifies composition and division as ways of forming propositions while he sees invention and judgment as forms of reasoning from the known to the unknown.
Various methods for the selection between competing theories have been proposed. They often focus on the theoretical virtues of the involved theories. One such method is based on the idea that, everything else being equal, the simpler theory is to be preferred. Another gives preference to the theory that provides the best explanation. According to the method of epistemic conservatism, we should, all other things being equal, prefer the theory which, among its competitors, is the most conservative, i.e. the one closest to the beliefs we currently hold. One problem with these methods of theory selection is that it is usually not clear how the different virtues are to be weighted, often resulting in cases where they are unable to resolve disputes between competing theories that excel at different virtues.
Methodological naturalism holds that all philosophical claims are synthetic claims that ultimately depend for their justification or rejection on empirical observational evidence. In this sense, philosophy is continuous with the natural sciences in that they both give priority to the scientific method for investigating all areas of reality.
According to truthmaker theorists, every true proposition is true because another entity, its truthmaker, exists. This principle can be used as a methodology to critically evaluate philosophical theories. In particular, this concerns theories that accept certain truths but are unable to provide their truthmaker. Such theorists are derided as ontological cheaters. For example, this can be applied to philosophical presentism, the view that nothing outside the present exists. Philosophical presentists usually accept the very common belief that dinosaurs existed but have trouble in providing a truthmaker for this belief since they deny existence to past entities.
In philosophy, the term "genealogical method" refers to a form of criticism that tries to expose commonly held beliefs by uncovering their historical origin and function. For example, it may be used to reject specific moral claims or the status of truth by giving a concrete historical reconstruction of how their development was contingent on power relations in society. This is usually accompanied by the assertion that these beliefs were accepted and became established, because of non-rational considerations, such as because they served the interests of a predominant class.
Disagreements and influence
The disagreements within philosophy do not only concern which first-order philosophical claims are true, they also concern the second-order issue of which philosophical methods to use. One way to evaluate philosophical methods is to assess how well they do at solving philosophical problems. The question of the nature of philosophy has important implications for which methods of inquiry are appropriate to philosophizing. Seeing philosophy as an empirical science brings its methods much closer to the methods found in the natural sciences. Seeing it as the attempt to clarify concepts and increase understanding, on the other hand, usually leads to a methodology much more focused on a priori reasoning. In this sense, philosophical methodology is closely tied up with the question of how philosophy is to be defined. Different conceptions of philosophy often associate it with different goals, leading to certain methods being more or less suited to reach the corresponding goal.
The interest in philosophical methodology has risen considerably in contemporary philosophy. But some philosophers reject its importance by emphasizing that "preoccupation with questions about methods tends to distract us from prosecuting the methods themselves". However, such objections are often dismissed by pointing out that philosophy is at its core a reflective and critical enterprise, which is perhaps best exemplified by its preoccupation with its own methods. This is also backed up by the arguments to the effect that one's philosophical method has important implications for how one does philosophy and which philosophical claims one accepts or rejects. Since philosophy also studies the methodology of other disciplines, such as the methods of science, it has been argued that the study of its own methodology is an essential part of philosophy.
In several instances in the history of philosophy, the discovery of a new philosophical method, such as Cartesian doubt or the phenomenological method, has had important implications both on how philosophers conducted their theorizing and what claims they set out to defend. In some cases, such discoveries led the involved philosophers to overly optimistic outlooks, seeing them as historic breakthroughs that would dissolve all previous disagreements in philosophy.
Relation to other fields
Science
The methods of philosophy differ in various respects from the methods found in the natural sciences. One important difference is that philosophy does not use experimental data obtained through measuring equipment like telescopes or cloud chambers to justify its claims. For example, even philosophical naturalists emphasizing the close relation between philosophy and the sciences mostly practice a form of armchair theorizing instead of gathering empirical data. Experimental philosophers are an important exception: they use methods found in social psychology and other empirical sciences to test their claims.
One reason for the methodological difference between philosophy and science is that philosophical claims are usually more speculative and cannot be verified or falsified by looking through a telescope. This problem is not solved by citing works published by other philosophers, since it only defers the question of how their insights are justified. An additional complication concerning testimony is that different philosophers often defend mutually incompatible claims, which poses the challenge of how to select between them. Another difference between scientific and philosophical methodology is that there is wide agreement among scientists concerning their methods, testing procedures, and results. This is often linked to the fact that science has seen much more progress than philosophy.
Epistemology
An important goal of philosophical methods is to assist philosophers in attaining knowledge. This is often understood in terms of evidence. In this sense, philosophical methodology is concerned with the questions of what constitutes philosophical evidence, how much support it offers, and how to acquire it. In contrast to the empirical sciences, it is often claimed that empirical evidence is not used in justifying philosophical theories, that philosophy is less about the empirical world and more about how we think about the empirical world. In this sense, philosophy is often identified with conceptual analysis, which is concerned with explaining concepts and showing their interrelations. Philosophical naturalists often reject this line of thought and hold that empirical evidence can confirm or disconfirm philosophical theories, at least indirectly.
Philosophical evidence, which may be obtained, for example, through intuitions or thought experiments, is central for justifying basic principles and axioms. These principles can then be used as premises to support further conclusions. Some approaches to philosophical methodology emphasize that these arguments have to be deductively valid, i.e. that the truth of their premises ensures the truth of their conclusion. In other cases, philosophers may commit themselves to working hypotheses or norms of investigation even though they lack sufficient evidence. Such assumptions can be quite fruitful in simplifying the possibilities the philosopher needs to consider and by guiding them to ask interesting questions. But the lack of evidence makes this type of enterprise vulnerable to criticism.
See also
Scholarly method
Scientific method
Historical method
Dialectic
References
External links
Metaphilosophy
Ethics
Ethics is the philosophical study of moral phenomena. Also called moral philosophy, it investigates normative questions about what people ought to do or which behavior is morally right. Its main branches include normative ethics, applied ethics, and metaethics.
Normative ethics aims to find general principles that govern how people should act. Applied ethics examines concrete ethical problems in real-life situations, such as abortion, treatment of animals, and business practices. Metaethics explores the underlying assumptions and concepts of ethics. It asks whether there are objective moral facts, how moral knowledge is possible, and how moral judgments motivate people. Influential normative theories are consequentialism, deontology, and virtue ethics. According to consequentialists, an act is right if it leads to the best consequences. Deontologists focus on acts themselves, saying that they must adhere to duties, like telling the truth and keeping promises. Virtue ethics sees the manifestation of virtues, like courage and compassion, as the fundamental principle of morality.
Ethics is closely connected to value theory, which studies the nature and types of value, like the contrast between intrinsic and instrumental value. Moral psychology is a related empirical field and investigates psychological processes involved in morality, such as reasoning and the formation of character. Descriptive ethics describes the dominant moral codes and beliefs in different societies and considers their historical dimension.
The history of ethics started in the ancient period with the development of ethical principles and theories in ancient Egypt, India, China, and Greece. This period saw the emergence of ethical teachings associated with Hinduism, Buddhism, Confucianism, Daoism, and contributions of philosophers like Socrates and Aristotle. During the medieval period, ethical thought was strongly influenced by religious teachings. In the modern period, this focus shifted to a more secular approach concerned with moral experience, reasons for acting, and the consequences of actions. An influential development in the 20th century was the emergence of metaethics.
Definition
Ethics, also called moral philosophy, is the study of moral phenomena. It is one of the main branches of philosophy and investigates the nature of morality and the principles that govern the moral evaluation of conduct, character traits, and institutions. It examines what obligations people have, what behavior is right and wrong, and how to lead a good life. Some of its key questions are "How should one live?" and "What gives meaning to life?". In contemporary philosophy, ethics is usually divided into normative ethics, applied ethics, and metaethics.
Morality is about what people ought to do rather than what they actually do, what they want to do, or what social conventions require. As a rational and systematic field of inquiry, ethics studies practical reasons why people should act one way rather than another. Most ethical theories seek universal principles that express a general standpoint of what is objectively right and wrong. In a slightly different sense, the term ethics can also refer to individual ethical theories in the form of a rational system of moral principles, such as Aristotelian ethics, and to a moral code that certain societies, social groups, or professions follow, as in Protestant work ethic and medical ethics.
The English word ethics has its roots in the Ancient Greek word êthos, meaning "character" and "custom". This word gave rise to the Ancient Greek word êthikós, which was translated into Latin as ethica and entered the English language in the 15th century through the Old French term éthique. The term morality originates in the Latin word moralis, meaning "manner" and "character". It was introduced into the English language during the Middle English period through the Old French term moralité.
The terms ethics and morality are usually used interchangeably but some philosophers distinguish between the two. According to one view, morality focuses on what moral obligations people have while ethics is broader and includes ideas about what is good and how to lead a meaningful life. Another difference is that codes of conduct in specific areas, such as business and environment, are usually termed ethics rather than morality, as in business ethics and environmental ethics.
Normative ethics
Normative ethics is the philosophical study of ethical conduct and investigates the fundamental principles of morality. It aims to discover and justify general answers to questions like "How should one live?" and "How should people act?", usually in the form of universal or domain-independent principles that determine whether an act is right or wrong. For example, given the particular impression that it is wrong to set a child on fire for fun, normative ethics aims to find more general principles that explain why this is the case, like the principle that one should not cause extreme suffering to the innocent, which may itself be explained in terms of a more general principle. Many theories of normative ethics also aim to guide behavior by helping people make moral decisions.
Theories in normative ethics state how people should act or what kind of behavior is correct. They do not aim to describe how people normally act, what moral beliefs ordinary people have, how these beliefs change over time, or what ethical codes are upheld in certain social groups. These topics belong to descriptive ethics and are studied in fields like anthropology, sociology, and history rather than normative ethics.
Some systems of normative ethics arrive at a single principle covering all possible cases. Others encompass a small set of basic rules that address all or at least the most important moral considerations. One difficulty for systems with several basic principles is that these principles may conflict with each other in some cases and lead to ethical dilemmas.
Distinct theories in normative ethics suggest different principles as the foundation of morality. The three most influential schools of thought are consequentialism, deontology, and virtue ethics. These schools are usually presented as exclusive alternatives, but depending on how they are defined, they can overlap and do not necessarily exclude one another. In some cases, they differ in which acts they see as right or wrong. In other cases, they recommend the same course of action but provide different justifications for why it is right.
Consequentialism
Consequentialism, also called teleological ethics, says that morality depends on consequences. According to the most common view, an act is right if it leads to the best possible future, meaning that no alternative course of action has better consequences. A key aspect of consequentialist theories is that they provide a characterization of what is good and then define what is right in terms of what is good. For example, classical utilitarianism says that pleasure is good and that the action leading to the most overall pleasure is right. Consequentialism has been discussed indirectly since the formulation of classical utilitarianism in the late 18th century. A more explicit analysis of this view happened in the 20th century, when the term was coined by G. E. M. Anscombe.
Consequentialists usually understand the consequences of an action in a very wide sense that includes the totality of its effects. This is based on the idea that actions make a difference in the world by bringing about a causal chain of events that would not have existed otherwise. A core intuition behind consequentialism is that the future should be shaped to achieve the best possible outcome.
The act itself is usually not seen as part of the consequences. This means that if an act has intrinsic value or disvalue, it is not included as a factor. Some consequentialists see this as a flaw, saying that all value-relevant factors need to be considered. They try to avoid this complication by including the act itself as part of the consequences. A related approach is to characterize consequentialism not in terms of consequences but in terms of outcome, with the outcome being defined as the act together with its consequences.
Most forms of consequentialism are agent-neutral. This means that the value of consequences is assessed from a neutral perspective, that is, acts should have consequences that are good in general and not just good for the agent. It is controversial whether agent-relative moral theories, like ethical egoism, should be considered as types of consequentialism.
Types
There are many different types of consequentialism. They differ based on what type of entity they evaluate, what consequences they take into consideration, and how they determine the value of consequences. Most theories assess the moral value of acts. However, consequentialism can also be used to evaluate motives, character traits, rules, and policies.
Many types assess the value of consequences based on whether they promote happiness or suffering. But there are also alternative evaluative principles, such as desire satisfaction, autonomy, freedom, knowledge, friendship, beauty, and self-perfection. Some forms of consequentialism hold that there is only a single source of value. The most prominent among them is utilitarianism, which states that the moral value of acts only depends on the pleasure and suffering they cause. An alternative approach says that there are many different sources of value, which all contribute to one overall value. Before the 20th century, consequentialists were only concerned with the total amount of value or the aggregate good. In the 20th century, alternative views were developed that additionally consider the distribution of value. One of them states that an equal distribution of goods is better than an unequal distribution even if the aggregate good is the same.
There are disagreements about which consequences should be assessed. An important distinction is between act consequentialism and rule consequentialism. According to act consequentialism, the consequences of an act determine its moral value. This means that there is a direct relation between the consequences of an act and its moral value. Rule consequentialism, by contrast, holds that an act is right if it follows a certain set of rules. Rule consequentialism determines the best rules by considering their outcomes at a community level. People should follow the rules that lead to the best consequences when everyone in the community follows them. This implies that the relation between an act and its consequences is indirect. For example, if telling the truth is one of the best rules, then according to rule consequentialism, a person should tell the truth even in specific cases where lying would lead to better consequences.
Another disagreement is between actual and expected consequentialism. According to the traditional view, only the actual consequences of an act affect its moral value. One difficulty of this view is that many consequences cannot be known in advance. This means that in some cases, even well-planned and well-intentioned acts are morally wrong if they inadvertently lead to negative outcomes. An alternative perspective states that what matters are not the actual consequences but the expected consequences. This view takes into account that when deciding what to do, people have to rely on their limited knowledge of the total consequences of their actions. According to this view, a course of action has positive moral value despite leading to an overall negative outcome if it had the highest expected value, for example, because the negative outcome could not be anticipated or was unlikely.
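The contrast between actual and expected consequentialism can be illustrated with the standard expected value formula from decision theory. This is an illustrative formalization, not part of the original statements of either view:

```latex
EV(a) = \sum_{i} p_i \, v(o_i)
```

Here a is an act, the o_i are its possible outcomes, p_i is the probability of each outcome, and v assigns a value to each. Expected consequentialism evaluates an act by EV(a) at the time of decision, while actual consequentialism evaluates it by v applied to the single outcome that in fact obtains, which may only be known afterwards.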
A further difference is between maximizing and satisficing consequentialism. According to maximizing consequentialism, only the best possible act is morally permitted. This means that acts with positive consequences are wrong if there are alternatives with even better consequences. One criticism of maximizing consequentialism is that it demands too much by requiring that people do significantly more than they are socially expected to. For example, if the best action for someone with a good salary would be to donate 70% of their income to charity, it would be morally wrong for them to only donate 65%. Satisficing consequentialism, by contrast, only requires that an act is "good enough" even if it is not the best possible alternative. According to this view, it is possible to do more than one is morally required to do.
Mohism in ancient Chinese philosophy is one of the earliest forms of consequentialism. It arose in the 5th century BCE and argued that political action should promote justice as a means to increase the welfare of the people.
Utilitarianism
The most well-known form of consequentialism is utilitarianism. In its classical form, it is an act consequentialism that sees happiness as the only source of intrinsic value. This means that an act is morally right if it produces "the greatest good for the greatest number" by increasing happiness and reducing suffering. Utilitarians do not deny that other things also have value, like health, friendship, and knowledge. However, they deny that these things have intrinsic value. Instead, they say that they have extrinsic value because they affect happiness and suffering. In this regard, they are desirable as a means but, unlike happiness, not as an end. The view that pleasure is the only thing with intrinsic value is called ethical or evaluative hedonism.
Classical utilitarianism was initially formulated by Jeremy Bentham at the end of the 18th century and further developed by John Stuart Mill. Bentham introduced the hedonic calculus to assess the value of consequences. Two key aspects of the hedonic calculus are the intensity and the duration of pleasure. According to this view, a pleasurable experience has a high value if it has a high intensity and lasts for a long time. A common criticism of Bentham's utilitarianism argued that its focus on the intensity of pleasure promotes an immoral lifestyle centered around indulgence in sensory gratification. Mill responded to this criticism by distinguishing between higher and lower pleasures. He stated that higher pleasures, like the intellectual satisfaction of reading a book, are more valuable than lower pleasures, like the sensory enjoyment of food and drink, even if their intensity and duration are the same. Since its original formulation, many variations of utilitarianism have developed, including the difference between act and rule utilitarianism and between maximizing and satisficing utilitarianism.
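Bentham's hedonic calculus, as described above, can be sketched in a simplified form as weighing each pleasurable or painful episode by its intensity and duration. This is an illustrative simplification: Bentham's full calculus includes further factors such as certainty, propinquity, fecundity, purity, and extent, and Mill's distinction between higher and lower pleasures would add a further quality weighting:

```latex
V(a) \approx \sum_{i} I_i \times D_i
```

Here V(a) is the value of an act a, and I_i and D_i are the intensity and duration of each episode of pleasure (counted positively) or suffering (counted negatively) that the act produces.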
Deontology
Deontology assesses the moral rightness of actions based on a set of norms or principles. These norms describe the requirements that all actions need to follow. They may include principles like telling the truth, keeping promises, and not intentionally harming others. Unlike consequentialists, deontologists hold that the validity of general moral principles does not directly depend on their consequences. They state that these principles should be followed in every case since they express how actions are inherently right or wrong. According to moral philosopher David Ross, it is wrong to break a promise even if no harm comes from it. Deontologists are interested in which actions are right and often allow that there is a gap between what is right and what is good. Many focus on prohibitions and describe which acts are forbidden under any circumstances.
Agent-centered and patient-centered
Agent-centered deontological theories focus on the person who acts and the duties they have. Agent-centered theories often focus on the motives and intentions behind people's actions, highlighting the importance of acting for the right reasons. They tend to be agent-relative, meaning that the reasons for which people should act depend on personal circumstances. For example, a parent has a special obligation to their child, while a stranger does not have this kind of obligation toward a child they do not know. Patient-centered theories, by contrast, focus on the people affected by actions and the rights they have. An example is the requirement to treat other people as ends and not merely as a means to an end. This requirement can be used to argue, for example, that it is wrong to kill a person against their will even if this act would save the lives of several others. Patient-centered deontological theories are usually agent-neutral, meaning that they apply equally to everyone in a situation, regardless of their specific role or position.
Kantianism
Immanuel Kant (1724–1804) is one of the most well-known deontologists. He states that reaching outcomes that people desire, such as being happy, is not the main purpose of moral actions. Instead, he argues that there are universal principles that apply to everyone independent of their desires. He uses the term categorical imperative for these principles, saying that they have their source in the structure of practical reason and are true for all rational agents. According to Kant, to act morally is to act in agreement with reason as expressed by these principles while violating them is both immoral and irrational.
Kant provided several formulations of the categorical imperative. One formulation says that a person should only follow maxims that can be universalized. This means that the person would want everyone to follow the same maxim as a universal law applicable to everyone. Another formulation states that one should treat other people always as ends in themselves and never as mere means to an end. This formulation focuses on respecting and valuing other people for their own sake rather than using them in the pursuit of personal goals.
In either case, Kant says that what matters is to have a good will. A person has a good will if they respect the moral law and form their intentions and motives in agreement with it. Kant states that actions motivated in such a way are unconditionally good, meaning that they are good even in cases where they result in undesirable consequences.
Others
Divine command theory says that God is the source of morality. It states that moral laws are divine commands and that to act morally is to obey and follow God's will. While all divine command theorists agree that morality depends on God, there are disagreements about the precise content of the divine commands, and theorists belonging to different religions tend to propose different moral laws. For example, Christian and Jewish divine command theorists may argue that the Ten Commandments express God's will while Muslims may reserve this role for the teachings of the Quran.
Contractualists reject the reference to God as the source of morality and argue instead that morality is based on an explicit or implicit social contract between humans. They state that actual or hypothetical consent to this contract is the source of moral norms and duties. To determine which duties people have, contractualists often rely on a thought experiment about what rational people under ideal circumstances would agree on. For example, if they would agree that people should not lie then there is a moral obligation to refrain from lying. Because it relies on consent, contractualism is often understood as a patient-centered form of deontology. Famous social contract theorists include Thomas Hobbes, John Locke, Jean-Jacques Rousseau, and John Rawls.
Discourse ethics also focuses on social agreement on moral norms but says that this agreement is based on communicative rationality. It aims to arrive at moral norms for pluralistic modern societies that encompass a diversity of viewpoints. A universal moral norm is seen as valid if all rational discourse participants do or would approve. This way, morality is not imposed by a single moral authority but arises from the moral discourse within society. This discourse should aim to establish an ideal speech situation to ensure fairness and inclusivity. In particular, this means that discourse participants are free to voice their different opinions without coercion but are at the same time required to justify them using rational argumentation.
Virtue ethics
The main concern of virtue ethics is how virtues are expressed in actions. As such, it is neither directly interested in the consequences of actions nor in universal moral duties. Virtues are positive character traits like honesty, courage, kindness, and compassion. They are usually understood as dispositions to feel, decide, and act in a certain manner by being wholeheartedly committed to this manner. Virtues contrast with vices, which are their harmful counterparts.
Virtue theorists usually say that the mere possession of virtues by itself is not sufficient. Instead, people should manifest virtues in their actions. An important factor is the practical wisdom, also called phronesis, of knowing when, how, and which virtue to express. For example, a lack of practical wisdom may lead courageous people to perform morally wrong actions by taking unnecessary risks that should better be avoided.
Different types of virtue ethics differ on how they understand virtues and their role in practical life. Eudaimonism is the original form of virtue theory developed in Ancient Greek philosophy and draws a close relation between virtuous behavior and happiness. It states that people flourish by living a virtuous life. Eudaimonist theories often hold that virtues are positive potentials residing in human nature and that actualizing these potentials results in leading a good and happy life. Agent-based theories, by contrast, see happiness only as a side effect and focus instead on the admirable traits and motivational characteristics expressed while acting. This is often combined with the idea that one can learn from exceptional individuals what those characteristics are. Feminist ethics of care are another form of virtue ethics. They emphasize the importance of interpersonal relationships and say that benevolence by caring for the well-being of others is one of the key virtues.
Influential schools of virtue ethics in ancient philosophy were Aristotelianism and Stoicism. According to Aristotle (384–322 BCE), each virtue is a golden mean between two types of vices: excess and deficiency. For example, courage is a virtue that lies between the deficient state of cowardice and the excessive state of recklessness. Aristotle held that virtuous action leads to happiness and makes people flourish in life. Stoicism emerged about 300 BCE and taught that, through virtue alone, people can achieve happiness characterized by a peaceful state of mind free from emotional disturbances. The Stoics advocated rationality and self-mastery to achieve this state. In the 20th century, virtue ethics experienced a resurgence thanks to philosophers such as Elizabeth Anscombe, Philippa Foot, Alasdair MacIntyre, and Martha Nussbaum.
Other traditions
There are many other schools of normative ethics in addition to the three main traditions. Pragmatist ethics focuses on the role of practice and holds that one of the key tasks of ethics is to solve practical problems in concrete situations. It has certain similarities to utilitarianism and its focus on consequences but concentrates more on how morality is embedded in and relative to social and cultural contexts. Pragmatists tend to give more importance to habits than to conscious deliberation and understand morality as a habit that should be shaped in the right way.
Postmodern ethics agrees with pragmatist ethics about the cultural relativity of morality. It rejects the idea that there are objective moral principles that apply universally to all cultures and traditions. It asserts that there is no one coherent ethical code since morality itself is irrational and humans are morally ambivalent beings. Postmodern ethics instead focuses on how moral demands arise in specific situations as one encounters other people.
Ethical egoism is the view that people should act in their self-interest or that an action is morally right if the person acts for their own benefit. It differs from psychological egoism, which states that people actually follow their self-interest without claiming that they should do so. Ethical egoists may act in agreement with commonly accepted moral expectations and benefit other people, for example, by keeping promises, helping friends, and cooperating with others. However, they do so only as a means to promote their self-interest. Ethical egoism is often criticized as an immoral and contradictory position.
Normative ethics has a central place in most religions. Key aspects of Jewish ethics are to follow the 613 commandments of God according to the Mitzvah duty found in the Torah and to take responsibility for societal welfare. Christian ethics puts less emphasis on following precise laws and teaches instead the practice of selfless love, such as the Great Commandment to "Love your neighbor as yourself". The Five Pillars of Islam constitute a basic framework of Muslim ethics and focus on the practice of faith, prayer, charity, fasting during Ramadan, and pilgrimage to Mecca. Buddhists emphasize the importance of compassion and loving-kindness towards all sentient entities. A similar outlook is found in Jainism, which has non-violence as its principal virtue. Duty is a central aspect of Hindu ethics and is about fulfilling social obligations, which may vary depending on a person's social class and stage of life. Confucianism places great emphasis on harmony in society and sees benevolence as a key virtue. Taoism extends the importance of living in harmony to the whole world and teaches that people should practice effortless action by following the natural flow of the universe. Indigenous belief systems, like Native American philosophy and the African Ubuntu philosophy, often emphasize the interconnectedness of all living beings and the environment while stressing the importance of living in harmony with nature.
Metaethics
Metaethics is the branch of ethics that examines the nature, foundations, and scope of moral judgments, concepts, and values. It is not interested in which actions are right but in what it means for an action to be right and whether moral judgments are objective and can be true at all. It further examines the meaning of morality and other moral terms. Metaethics is a metatheory that operates on a higher level of abstraction than normative ethics by investigating its underlying assumptions. Metaethical theories typically do not directly judge which normative ethical theories are correct. However, metaethical theories can still influence normative theories by examining their foundational principles.
Metaethics overlaps with various branches of philosophy. On the level of ontology, it examines whether there are objective moral facts. Concerning semantics, it asks what the meaning of moral terms is and whether moral statements have a truth value. The epistemological side of metaethics discusses whether and how people can acquire moral knowledge. Metaethics overlaps with psychology because of its interest in how moral judgments motivate people to act. It also overlaps with anthropology since it aims to explain how cross-cultural differences affect moral assessments.
Basic concepts
Metaethics examines basic ethical concepts and their relations. Ethics is primarily concerned with normative statements about what ought to be the case, in contrast to descriptive statements, which are about what is the case. Duties and obligations express requirements of what people ought to do. Duties are sometimes defined as counterparts of the rights that always accompany them. According to this view, someone has a duty to benefit another person if this other person has the right to receive that benefit. Obligation and permission are contrasting terms that can be defined through each other: to be obligated to do something means that one is not permitted not to do it, and to be permitted to do something means that one is not obligated not to do it. Some theorists define obligations in terms of values or what is good. When used in a general sense, good contrasts with bad. When describing people and their intentions, the term evil rather than bad is often employed.
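The interdefinability of obligation and permission described above is standardly expressed in deontic logic, with O read as "it is obligatory that" and P as "it is permitted that". The notation is an illustrative formalization of the verbal definitions:

```latex
O(A) \equiv \neg P(\neg A) \qquad\qquad P(A) \equiv \neg O(\neg A)
```

For example, the obligation to keep a promise amounts to it not being permitted to break it, and permission to stay home amounts to there being no obligation not to stay home.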
Obligations are used to assess the moral status of actions, motives, and character traits. An action is morally right if it is in tune with a person's obligations and morally wrong if it violates them. Supererogation is a special moral status that applies to cases in which the agent does more than is morally required of them. To be morally responsible for an action usually means that the person possesses and exercises certain capacities or some form of control. If a person is morally responsible then it is appropriate to respond to them in certain ways, for example, by praising or blaming them.
Realism, relativism, and nihilism
A major debate in metaethics is about the ontological status of morality, questioning whether ethical values and principles are real. It examines whether moral properties exist as objective features independent of the human mind and culture rather than as subjective constructs or expressions of personal preferences and cultural norms.
Moral realists accept the claim that there are objective moral facts. This view implies that moral values are mind-independent aspects of reality and that there is an absolute fact about whether a given action is right or wrong. A consequence of this view is that moral requirements have the same ontological status as non-moral facts: it is an objective fact whether there is an obligation to keep a promise just as it is an objective fact whether a thing is rectangular. Moral realism is often associated with the claim that there are universal ethical principles that apply equally to everyone. It implies that if two people disagree about a moral evaluation then at least one of them is wrong. This observation is sometimes taken as an argument against moral realism since moral disagreement is widespread across most areas of morality.
Moral relativists reject the idea that morality is an objective feature of reality. They argue instead that moral principles are human inventions. This means that a behavior is not objectively right or wrong but only subjectively right or wrong relative to a certain standpoint. Moral standpoints may differ between persons, cultures, and historical periods. For example, moral statements like "Slavery is wrong" or "Suicide is permissible" may be true in one culture and false in another. Some moral relativists say that moral systems are constructed to serve certain goals such as social coordination. According to this view, different societies and different social groups within a society construct different moral systems based on their diverging purposes. Emotivism provides a different explanation, stating that morality arises from moral emotions, which are not the same for everyone.
Moral nihilists deny the existence of moral facts. They reject the existence of both objective moral facts defended by moral realism and subjective moral facts defended by moral relativism. They believe that the basic assumptions underlying moral claims are misguided. Some moral nihilists conclude from this that anything is allowed. A slightly different view emphasizes that moral nihilism is not itself a moral position about what is allowed and prohibited but the rejection of any moral position. Moral nihilism, like moral relativism, recognizes that people judge actions as right or wrong from different perspectives. However, it disagrees that this practice involves morality and sees it as just one type of human behavior.
Naturalism and non-naturalism
A central disagreement among moral realists is between naturalism and non-naturalism. Naturalism states that moral properties are natural properties accessible to empirical observation. They are similar to the natural properties investigated by the natural sciences, like color and shape. Some moral naturalists hold that moral properties are a unique and basic type of natural property. Another view states that moral properties are real but not a fundamental part of reality and can be reduced to other natural properties, such as properties describing the causes of pleasure and pain.
Non-naturalism holds that moral properties form part of reality but denies that they are identical or reducible to natural properties. This view is usually motivated by the idea that moral properties are unique because they express what should be the case. Proponents of this position often emphasize this uniqueness by claiming that it is a fallacy to define ethics in terms of natural entities or to infer prescriptive from descriptive statements.
Cognitivism and non-cognitivism
The metaethical debate between cognitivism and non-cognitivism is about the meaning of moral statements and is a part of the study of semantics. According to cognitivism, moral statements like "Abortion is morally wrong" and "Going to war is never morally justified" are truth-apt, meaning that they all have a truth value: they are either true or false. Cognitivism claims that moral statements have a truth value but is not interested in which truth value they have. It is often seen as the default position since moral statements resemble other statements, like "Abortion is a medical procedure" or "Going to war is a political decision", which have a truth value.
There is a close relation between the semantic theory of cognitivism and the ontological theory of moral realism. Moral realists assert that moral facts exist. This can be used to explain why moral statements are true or false: a statement is true if it is consistent with the facts and false otherwise. As a result, philosophers who accept one theory often accept the other as well. An exception is error theory, which combines cognitivism with moral nihilism by claiming that all moral statements are false because there are no moral facts.
Non-cognitivism is the view that moral statements lack a truth value. According to this view, the statement "Murder is wrong" is neither true nor false. Some non-cognitivists claim that moral statements have no meaning at all. A different interpretation is that they have another type of meaning. Emotivism says that they articulate emotional attitudes. According to this view, the statement "Murder is wrong" expresses that the speaker has a negative moral attitude towards murder or disapproves of it. Prescriptivism, by contrast, understands moral statements as commands. According to this view, stating that "Murder is wrong" expresses a command like "Do not commit murder".
Moral knowledge
The epistemology of ethics studies whether or how one can know moral truths. Foundationalist views state that some moral beliefs are basic and do not require further justification. Ethical intuitionism is one such view that says that humans have a special cognitive faculty through which they can know right from wrong. Intuitionists often argue that general moral truths, like "Lying is wrong", are self-evident and that it is possible to know them without relying on empirical experience. A different foundationalist position focuses on particular observations rather than general intuitions. It says that if people are confronted with a concrete moral situation, they can perceive whether right or wrong conduct was involved.
In contrast to foundationalists, coherentists say that there are no basic moral beliefs. They argue that beliefs form a complex network and mutually support and justify one another. According to this view, a moral belief can only amount to knowledge if it coheres with the rest of the beliefs in the network. Moral skeptics say that people are unable to distinguish between right and wrong behavior, thereby rejecting the idea that moral knowledge is possible. A common objection by critics of moral skepticism asserts that it leads to immoral behavior.
Thought experiments are used as a method in ethics to decide between competing theories. They usually present an imagined situation involving an ethical dilemma and explore how people's intuitions of right and wrong change based on specific details in that situation. For example, in Philippa Foot's trolley problem, a person can flip a switch to redirect a trolley from one track to another, thereby sacrificing the life of one person to save five. This scenario explores how the difference between doing and allowing harm affects moral obligations. Another thought experiment, proposed by Judith Jarvis Thomson, examines the moral implications of abortion by imagining a situation in which a person gets connected without their consent to an ill violinist. In this scenario, the violinist dies if the connection is severed, similar to how a fetus dies in the case of abortion. The thought experiment explores whether it would be morally permissible to sever the connection within the next nine months.
Moral motivation
On the level of psychology, metaethics is interested in how moral beliefs and experiences affect behavior. According to motivational internalists, there is a direct link between moral judgments and action. This means that every judgment about what is right motivates the person to act accordingly. For example, Socrates defends a strong form of motivational internalism by holding that a person can only perform an evil deed if they are unaware that it is evil. Weaker forms of motivational internalism say that people can act against their own moral judgments, for example, because of the weakness of the will. Motivational externalists accept that people can judge an act to be morally required without feeling a reason to engage in it. This means that moral judgments do not always provide motivational force. A closely related question is whether moral judgments can provide motivation on their own or need to be accompanied by other mental states, such as a desire to act morally.
Applied ethics
Applied ethics, also known as practical ethics, is the branch of ethics and applied philosophy that examines concrete moral problems encountered in real-life situations. Unlike normative ethics, it is not concerned with discovering or justifying universal ethical principles. Instead, it studies how those principles can be applied to specific domains of practical life, what consequences they have in these fields, and whether additional domain-specific factors need to be considered.
One of the main challenges of applied ethics is to bridge the gap between abstract universal theories and their application to concrete situations. For example, an in-depth understanding of Kantianism or utilitarianism is usually not sufficient to decide how to analyze the moral implications of a medical procedure like abortion. One reason is that it may not be clear how the Kantian requirement of respecting everyone's personhood applies to a fetus or, from a utilitarian perspective, what the long-term consequences are in terms of the greatest good for the greatest number. This difficulty is particularly relevant to applied ethicists who employ a top-down methodology by starting from universal ethical principles and applying them to particular cases within a specific domain. A different approach is to use a bottom-up methodology, known as casuistry. This method does not start from universal principles but from moral intuitions about particular cases. It seeks to arrive at moral principles relevant to a specific domain, which may not be applicable to other domains. In either case, inquiry into applied ethics is often triggered by ethical dilemmas in which a person is subject to conflicting moral requirements.
Applied ethics covers issues belonging to both the private sphere, like right conduct in the family and close relationships, and the public sphere, like moral problems posed by new technologies and duties toward future generations. Major branches include bioethics, business ethics, and professional ethics. There are many other branches, and their domains of inquiry often overlap.
Bioethics
Bioethics covers moral problems associated with living organisms and biological disciplines. A key problem in bioethics is how features such as consciousness, being able to feel pleasure and pain, rationality, and personhood affect the moral status of entities. These differences concern, for example, how to treat non-living entities like rocks and non-sentient entities like plants in contrast to animals, and whether humans have a different moral status than other animals. According to anthropocentrism, only humans have a basic moral status. This suggests that all other entities possess a derivative moral status only insofar as they impact human life. Sentientism, by contrast, extends an inherent moral status to all sentient beings. Further positions include biocentrism, which also covers non-sentient lifeforms, and ecocentrism, which states that all of nature has a basic moral status.
Bioethics is relevant to various aspects of life and many professions. It covers a wide range of moral problems associated with topics like abortion, cloning, stem cell research, euthanasia, suicide, animal testing, intensive animal farming, nuclear waste, and air pollution.
Bioethics can be divided into medical ethics, animal ethics, and environmental ethics based on whether the ethical problems relate to humans, other animals, or nature in general. Medical ethics is the oldest branch of bioethics. The Hippocratic Oath is one of the earliest texts to engage in medical ethics by establishing ethical guidelines for medical practitioners, like a prohibition against harming the patient. Medical ethics often addresses issues related to the start and end of life. It examines the moral status of fetuses, for example, whether they are full-fledged persons and whether abortion is a form of murder. Ethical issues also arise about whether a person has the right to end their life in cases of terminal illness or chronic suffering and if doctors may help them do so. Other topics in medical ethics include medical confidentiality, informed consent, research on human beings, organ transplantation, and access to healthcare.
Animal ethics examines how humans should treat other animals. This field often emphasizes the importance of animal welfare while arguing that humans should avoid or minimize the harm done to animals. There is wide agreement that it is wrong to torture animals for fun. The situation is more complicated in cases where harm is inflicted on animals as a side effect of the pursuit of human interests. This happens, for example, during factory farming, when using animals as food, and for research experiments on animals. A key topic in animal ethics is the formulation of animal rights. Animal rights theorists assert that animals have a certain moral status and that humans should respect this status when interacting with them. Examples of suggested animal rights include the right to life, the right to be free from unnecessary suffering, and the right to natural behavior in a suitable environment.
Environmental ethics deals with moral problems relating to the natural environment including animals, plants, natural resources, and ecosystems. In its widest sense, it covers the whole cosmos. In the domain of agriculture, this concerns the circumstances under which the vegetation of an area may be cleared to use it for farming and the implications of planting genetically modified crops. On a wider scale, environmental ethics addresses the problem of global warming and people's responsibility on the individual and collective levels, including topics like climate justice and duties towards future generations. Environmental ethicists often promote sustainable practices and policies directed at protecting and conserving ecosystems and biodiversity.
Business and professional ethics
Business ethics examines the moral implications of business conduct and how ethical principles apply to corporations and organizations. A key topic is corporate social responsibility, which is the responsibility of corporations to act in a manner that benefits society at large. Corporate social responsibility is a complex issue since many stakeholders are directly and indirectly involved in corporate decisions, such as the CEO, the board of directors, and the shareholders. A closely related topic is the question of whether corporations themselves, and not just their stakeholders, have moral agency. Business ethics further examines the role of honesty and fairness in business practices as well as the moral implications of bribery, conflict of interest, protection of investors and consumers, workers' rights, ethical leadership, and corporate philanthropy.
Professional ethics is a closely related field that studies ethical principles applying to members of a specific profession, like engineers, medical doctors, lawyers, and teachers. It is a diverse field since different professions often have different responsibilities. Principles applying to many professions include that the professional has the required expertise for the intended work and that they have personal integrity and are trustworthy. Further principles are to serve the interest of their target group, follow client confidentiality, and respect and uphold the client's rights, such as informed consent. More precise requirements often vary between professions. A cornerstone of engineering ethics is to protect public safety, health, and well-being. Legal ethics emphasizes the importance of respect for justice, personal integrity, and confidentiality. Key factors in journalism ethics include accuracy, truthfulness, independence, and impartiality as well as proper attribution to avoid plagiarism.
Other subfields
Many other fields of applied ethics are discussed in the academic literature. Communication ethics covers moral principles of communicative conduct. Two key issues in it are freedom of speech and speech responsibility. Freedom of speech concerns the ability to articulate one's opinions and ideas without the threats of punishment and censorship. Speech responsibility is about being accountable for the consequences of communicative action and inaction. A closely related field is information ethics, which focuses on the moral implications of creating, controlling, disseminating, and using information.
The ethics of technology examines the moral issues associated with the creation and use of any artifact, from simple spears to high-tech computers and nanotechnology. Central topics in the ethics of technology include the risks associated with creating new technologies, their responsible use, and questions about human enhancement through technological means, such as performance-enhancing drugs and genetic enhancement. Important subfields include computer ethics, ethics of artificial intelligence, machine ethics, ethics of nanotechnology, and nuclear ethics.
The ethics of war investigates moral problems of war and violent conflicts. According to just war theory, waging war is morally justified if it fulfills certain conditions. These conditions are commonly divided into requirements concerning the cause to initiate violent activities, such as self-defense, and the way those violent activities are conducted, such as avoiding excessive harm to civilians in the pursuit of legitimate military targets. Military ethics is a closely related field that is interested in the conduct of military personnel. It governs questions of the circumstances under which they are permitted to kill enemies, destroy infrastructure, and put the lives of their own troops at risk. Additional topics are the recruitment, training, and discharge of military personnel.
Further fields of applied ethics include political ethics, which examines the moral dimensions of political decisions, educational ethics, which covers ethical issues related to proper teaching practices, and sexual ethics, which addresses the moral implications of sexual behavior.
Related fields
Value theory
Value theory, also called axiology, is the philosophical study of value. It examines the nature and types of value. A central distinction is between intrinsic and instrumental value. An entity has intrinsic value if it is good in itself or good for its own sake. An entity has instrumental value if it is valuable as a means to something else, for example, by causing something that has intrinsic value. Further topics include what kinds of things have value and how valuable they are. For instance, axiological hedonists say that pleasure is the only source of intrinsic value and that the magnitude of value corresponds to the degree of pleasure. Axiological pluralists, by contrast, hold that there are different sources of intrinsic value, such as happiness, knowledge, and beauty.
There are disagreements about the exact relation between value theory and ethics. Some philosophers characterize value theory as a subdiscipline of ethics while others see value theory as the broader term that encompasses other fields besides ethics, such as aesthetics and political philosophy. A different characterization sees the two disciplines as overlapping but distinct fields. The term axiological ethics is sometimes used for the discipline studying this overlap, that is, the part of ethics that studies values. The two disciplines are sometimes distinguished based on their focus: ethics is about moral behavior or what is right while value theory is about value or what is good. Some ethical theories, like consequentialism, stand very close to value theory by defining what is right in terms of what is good. But this is not true for ethics in general and deontological theories tend to reject the idea that what is good can be used to define what is right.
Moral psychology
Moral psychology explores the psychological foundations and processes involved in moral behavior. It is an empirical science that studies how humans think and act in moral contexts. It is interested in how moral reasoning and judgments take place, how moral character forms, what sensitivity people have to moral evaluations, and how people attribute and react to moral responsibility.
One of its key topics is moral development or the question of how morality develops on a psychological level from infancy to adulthood. According to Lawrence Kohlberg, children go through different stages of moral development as they understand moral principles first as fixed rules governing reward and punishment, then as conventional social norms, and later as abstract principles of what is objectively right across societies. A closely related question is whether and how people can be taught to act morally.
Evolutionary ethics, a closely related field, explores how evolutionary processes have shaped ethics. One of its key ideas is that natural selection is responsible for moral behavior and moral sensitivity. It interprets morality as an adaptation to evolutionary pressure that augments fitness by offering a selective advantage. Altruism, for example, can provide benefits to group survival by improving cooperation. Some theorists, like Mark Rowlands, argue that morality is not limited to humans, meaning that some non-human animals act based on moral emotions. Others explore evolutionary precursors to morality in non-human animals.
Descriptive ethics
Descriptive ethics, also called comparative ethics, studies existing moral codes, practices, and beliefs. It investigates and compares moral phenomena in different societies and different groups within a society. It aims to provide a value-neutral and empirical description without judging or justifying which practices are objectively right. For instance, the question of how nurses think about the ethical implications of abortion belongs to descriptive ethics. Another example is descriptive business ethics, which describes ethical standards in the context of business, including common practices, official policies, and employee opinions. Descriptive ethics also has a historical dimension by exploring how moral practices and beliefs have changed over time.
Descriptive ethics is a multidisciplinary field that is covered by disciplines such as anthropology, sociology, psychology, and history. Its empirical outlook contrasts with the philosophical inquiry into normative questions, such as which ethical principles are correct and how to justify them.
History
The history of ethics studies how moral philosophy has developed and evolved in the course of history. It has its origin in ancient civilizations. In ancient Egypt, the concept of Maat was used as an ethical principle to guide behavior and maintain order by emphasizing the importance of truth, balance, and harmony. In ancient India starting in the 2nd millennium BCE, the Vedas and later Upanishads were composed as the foundational texts of Hindu philosophy and discussed the role of duty and the consequences of one's actions. Buddhist ethics originated in ancient India between the sixth and the fifth centuries BCE and advocated compassion, non-violence, and the pursuit of enlightenment. Ancient China in the 6th century BCE saw the emergence of Confucianism, which focuses on moral conduct and self-cultivation by acting in agreement with virtues, and Daoism, which teaches that human behavior should be in harmony with the natural order of the universe.
In ancient Greece, Socrates emphasized the importance of inquiry into what a good life is by critically questioning established ideas and exploring concepts like virtue, justice, courage, and wisdom. According to Plato, to lead a good life means that the different parts of the soul are in harmony with each other. For Aristotle (384–322 BCE), a good life is associated with being happy by cultivating virtues and flourishing. Starting in the 4th century BCE, the close relation between right action and happiness was also explored by the Hellenistic schools of Epicureanism, which recommended a simple lifestyle without indulging in sensory pleasures, and Stoicism, which advocated living in tune with reason and virtue while practicing self-mastery and becoming immune to disturbing emotions.
Ethical thought in the medieval period was strongly influenced by religious teachings. Christian philosophers interpreted moral principles as divine commands originating from God. Thomas Aquinas (1224–1274 CE) developed natural law ethics by claiming that ethical behavior consists in following the laws and order of nature, which he believed were created by God. In the Islamic world, philosophers like Al-Farabi and Avicenna (980–1037 CE) synthesized ancient Greek philosophy with the ethical teachings of Islam while emphasizing the harmony between reason and faith. In medieval India, Hindu philosophers like Adi Shankara and Ramanuja (1017–1137 CE) saw the practice of spirituality to attain liberation as the highest goal of human behavior.
Moral philosophy in the modern period was characterized by a shift toward a secular approach to ethics. Thomas Hobbes (1588–1679) identified self-interest as the primary drive of humans. He concluded that it would lead to "a war of every man against every man" unless a social contract is established to avoid this outcome. David Hume (1711–1776) thought that only moral sentiments, like empathy, can motivate ethical actions while he saw reason not as a motivating factor but only as what anticipates the consequences of possible actions. Immanuel Kant (1724–1804), by contrast, saw reason as the source of morality. He formulated a deontological theory, according to which the ethical value of actions depends on their conformity with moral laws independent of their outcome. These laws take the form of categorical imperatives, which are universal requirements that apply to every situation. Georg Wilhelm Friedrich Hegel (1770–1831) saw Kant's categorical imperative on its own as an empty formalism and emphasized the role of social institutions in providing concrete content to moral duties. According to the Christian philosophy of Søren Kierkegaard (1813–1855), the demands of ethical duties are sometimes suspended when doing God's will. Friedrich Nietzsche (1844–1900) formulated criticisms of both Christian and Kantian morality. Another influential development in this period was the formulation of utilitarianism by Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873). According to the utilitarian doctrine, actions should promote happiness while reducing suffering and the right action is the one that produces the greatest good for the greatest number of people.
An important development in 20th-century ethics in analytic philosophy was the emergence of metaethics. Significant early contributions to this field were made by G. E. Moore (1873–1958), who argued that moral values are essentially different from other properties found in the natural world. R. M. Hare (1919–2002) followed this idea in formulating his prescriptivism, which states that moral statements are commands that, unlike regular judgments, are neither true nor false. J. L. Mackie (1917–1981) suggested that every moral statement is false since there are no moral facts. An influential argument for moral realism was made by Derek Parfit (1942–2017), who argued that morality concerns objective features of reality that give people reasons to act in one way or another. Bernard Williams (1929–2003) agreed with the close relation between reasons and ethics but defended a subjective view instead that sees reasons as internal mental states that may or may not reflect external reality. Another development in this period was the revival of ancient virtue ethics by philosophers like Philippa Foot (1920–2010). In the field of political philosophy, John Rawls (1921–2002) relied on Kantian ethics to analyze social justice as a form of fairness. In continental philosophy, phenomenologists such as Max Scheler (1874–1928) and Nicolai Hartmann (1882–1950) built ethical systems based on the claim that values have objective reality that can be investigated using the phenomenological method. Existentialists like Jean-Paul Sartre (1905–1980) and Simone de Beauvoir (1908–1986), by contrast, held that values are created by humans and explored the consequences of this view in relation to individual freedom, responsibility, and authenticity. This period also saw the emergence of feminist ethics, which questions traditional ethical assumptions associated with a male perspective and puts alternative concepts, like care, at the center.
See also
Index of ethics articles
Outline of ethics
Practical philosophy
Science of morality
References
Notes
Citations
Sources
External links
Branches of philosophy
Qualitative research | Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, communication studies, social work, folklore, educational research, information science and software engineering research.
Background
Qualitative research has been informed by several strands of philosophical thought and examines aspects of human life, including culture, expression, beliefs, morality, life stress, and imagination. Contemporary qualitative research has been influenced by a number of branches of philosophy, for example, positivism, postpositivism, critical theory, and constructivism.
The historical transitions or 'moments' in qualitative research, together with the notion of 'paradigms' (Denzin & Lincoln, 2005), have gained widespread popularity over the past decades. However, some scholars have argued that the adoption of paradigms may be counterproductive and lead to less philosophically engaged communities.
Approaches to inquiry
The use of nonquantitative material as empirical data has been growing in many areas of the social sciences, including learning sciences, developmental psychology and cultural psychology. Several philosophical and psychological traditions have influenced investigators' approaches to qualitative research, including phenomenology, social constructionism, symbolic interactionism, and positivism.
Philosophical traditions
Phenomenology refers to the philosophical study of the structure of an individual's consciousness and general subjective experience. Approaches to qualitative research based on constructionism, such as grounded theory, pay attention to how the subjectivity of both the researcher and the study participants can affect the theory that develops out of the research. The symbolic interactionist approach to qualitative research examines how individuals and groups develop an understanding of the world. Traditional positivist approaches to qualitative research seek a more objective understanding of the social world. Qualitative researchers have also been influenced by the sociology of knowledge and the work of Alfred Schütz, Peter L. Berger, Thomas Luckmann, and Harold Garfinkel.
Sources of data
Qualitative researchers use different sources of data to understand the topic they are studying. These data sources include interview transcripts, videos of social interactions, notes, verbal reports and artifacts such as books or works of art. The case study method exemplifies qualitative researchers' preference for depth, detail, and context. Data triangulation is also a strategy used in qualitative research. Autoethnography, the study of self, is a qualitative research method in which the researcher uses his or her personal experience to understand an issue.
Grounded theory is an inductive type of research, based on ("grounded" in) a very close look at the empirical observations a study yields. Thematic analysis involves analyzing patterns of meaning. Conversation analysis is primarily used to analyze spoken conversations. Biographical research is concerned with the reconstruction of life histories, based on biographical narratives and documents. Narrative inquiry studies the narratives that people use to describe their experience.
Data collection
Qualitative researchers may gather information through observations, note-taking, interviews, focus groups (group interviews), documents, images and artifacts.
Interviews
Research interviews are an important method of data collection in qualitative research. An interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers, to elicit information. Compared to something like a written survey, qualitative interviews allow for a significantly higher degree of intimacy, with participants often revealing personal information to their interviewers in a real-time, face-to-face setting. As such, this technique can evoke an array of significant feelings and experiences within those being interviewed. Sociologists Bredal, Stefansen and Bjørnholt identified three "participant orientations", that they described as "telling for oneself", "telling for others" and "telling for the researcher". They also proposed that these orientations implied "different ethical contracts between the participant and researcher".
Participant observation
In participant observation, ethnographers come to understand a culture by directly participating in the activities of the culture they study. Participant observation extends further than ethnography and into other fields, including psychology. For example, by training to be an EMT and becoming a participant observer in the lives of EMTs, Palmer studied how EMTs cope with the stress associated with some of the gruesome emergencies they deal with.
Recursivity
In qualitative research, the idea of recursivity refers to the emergent nature of research design. In contrast to standardized research methods, recursivity embodies the idea that the qualitative researcher can change a study's design during the data collection phase.
Recursivity in qualitative research procedures contrasts with the methods used by scientists who conduct experiments. From the perspective of the scientist, data collection, data analysis, discussion of the data in the context of the research literature, and drawing conclusions should each be undertaken once (or at most a small number of times). In qualitative research however, data are collected repeatedly until one or more specific stopping conditions are met, reflecting a nonstatic attitude to the planning and design of research activities. An example of this dynamism might be when the qualitative researcher unexpectedly changes their research focus or design midway through a study, based on their first interim data analysis. The researcher can even make further unplanned changes based on another interim data analysis. Such an approach would not be permitted in an experiment. Qualitative researchers would argue that recursivity in developing the relevant evidence enables the researcher to be more open to unexpected results and emerging new constructs.
Data analysis
Qualitative researchers have a number of analytic strategies available to them.
Coding
In general, coding refers to the act of associating meaningful ideas with the data of interest. In the context of qualitative research, interpretative aspects of the coding process are often explicitly recognized and articulated; coding helps to produce specific words or short phrases believed to be useful abstractions from the data.
Pattern thematic analysis
Data may be sorted into patterns for thematic analyses as the primary basis for organizing and reporting the study findings.
Content analysis
According to Krippendorff, "Content analysis is a research technique for making replicable and valid inference from data to their context" (p. 21). It is applied to documents and written and oral communication. Content analysis is an important building block in the conceptual analysis of qualitative data. It is frequently used in sociology. For example, content analysis has been applied to research on such diverse aspects of human life as changes in perceptions of race over time, the lifestyles of contractors, and even reviews of automobiles.
Issues
Computer-assisted qualitative data analysis software (CAQDAS)
Contemporary qualitative data analyses can be supported by computer programs (termed computer-assisted qualitative data analysis software). These programs have been employed with or without detailed hand coding or labeling. Such programs do not supplant the interpretive nature of coding. The programs are aimed at enhancing analysts' efficiency at applying, retrieving, and storing the codes generated from reading the data. Many programs also enhance efficiency in editing and revising codes, allowing for more effective work sharing, peer review, data examination, and analysis of large datasets.
Common qualitative data analysis software includes:
ATLAS.ti
Dedoose (mixed methods)
MAXQDA (mixed methods)
NVivo
QDA Miner
A criticism of quantitative coding approaches is that such coding sorts qualitative data into predefined (nomothetic) categories that are reflective of the categories found in objective science. The variety, richness, and individual characteristics of the qualitative data are reduced or, even, lost.
To defend against the criticism that qualitative approaches to data are too subjective, qualitative researchers assert that by clearly articulating their definitions of the codes they use and linking those codes to the underlying data, they preserve some of the richness that might be lost if the results of their research were boiled down to a list of predefined categories. Qualitative researchers also assert that their procedures are repeatable, an idea valued by quantitatively oriented researchers.
Sometimes researchers rely on computers and their software to scan and reduce large amounts of qualitative data. At their most basic level, numerical coding schemes rely on counting words and phrases within a dataset; other techniques involve the analysis of phrases and exchanges in analyses of conversations. A computerized approach to data analysis can be used to aid content analysis, especially when there is a large corpus to unpack.
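As a minimal illustration of the word-counting schemes mentioned above, the following Python sketch tallies word frequencies across a tiny invented corpus. Real content-analysis software adds stemming, phrase detection, conversational-exchange analysis, and much more.

```python
# Minimal sketch of the most basic numerical coding scheme described
# above: counting words within a small corpus. The texts are invented.
from collections import Counter
import re

documents = [
    "The interview focused on stress at work and coping with stress.",
    "Participants described coping strategies for workplace stress.",
]

counts = Counter()
for doc in documents:
    # Lowercase and extract alphabetic word tokens before counting.
    counts.update(re.findall(r"[a-z]+", doc.lower()))

# The most frequent words give a first, very crude view of the corpus.
print(counts.most_common(3))
```

Frequency counts like these can help direct a researcher's attention when the corpus is too large to read closely, but they are an aid to, not a replacement for, interpretive content analysis.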
Trustworthiness
A central issue in qualitative research is trustworthiness (also known as credibility or, in quantitative studies, validity). There are many ways of establishing trustworthiness, including member check, interviewer corroboration, peer debriefing, prolonged engagement, negative case analysis, auditability, confirmability, bracketing, and balance. Data triangulation and eliciting examples of interviewee accounts are two of the most commonly used methods of establishing the trustworthiness of qualitative studies.
Transferability of results has also been considered as an indicator of validity.
Limitations of qualitative research
Qualitative research is not without limitations. These limitations include participant reactivity, the potential for a qualitative investigator to over-identify with one or more study participants, "the impracticality of the Glaser-Strauss idea that hypotheses arise from data unsullied by prior expectations," the inadequacy of qualitative research for testing cause-effect hypotheses, and the Baconian character of qualitative research. Participant reactivity refers to the fact that people often behave differently when they know they are being observed. Over-identifying with participants refers to a sympathetic investigator studying a group of people and ascribing, more than is warranted, a virtue or some other characteristic to one or more participants. Compared to qualitative research, experimental research and certain types of nonexperimental research (e.g., prospective studies), although not perfect, are better means for drawing cause-effect conclusions.
Glaser and Strauss, influential members of the qualitative research community, pioneered the idea that theoretically important categories and hypotheses can emerge "naturally" from the observations a qualitative researcher collects, provided that the researcher is not guided by preconceptions. The ethologist David Katz wrote, "a hungry animal divides the environment into edible and inedible things....Generally speaking, objects change...according to the needs of the animal." Karl Popper, carrying forward Katz's point, wrote that "objects can be classified and can become similar or dissimilar, only in this way--by being related to needs and interests. This rule applies not only to animals but also to scientists." Popper made clear that observation is always selective, based on past research and the investigators' goals and motives, and that preconceptionless research is impossible.
The Baconian character of qualitative research refers to the idea that a qualitative researcher can collect enough observations such that categories and hypotheses will emerge from the data. Glaser and Strauss developed the idea of theoretical sampling by way of collecting observations until theoretical saturation is obtained and no additional observations are required to understand the character of the individuals under study. Bertrand Russell suggested that there can be no orderly arrangement of observations such that a hypothesis will jump out of those ordered observations; some provisional hypothesis usually guides the collection of observations.
In psychology
Community psychology
Autobiographical narrative research has been conducted in the field of community psychology. A selection of autobiographical narratives of community psychologists can be found in the book Six Community Psychologists Tell Their Stories: History, Contexts, and Narrative.
Educational psychology
Edwin Farrell used qualitative methods to understand the social reality of at-risk high school students. Later he used similar methods to understand the reality of successful high school students who came from the same neighborhoods as the at-risk students he wrote about in his previously mentioned book.
Health psychology
In the field of health psychology, qualitative methods have been increasingly employed in research on how health and illness are understood and socially constructed in everyday life. A broad range of qualitative methods have been adopted by health psychologists, including discourse analysis, thematic analysis, narrative analysis, and interpretative phenomenological analysis. In 2015, the journal Health Psychology published a special issue on qualitative research (Gough, B., & Deatrick, J. A. (Eds.). (2015). Qualitative research in health psychology [special issue]. Health Psychology, 34(4)).
Industrial and organizational psychology
According to Doldor and colleagues, organizational psychologists extensively use qualitative research "during the design and implementation of activities like organizational change, training needs analyses, strategic reviews, and employee development plans."
Occupational health psychology
Although research in the field of occupational health psychology (OHP) has predominantly been quantitatively oriented, some OHP researchers have employed qualitative methods. Qualitative research efforts, if directed properly, can provide advantages for quantitatively oriented OHP researchers. These advantages include help with (1) theory and hypothesis development, (2) item creation for surveys and interviews, (3) the discovery of stressors and coping strategies not previously identified, (4) interpreting difficult-to-interpret quantitative findings, (5) understanding why some stress-reduction interventions fail and others succeed, and (6) providing rich descriptions of the lived lives of people at work (Schonfeld, I. S., & Farrell, E. (2010). Qualitative methods can enrich quantitative research on occupational stress: An example from one occupational group. In D. C. Ganster & P. L. Perrewé (Eds.), Research in occupational stress and wellbeing series: Vol. 8. New developments in theoretical and conceptual approaches to job stress (pp. 137-197). Bingley, UK: Emerald). Some OHP investigators have united qualitative and quantitative methods within a single study (e.g., Elfering et al., 2005); these investigators have used qualitative methods to assess job stressors that are difficult to ascertain using standard measures, and well-validated standardized instruments to assess coping behaviors and dependent variables such as mood.
Social media psychology
Since the advent of social media in the early 2000s, formerly private accounts of personal experiences have become widely shared with the public by millions of people around the world. Disclosures are often made openly, which has contributed to social media's key role in movements like the #metoo movement.
The abundance of self-disclosure on social media has presented an unprecedented opportunity for qualitative and mixed methods researchers; mental health problems can now be investigated qualitatively more widely, at a lower cost, and with no intervention by the researchers. To take advantage of these data, researchers need to have mastered the tools for conducting qualitative research.
Academic journals
Consumption Markets & Culture
Journal of Consumer Research
Qualitative Inquiry
Qualitative Market Research
Qualitative Research
The Qualitative Report
See also
Computer-assisted qualitative data analysis software (CAQDAS)
References
Further reading
Adler, P. A. & Adler, P. (1987). Membership roles in field research. Sage.
Jessor, R., Colby, A., & Shweder, R. A. (Eds.). (1996). Ethnography and human development: Context and meaning in social inquiry. Chicago: University of Chicago Press.
Baškarada, S. (2014). "Qualitative Case Study Guidelines", The Qualitative Report, 19(40): 1-25.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed method approaches. Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (2000). Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE Handbook of qualitative research (4th ed.). Los Angeles: Sage Publications.
DeWalt, K. M. & DeWalt, B. R. (2002). Participant observation. Walnut Creek, CA: AltaMira Press.
Fischer, C. T. (Ed.) (2005). Qualitative research methods for psychologists: Introduction through empirical studies. Academic Press.
Franklin, M. I. (2012). Understanding Research: Coping with the Quantitative-Qualitative Divide. London/New York: Routledge.
Giddens, A. (1990). The consequences of modernity. Stanford, CA: Stanford University Press.
Gubrium, J. F. and J. A. Holstein. (2000). "The New Language of Qualitative Method." New York: Oxford University Press.
Gubrium, J. F. and J. A. Holstein (2009). "Analyzing Narrative Reality." Thousand Oaks, CA: Sage.
Gubrium, J. F. and J. A. Holstein, eds. (2000). "Institutional Selves: Troubled Identities in a Postmodern World." New York: Oxford University Press.
Hammersley, M. (2008) Questioning Qualitative Inquiry, London, Sage.
Hammersley, M. (2013) What is qualitative research?, London, Bloomsbury.
Holliday, A. R. (2007). Doing and Writing Qualitative Research, 2nd Edition. London: Sage Publications
Holstein, J. A. and J. F. Gubrium, eds. (2012). "Varieties of Narrative Analysis." Thousand Oaks, CA: Sage.
Kaminski, Marek M. (2004). Games Prisoners Play. Princeton University Press.
Malinowski, B. (1922/1961). Argonauts of the Western Pacific. New York: E. P. Dutton.
Miles, M. B. & Huberman, A. M. (1994). Qualitative Data Analysis. Thousand Oaks, CA: Sage.
Maykut, P., & Morehouse, R. (1994). Beginning Qualitative Research. Falmer Press.
Pernecky, T. (2016). Epistemology and Metaphysics for Qualitative Research. London, UK: Sage Publications.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Pawluch, D., Shaffir, W., & Miall, C. (2005). Doing Ethnography: Studying Everyday Life. Toronto, ON: Canadian Scholars' Press.
Racino, J. (1999). Policy, Program Evaluation and Research in Disability: Community Support for All. New York, NY: Haworth Press (now a Routledge imprint, Taylor and Francis, 2015).
Ragin, C. C. (1994). Constructing Social Research: The Unity and Diversity of Method. Pine Forge Press.
Riessman, Catherine K. (1993). "Narrative Analysis." Thousand Oaks, CA: Sage.
Rosenthal, Gabriele (2018). Interpretive Social Research. An Introduction. Göttingen, Germany: Universitätsverlag Göttingen.
Savin-Baden, M. and Major, C. (2013). Qualitative research: The essential guide to theory and practice. London: Routledge.
Silverman, David, (ed), (2011), "Qualitative Research: Issues of Theory, Method and Practice". Third Edition. London, Thousand Oaks, New Delhi, Sage Publications
Stebbins, Robert A. (2001) Exploratory Research in the Social Sciences. Thousand Oaks, CA: Sage.
Taylor, Steven J., & Bogdan, Robert (1998). Introduction to Qualitative Research Methods. Wiley.
Van Maanen, J. (1988) Tales of the field: on writing ethnography, Chicago: University of Chicago Press.
Wolcott, H. F. (1995). The art of fieldwork. Walnut Creek, CA: AltaMira Press.
Wolcott, H. F. (1999). Ethnography: A way of seeing. Walnut Creek, CA: AltaMira Press.
Ziman, John (2000). Real Science: What it is, and what it means. Cambridge, UK: Cambridge University Press.
External links
Qualitative Philosophy
C. Wright Mills, On Intellectual Craftsmanship, The Sociological Imagination, 1959
Participant Observation, Qualitative research methods: a Data collector's field guide
Analyzing and Reporting Qualitative Market Research
Overview of available QDA Software
Reality | Reality is the sum or aggregate of all that is real or existent within the universe, as opposed to that which is only imaginary, nonexistent or nonactual. The term is also used to refer to the ontological status of things, indicating their existence. In physical terms, reality is the totality of a system, known and unknown.
Philosophical questions about the nature of reality or existence or being are considered under the rubric of ontology, which is a major branch of metaphysics in the Western philosophical tradition. Ontological questions also feature in diverse branches of philosophy, including the philosophy of science, of religion, of mathematics, and philosophical logic. These include questions about whether only physical objects are real (i.e., physicalism), whether reality is fundamentally immaterial (e.g. idealism), whether hypothetical unobservable entities posited by scientific theories exist, whether a god or gods exist, whether numbers and other abstract objects exist, and whether possible worlds exist. Epistemology is concerned with what can be known or inferred as likely and how, whereby in the modern world emphasis is put on reason, empirical evidence and science as sources and methods to determine or investigate reality.
World views
World views and theories
A common colloquial usage would have reality mean "perceptions, beliefs, and attitudes toward reality", as in "My reality is not your reality." This is often used just as a colloquialism indicating that the parties to a conversation agree, or should agree, not to quibble over deeply different conceptions of what is real. For example, in a religious discussion between friends, one might say (attempting humor), "You might disagree, but in my reality, everyone goes to heaven."
Reality can be defined in a way that links it to worldviews or parts of them (conceptual frameworks): Reality is the totality of all things, structures (actual and conceptual), events (past and present) and phenomena, whether observable or not. It is what a world view (whether it be based on individual or shared human experience) ultimately attempts to describe or map.
Certain ideas from physics, philosophy, sociology, literary criticism, and other fields shape various theories of reality. One such theory is that there simply and literally is no reality beyond the perceptions or beliefs we each have about reality. Such attitudes are summarized in popular statements, such as "Perception is reality" or "Life is how you perceive reality" or "reality is what you can get away with" (Robert Anton Wilson), and they indicate anti-realism – that is, the view that there is no objective reality, whether acknowledged explicitly or not.
Many of the concepts of science and philosophy are often defined culturally and socially. This idea was elaborated by Thomas Kuhn in his book The Structure of Scientific Revolutions (1962). The Social Construction of Reality, a book about the sociology of knowledge written by Peter L. Berger and Thomas Luckmann, was published in 1966. It explained how knowledge is acquired and used for the comprehension of reality. Out of all the realities, the reality of everyday life is the most important one since our consciousness requires us to be completely aware and attentive to the experience of everyday life.
Related concepts
A priori and a posteriori
Potentiality and actuality
Belief
Belief studies
Western philosophy
Philosophy addresses two different aspects of the topic of reality: the nature of reality itself, and the relationship between the mind (as well as language and culture) and reality.
On the one hand, ontology is the study of being, and the central topic of the field is couched, variously, in terms of being, existence, "what is", and reality. The task in ontology is to describe the most general categories of reality and how they are interrelated. If a philosopher wanted to proffer a positive definition of the concept "reality", it would be done under this heading. As explained above, some philosophers draw a distinction between reality and existence. In fact, many analytic philosophers today tend to avoid the terms "real" and "reality" in discussing ontological issues. But for those who would treat "is real" the same way they treat "exists", one of the leading questions of analytic philosophy has been whether existence (or reality) is a property of objects. It has been widely held by analytic philosophers that it is not a property at all, though this view has lost some ground in recent decades.
On the other hand, particularly in discussions of objectivity that have feet in both metaphysics and epistemology, philosophical discussions of "reality" often concern the ways in which reality is, or is not, in some way dependent upon (or, to use fashionable jargon, "constructed" out of) mental and cultural factors such as perceptions, beliefs, and other mental states, as well as cultural artifacts, such as religions and political movements, on up to the vague notion of a common cultural world view, or Weltanschauung.
Realism
The view that there is a reality independent of any beliefs, perceptions, etc., is called realism. More specifically, philosophers are given to speaking about "realism about" this and that, such as realism about universals or realism about the external world. Generally, where one can identify any class of object, the existence or essential characteristics of which is said not to depend on perceptions, beliefs, language, or any other human artifact, one can speak of "realism about" that object.
A correspondence theory of knowledge about what exists claims that "true" knowledge of reality represents accurate correspondence of statements about and images of reality with the actual reality that the statements or images are attempting to represent. For example, the scientific method can verify that a statement is true based on the observable evidence that a thing exists. Many humans can point to the Rocky Mountains and say that this mountain range exists, and continues to exist even if no one is observing it or making statements about it.
Anti-realism
One can also speak of anti-realism about the same objects. Anti-realism is the latest in a long series of terms for views opposed to realism. Perhaps the first was idealism, so called because reality was said to be in the mind, or a product of our ideas. Berkeleyan idealism is the view, propounded by the Irish empiricist George Berkeley, that the objects of perception are actually ideas in the mind. In this view, one might be tempted to say that reality is a "mental construct"; this is not quite accurate, however, since, in Berkeley's view, perceptual ideas are created and coordinated by God. By the 20th century, views similar to Berkeley's were called phenomenalism. Phenomenalism differs from Berkeleyan idealism primarily in that Berkeley believed that minds, or souls, are not merely ideas nor made up of ideas, whereas varieties of phenomenalism, such as that advocated by Russell, tended to go farther to say that the mind itself is merely a collection of perceptions, memories, etc., and that there is no mind or soul over and above such mental events. Finally, anti-realism became a fashionable term for any view which held that the existence of some object depends upon the mind or cultural artifacts. The view that the so-called external world is really merely a social, or cultural, artifact, called social constructionism, is one variety of anti-realism. Cultural relativism is the view that social issues such as morality are not absolute, but are at least partially cultural artifacts.
Being
The nature of being is a perennial topic in metaphysics. For instance, Parmenides taught that reality was a single unchanging Being, whereas Heraclitus wrote that all things flow. The 20th-century philosopher Heidegger thought that previous philosophers had lost sight of the question of Being (qua Being) in favour of the questions of beings (existing things), so he believed that a return to the Parmenidean approach was needed. An ontological catalogue is an attempt to list the fundamental constituents of reality. The question of whether or not existence is a predicate has been discussed since the Early Modern period, not least in relation to the ontological argument for the existence of God. Existence, that something is, has been contrasted with essence, the question of what something is.
Since existence without essence seems blank, it is associated with nothingness by philosophers such as Hegel. Nihilism represents an extremely negative view of being; the absolute, a positive one.
Explanations for the existence of something rather than nothing
Perception
The question of direct or "naïve" realism, as opposed to indirect or "representational" realism, arises in the philosophy of perception and of mind out of the debate over the nature of conscious experience: the epistemological question of whether the world we see around us is the real world itself or merely an internal perceptual copy of that world generated by neural processes in our brain. Naïve realism is known as direct realism when developed to counter indirect or representative realism, also known as epistemological dualism, the philosophical position that our conscious experience is not of the real world itself but of an internal representation, a miniature virtual-reality replica of the world.
Timothy Leary coined the influential term Reality Tunnel, by which he means a kind of representative realism. The theory states that, with a subconscious set of mental filters formed from their beliefs and experiences, every individual interprets the same world differently, hence "Truth is in the eye of the beholder". His ideas influenced the work of his friend Robert Anton Wilson.
Abstract objects and mathematics
The status of abstract entities, particularly numbers, is a topic of discussion in mathematics.
In the philosophy of mathematics, the best known form of realism about numbers is Platonic realism, which grants them abstract, immaterial existence. Other forms of realism identify mathematics with the concrete physical universe.
Anti-realist stances include formalism and fictionalism.
Some approaches are selectively realistic about some mathematical objects but not others. Finitism rejects infinite quantities. Ultra-finitism accepts finite quantities up to a certain amount. Constructivism and intuitionism are realistic about objects that can be explicitly constructed, but reject the use of the principle of the excluded middle to prove existence by reductio ad absurdum.
The traditional debate has focused on whether an abstract (immaterial, intelligible) realm of numbers has existed in addition to the physical (sensible, concrete) world. A recent development is the mathematical universe hypothesis, the theory that only a mathematical world exists, with the finite, physical world being an illusion within it.
An extreme form of realism about mathematics is the mathematical multiverse hypothesis advanced by Max Tegmark. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world". The hypothesis suggests that worlds corresponding to different sets of initial conditions, physical constants, or altogether different equations should be considered real. The theory can be considered a form of Platonism in that it posits the existence of mathematical entities, but can also be considered a mathematical monism in that it denies that anything exists except mathematical objects.
Properties
The problem of universals is an ancient problem in metaphysics about whether universals exist. Universals are general or abstract qualities, characteristics, properties, kinds or relations, such as being male/female, solid/liquid/gas or a certain colour, that can be predicated of individuals or particulars or that individuals or particulars can be regarded as sharing or participating in. For example, Scott, Pat, and Chris have in common the universal quality of being human or humanity.
The realist school claims that universals are real – they exist and are distinct from the particulars that instantiate them. There are various forms of realism. Two major forms are Platonic realism and Aristotelian realism. Platonic realism is the view that universals are real entities and they exist independent of particulars. Aristotelian realism, on the other hand, is the view that universals are real entities, but their existence is dependent on the particulars that exemplify them.
Nominalism and conceptualism are the main forms of anti-realism about universals.
Time and space
A traditional realist position in ontology is that time and space have existence apart from the human mind. Idealists deny or doubt the existence of objects independent of the mind. Some anti-realists whose ontological position is that objects outside the mind do exist, nevertheless doubt the independent existence of time and space.
Kant, in the Critique of Pure Reason, described time as an a priori notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant denies that either space or time is a substance, an entity in itself, or learned by experience; he holds rather that both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between (or duration of) events. Although space and time are held to be transcendentally ideal in this sense, they are also empirically real, i.e. not mere illusions.
Idealist writers such as J. M. E. McTaggart in The Unreality of Time have argued that time is an illusion.
As well as differing about the reality of time as a whole, metaphysical theories of time can differ in their ascriptions of reality to the past, present and future separately.
Presentism holds that the past and future are unreal, and only an ever-changing present is real.
The block universe theory, also known as Eternalism, holds that past, present and future are all real, but the passage of time is an illusion. It is often said to have a scientific basis in relativity.
The growing block universe theory holds that past and present are real, but the future is not.
Time, and the related concepts of process and evolution are central to the system-building metaphysics of A. N. Whitehead and Charles Hartshorne.
Possible worlds
The term "possible world" goes back to Leibniz's theory of possible worlds, used to analyse necessity, possibility, and similar modal notions. Modal realism is the view, notably propounded by David Kellogg Lewis, that all possible worlds are as real as the actual world. In short: the actual world is regarded as merely one among an infinite set of logically possible worlds, some "nearer" to the actual world and some more remote. Other theorists may use the Possible World framework to express and explore problems without committing to it ontologically.
Possible world theory is related to alethic logic: a proposition is necessary if it is true in all possible worlds, and possible if it is true in at least one. The many worlds interpretation of quantum mechanics is a similar idea in science.
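The alethic definitions above admit a simple formal illustration: a proposition is necessary if it holds in every possible world and possible if it holds in at least one. The toy worlds and propositions in the following Python sketch are invented for the example; a full Kripke semantics would also add an accessibility relation between worlds.

```python
# Toy model of alethic modality: each "world" assigns truth values
# to propositions. Worlds and propositions are invented examples.

worlds = [
    {"it_rains": True,  "grass_is_green": True},
    {"it_rains": False, "grass_is_green": True},
    {"it_rains": False, "grass_is_green": True},
]

def necessary(prop):
    # Necessary: true in all possible worlds.
    return all(w[prop] for w in worlds)

def possible(prop):
    # Possible: true in at least one possible world.
    return any(w[prop] for w in worlds)

print(necessary("grass_is_green"))  # holds in every world
print(possible("it_rains"))         # holds in some world
print(necessary("it_rains"))        # fails in some world
```

Note the duality the code makes visible: a proposition is possible exactly when its negation is not necessary.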
Theories of everything (TOE) and philosophy
The philosophical implications of a physical TOE are frequently debated. For example, if philosophical physicalism is true, a physical TOE will coincide with a philosophical theory of everything.
The "system building" style of metaphysics attempts to answer all the important questions in a coherent way, providing a complete picture of the world. Plato and Aristotle could be said to be early examples of comprehensive systems. In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, that is the technique of deducing the nature of the world by pure a priori reason. Examples from the early modern period include Leibniz's Monadology, Descartes's Dualism, and Spinoza's Monism. Hegel's Absolute idealism and Whitehead's Process philosophy were later systems.
Other philosophers do not believe that philosophy's techniques can aim so high. Some scientists think a more mathematical approach than philosophy is needed for a TOE; for instance, Stephen Hawking wrote in A Brief History of Time that even if we had a TOE, it would necessarily be a set of equations. He wrote, "What is it that breathes fire into the equations and makes a universe for them to describe?"
Phenomenology
On a much broader and more subjective level, private experiences, curiosity, inquiry, and the selectivity involved in personal interpretation of events shape reality as seen by one and only one person; this level of reality is hence called phenomenological. While this form of reality might be common to others as well, it could at times also be so unique to oneself as to never be experienced or agreed upon by anyone else. Much of the kind of experience deemed spiritual occurs on this level of reality.
Phenomenology is a philosophical method developed in the early years of the twentieth century by Edmund Husserl (1859-1938) and a circle of followers at the universities of Göttingen and Munich in Germany. Subsequently, phenomenological themes were taken up by philosophers in France, the United States, and elsewhere, often in contexts far removed from Husserl's work.
The word phenomenology comes from the Greek phainómenon, meaning "that which appears", and lógos, meaning "study". In Husserl's conception, phenomenology is primarily concerned with making the structures of consciousness, and the phenomena which appear in acts of consciousness, objects of systematic reflection and analysis. Such reflection was to take place from a highly modified "first person" viewpoint, studying phenomena not as they appear to "my" consciousness, but to any consciousness whatsoever. Husserl believed that phenomenology could thus provide a firm basis for all human knowledge, including scientific knowledge, and could establish philosophy as a "rigorous science".
Husserl's conception of phenomenology has been criticised and developed by his student and assistant Martin Heidegger (1889-1976), by existentialists like Maurice Merleau-Ponty (1908-1961) and Jean-Paul Sartre (1905-1980), and by other philosophers, such as Paul Ricoeur (1913-2005), Emmanuel Levinas (1906-1995), and Dietrich von Hildebrand (1889-1977).
Skeptical hypotheses
Skeptical hypotheses in philosophy suggest that reality could be very different from what we think it is; or at least that we cannot prove it is not. Examples include:
The "Brain in a vat" hypothesis is cast in scientific terms. It supposes that one might be a disembodied brain kept alive in a vat, and fed false sensory signals. This hypothesis is related to the Matrix hypothesis below.
The "Dream argument" of Descartes and Zhuangzi supposes reality to be indistinguishable from a dream.
Descartes' Evil demon is a being "as clever and deceitful as he is powerful, who has directed his entire effort to misleading me."
The five minute hypothesis (or omphalos hypothesis or Last Thursdayism) suggests that the world was created recently together with records and traces indicating a greater age.
Diminished reality refers to reality that is artificially diminished, not through limitations of sensory systems but via artificial filters.
The Matrix hypothesis or simulated reality hypothesis suggests that we might be inside a computer simulation or virtual reality. Related hypotheses may also involve simulations whose signals allow the inhabitants of the virtual or simulated reality to perceive the external reality.
Non-western ancient philosophy and religion
Jain philosophy
Jain philosophy postulates that seven tattva (truths or fundamental principles) constitute reality. These seven tattva are:
Jīva – The soul which is characterized by consciousness.
Ajīva – The non-soul.
Asrava – Influx of karma.
Bandha – The bondage of karma.
Samvara – Obstruction of the inflow of karmic matter into the soul.
Nirjara – Shedding of karmas.
Moksha – Liberation or Salvation, i.e. the complete annihilation of all karmic matter (bound with any particular soul).
Physical sciences
Scientific realism
Scientific realism is, at the most general level, the view that the world (the universe) described by science (perhaps ideal science) is the real world, as it is, independent of what we might take it to be. Within philosophy of science, it is often framed as an answer to the question "how is the success of science to be explained?" The debate over what the success of science involves centers primarily on the status of the unobservable entities apparently discussed by scientific theories. Generally, scientific realists hold that one can make reliable claims about these entities (viz., that they have the same ontological status as directly observable entities), in opposition to instrumentalism; on this view, the most used and studied scientific theories today are at least approximately true.
Realism and locality in physics
Realism in the sense used by physicists does not equate to realism in metaphysics.
The latter is the claim that the world is mind-independent: that even if the results of a measurement do not pre-exist the act of measurement, that does not require that they are the creation of the observer. Furthermore, a mind-independent property does not have to be the value of some physical variable such as position or momentum. A property can be dispositional (or potential), i.e. it can be a tendency: in the way that glass objects tend to break, or are disposed to break, even if they do not actually break. Likewise, the mind-independent properties of quantum systems could consist of a tendency to respond to particular measurements with particular values with ascertainable probability. Such an ontology would be metaphysically realistic, without being realistic in the physicist's sense of "local realism" (which would require that a single value be produced with certainty).
A closely related term is counterfactual definiteness (CFD), used to refer to the claim that one can meaningfully speak of the definiteness of results of measurements that have not been performed (i.e. the ability to assume the existence of objects, and properties of objects, even when they have not been measured).
Local realism is a significant feature of classical mechanics, of general relativity, and of electrodynamics, but quantum mechanics predicts that quantum entanglement is possible. Einstein rejected this, proposing the EPR paradox as a challenge; the tension was subsequently made quantitative by Bell's inequalities. If Bell's inequalities are violated, either local realism or counterfactual definiteness must be incorrect; but some physicists dispute that experiments have demonstrated Bell violations, on the grounds that the sub-class of inhomogeneous Bell inequalities has not been tested, or because of experimental limitations in the tests. Different interpretations of quantum mechanics violate different parts of local realism and/or counterfactual definiteness.
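As an illustrative sketch of the quantitative constraint just mentioned: in the CHSH form of Bell's inequality, any local-realist theory bounds a certain combination S of measurement correlations by |S| ≤ 2, whereas quantum mechanics, with singlet-state correlations E(a, b) = −cos(a − b), predicts values up to 2√2. The short script below simply evaluates the quantum prediction at the standard angle choices; it illustrates the bound and is not a simulation of any experiment.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation of spin measurements on a singlet
    # pair at analyzer angles a and b: E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: local realism requires |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2 * sqrt(2), about 2.828, exceeding the local-realist bound
```

Experimental tests of Bell inequalities compare measured correlation statistics against this bound of 2 rather than computing them from theory as done here.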
The transition from "possible" to "actual" is a major topic of quantum physics, with related theories including quantum darwinism.
Role of "observation" in quantum mechanics
The quantum mind–body problem refers to the philosophical discussions of the mind–body problem in the context of quantum mechanics. Since quantum mechanics involves quantum superpositions, which are not perceived by observers, some interpretations of quantum mechanics place conscious observers in a special position.
The founders of quantum mechanics debated the role of the observer; among them, Wolfgang Pauli and Werner Heisenberg believed that it was the observer that produced collapse. This point of view, which was never fully endorsed by Niels Bohr, was denounced as mystical and anti-scientific by Albert Einstein. Pauli accepted the label, describing quantum mechanics as "lucid mysticism".
Heisenberg and Bohr always described quantum mechanics in logical positivist terms. Bohr also took an active interest in the philosophical implications of quantum theory, such as his notion of complementarity. He believed quantum theory offers a complete description of nature, albeit one that is simply ill-suited for everyday experiences, which are better described by classical mechanics and probability. Bohr never specified a demarcation line above which objects cease to be quantum and become classical. He believed that it was not a question of physics, but one of philosophy.
Eugene Wigner reformulated the "Schrödinger's cat" thought experiment as "Wigner's friend" and proposed that the consciousness of an observer is the demarcation line which precipitates collapse of the wave function, independent of any realist interpretation. Commonly known as "consciousness causes collapse", this controversial interpretation of quantum mechanics states that observation by a conscious observer is what makes the wave function collapse. However, this is a minority view among philosophers of quantum mechanics, most of whom consider it a misunderstanding. There are other possible solutions to the "Wigner's friend" thought experiment which do not require consciousness to be different from other physical processes. Moreover, Wigner himself shifted to such interpretations in his later years.
Multiverse
The multiverse is the hypothetical set of multiple possible universes (including the historical universe we consistently experience) that together comprise everything that exists: the entirety of space, time, matter, and energy as well as the physical laws and constants that describe them. The term was coined in 1895 by the American philosopher and psychologist William James. In the many-worlds interpretation (MWI), one of the mainstream interpretations of quantum mechanics, there are an infinite number of universes and every possible quantum outcome occurs in at least one universe, although there is a debate as to how real the (other) worlds are.
The structure of the multiverse, the nature of each universe within it and the relationship between the various constituent universes, depend on the specific multiverse hypothesis considered. Multiverses have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology and fiction, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternative universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "alternative realities", "alternative timelines", and "dimensional planes", among others.
Anthropic principle
Personal and collective reality
Each individual has a different view of reality, with different memories, personal history, knowledge, personality traits and experience. This system, referring mostly to the human brain, affects cognition and behavior, and new knowledge, memories, information, thoughts and experiences are continuously integrated into it. The connectome – the neural networks and wiring of the brain – is thought to be a key factor in human variability in cognition, in the way we perceive the world (as a context), and in related features or processes. Sensemaking is the process by which people give meaning to their experiences and make sense of the world they live in. Personal identity concerns questions such as how a unique individual persists through time.
Sensemaking and the determination of reality also occur collectively, which is investigated in social epistemology and related approaches. From the collective intelligence perspective, the intelligence of the individual human (and potentially of AI entities) is substantially limited, and advanced intelligence emerges when multiple entities collaborate over time. Collective memory is an important component of the social construction of reality, and communication and communication-related systems, such as media systems, may also be major components.
Philosophy of perception raises questions based on the evolutionary history of humans' perceptual apparatus, particularly individuals' physiological senses. One such view holds that "[w]e don't see reality — we only see what was useful to see in the past", partly suggesting that "[o]ur species has been so successful not in spite of our inability to see reality but because of it".
Scientific theories of everything
A theory of everything (TOE) is a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. The theory of everything is also called the final theory. Many candidate theories of everything have been proposed by theoretical physicists during the twentieth century, but none have been confirmed experimentally. The primary problem in producing a TOE is that general relativity and quantum mechanics are hard to unify. This is one of the unsolved problems in physics.
Initially, the term "theory of everything" was used with an ironic connotation to refer to various overgeneralized theories. For example, a great-grandfather of Ijon Tichy, a character from a cycle of Stanisław Lem's science fiction stories of the 1960s, was known to work on the "General Theory of Everything". Physicist John Ellis claims to have introduced the term into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of quantum physics to describe a theory that would unify or explain through a single model the theories of all fundamental interactions and of all particles of nature: general relativity for gravitation, and the standard model of elementary particle physics – which includes quantum mechanics – for electromagnetism, the two nuclear interactions, and the known elementary particles.
Current candidates for a theory of everything include string theory, M theory, and loop quantum gravity.
Technology
Media
Media – such as news media, social media, websites including Wikipedia, and fiction – shape individuals' and society's perception of reality (including as part of belief and attitude formation) and are partly used intentionally as means to learn about reality. Various technologies have changed society's relationship with reality such as the advent of radio and TV technologies.
Research investigates their interrelations and effects, for example aspects of the social construction of reality. A major component of this shaping and representation of perceived reality is agenda-setting, selection and prioritization – not only (or primarily) the quality, tone and types of content – which influences, for instance, the public agenda. Disproportionate news attention to low-probability incidents – such as high-consequence accidents – can distort audiences' risk perceptions with harmful consequences. Various biases such as false balance, reactions driven by dependence on public attention such as sensationalism and domination by "current events", as well as interest-driven uses of media such as marketing, can also have major impacts on the perception of reality. Time-use studies found that, for example, in 2018 the average American "spent around eleven hours every day looking at screens".
Filter bubbles and echo chambers
Virtual reality and cyberspace
Virtual reality (VR) is a computer-simulated environment that can simulate physical presence in places in the real world, as well as in imaginary worlds.
The virtuality continuum is a continuous scale ranging between the completely virtual, a virtuality, and the completely real: reality. The reality–virtuality continuum therefore encompasses all possible variations and compositions of real and virtual objects. It has been described as a concept in new media and computer science, but in fact it could be considered a matter of anthropology. The concept was first introduced by Paul Milgram.
The area between the two extremes, where both the real and the virtual are mixed, is the so-called mixed reality. This in turn is said to consist of both augmented reality, where the virtual augments the real, and augmented virtuality, where the real augments the virtual.
Cyberspace, the world's computer systems considered as an interconnected whole, can be thought of as a virtual reality; for instance, it is portrayed as such in the cyberpunk fiction of William Gibson and others. Second Life and MMORPGs such as World of Warcraft are examples of artificial environments or virtual worlds (falling some way short of full virtual reality) in cyberspace.
"RL" in internet culture
On the Internet, "real life" refers to life in the real world. It generally references life or consensus reality, in contrast to an environment seen as fiction or fantasy, such as virtual reality, lifelike experience, dreams, novels, or movies. Online, the acronym "IRL" stands for "in real life", meaning "not on the Internet". Sociologists studying the Internet have suggested that someday the distinction between online and real-life worlds may seem "quaint", noting that certain types of online activity, such as sexual intrigues, have already made a full transition to complete legitimacy and "reality". The abbreviation "RL" stands for "real life". For example, one can speak of "meeting in RL" someone whom one has met in a chat or on an Internet forum. It may also be used to express an inability to use the Internet for a time due to "RL problems".
See also
Alternate history
Counterfactual history
Derealization
Consciousness
Extended modal realism
Hyperreality
Modal realism
Notes
References
Further reading
George Musser, "Virtual Reality: How close can physics bring us to a truly fundamental understanding of the world?", Scientific American, vol. 321, no. 3 (September 2019), pp. 30–35.
"Physics is ... the bedrock of the broader search for truth.... Yet [physicists] sometimes seem to be struck by a collective impostor syndrome.... Truth can be elusive even in the best-established theories. Quantum mechanics is as well tested a theory as can be, yet its interpretation remains inscrutable. [p. 30.] The deeper physicists dive into reality, the more reality seems to evaporate." [p. 34.]
External links
C.D. Broad on Reality
Phenomenology Online: Materials discussing and exemplifying phenomenological research
The Matrix as Metaphysics by David Chalmers
Philosophical theory
A philosophical theory or philosophical position is a view that attempts to explain or account for a particular problem in philosophy. The use of the term "theory" here is a piece of colloquial English rather than a technical term. While any sort of thesis or opinion may be termed a position, in analytic philosophy it is thought best to reserve the word "theory" for systematic, comprehensive attempts to solve problems.
Overview
The elements that comprise a philosophical position consist of statements which are believed to be true by the thinkers who accept them, and which may or may not be empirical. The sciences have a very clear idea of what a theory is; in the arts, such as philosophy, the definition is hazier. Philosophical positions are not necessarily scientific theories, although they may consist of both empirical and non-empirical statements.
The collective statements of all philosophical movements, schools of thought, and belief systems consist of philosophical positions. Also included among philosophical positions are many principles, dogmas, doctrines, hypotheses, rules, paradoxes, laws, as well as 'ologies, 'isms, 'sis's, and effects.
Some examples of philosophical positions include:
Metatheory; positions about the formation and content of theorems, such as Kurt Gödel's incompleteness theorem.
Political theory; positions that underlie a political philosophy, such as John Rawls' theory of justice.
Ethical theory and meta-ethics; positions about the nature and purpose of ethical statements, such as the ethical theory of Immanuel Kant.
Critical theory; in its narrow sense, a Western European body of Frankfurt School Marxist thought that aims at criticizing and transforming, rather than merely explaining, social structures. In a broader sense, "critical theory" relates to a wide variety of political, literary, and philosophical positions that take at least some of their inspiration from the Frankfurt School and its dialectic, and that typically contest the possibility of objectivity or aloofness from political positions and privileges.
Philosophical positions may also take the form of a religion, philosophy of life, ideology, world view, or life stance.
See also
Glossary of philosophy
List of philosophies
Metaphilosophy
References
Philosophical skepticism
Philosophical skepticism (UK spelling: scepticism; from Greek σκέψις skepsis, "inquiry") is a family of philosophical views that question the possibility of knowledge. It differs from other forms of skepticism in that it even rejects very plausible knowledge claims that belong to basic common sense. Philosophical skeptics are often classified into two general categories: those who deny all possibility of knowledge, and those who advocate for the suspension of judgment due to the inadequacy of evidence. This distinction is modeled after the differences between the Academic skeptics and the Pyrrhonian skeptics in ancient Greek philosophy. Pyrrhonian skepticism is a practice of suspending judgement, and skepticism in this sense is understood as a way of life that helps the practitioner achieve inner peace. Some types of philosophical skepticism reject all forms of knowledge while others limit this rejection to certain fields, for example, knowledge about moral doctrines or about the external world. Some theorists criticize philosophical skepticism based on the claim that it is a self-refuting idea since its proponents seem to claim to know that there is no knowledge. Other objections focus on its implausibility and distance from regular life.
Overview
Philosophical skepticism is a doubtful attitude toward commonly accepted knowledge claims. It is an important form of skepticism. Skepticism in general is a questioning attitude toward all kinds of knowledge claims. In this wide sense, it is quite common in everyday life: many people are ordinary skeptics about parapsychology or about astrology because they doubt the claims made by proponents of these fields. But the same people are not skeptical about other knowledge claims like the ones found in regular school books. Philosophical skepticism differs from ordinary skepticism in that it even rejects knowledge claims that belong to basic common sense and seem to be very certain. For this reason, it is sometimes referred to as radical doubt. In some cases, it is even proclaimed that one does not know that "I have two hands" or that "the sun will come out tomorrow". In this regard, philosophical skepticism is not a position commonly adopted by regular people in everyday life. This denial of knowledge is usually associated with the demand that one should suspend one's beliefs about the doubted proposition. This means that one should neither believe nor disbelieve it but keep an open mind without committing oneself one way or the other. Philosophical skepticism is often based on the idea that no matter how certain one is about a given belief, one could still be wrong about it. From this observation, it is argued that the belief does not amount to knowledge. Philosophical skepticism follows from the consideration that this might be the case for most or all beliefs. Because of its wide-ranging consequences, it is of central interest to theories of knowledge since it questions their very foundations.
According to some definitions, philosophical skepticism is not just the rejection of some forms of commonly accepted knowledge but the rejection of all forms of knowledge. In this regard, we may have relatively secure beliefs in some cases but these beliefs never amount to knowledge. Weaker forms of philosophical skepticism restrict this rejection to specific fields, like the external world or moral doctrines. In some cases, knowledge per se is not rejected but it is still denied that one can ever be absolutely certain.
There are only few defenders of philosophical skepticism in the strong sense. In this regard, it is much more commonly used as a theoretical tool to test theories. On this view, it is a philosophical methodology that can be utilized to probe a theory to find its weak points, either to expose it or to modify it in order to arrive at a better version of it. However, some theorists distinguish philosophical skepticism from methodological skepticism in that philosophical skepticism is an approach that questions the possibility of certainty in knowledge, whereas methodological skepticism is an approach that subjects all knowledge claims to scrutiny with the goal of sorting out true from false claims. Similarly, scientific skepticism differs from philosophical skepticism in that scientific skepticism is an epistemological position in which one questions the veracity of claims lacking empirical evidence. In practice, the term most commonly references the examination of claims and theories that appear to be pseudoscience, rather than the routine discussions and challenges among scientists.
In ancient philosophy, skepticism was seen not just as a theory about the existence of knowledge but as a way of life. This outlook is motivated by the idea that suspending one's judgment on all kinds of issues brings with it inner peace and thereby contributes to the skeptic's happiness.
Classification
Skepticism can be classified according to its scope. Local skepticism involves being skeptical about particular areas of knowledge (e.g. moral skepticism, skepticism about the external world, or skepticism about other minds), whereas radical skepticism claims that one cannot know anything—including that one cannot know about knowing anything.
Skepticism can also be classified according to its method. Western philosophy has two basic approaches to skepticism. Cartesian skepticism—named somewhat misleadingly after René Descartes, who was not a skeptic but used some traditional skeptical arguments in his Meditations to help establish his rationalist approach to knowledge—attempts to show that any proposed knowledge claim can be doubted. Agrippan skepticism focuses on justification rather than the possibility of doubt. According to this view, none of the ways in which one might attempt to justify a claim are adequate. One can justify a claim based on other claims, but this leads to an infinite regress of justifications. One can use a dogmatic assertion, but this is not a justification. One can use circular reasoning, but this fails to justify the conclusion.
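The three ways a justification attempt can fail under Agrippan skepticism have a simple structural reading: follow each belief to whatever is offered to justify it and see how the chain ends. The sketch below (the function name and the example beliefs are hypothetical, invented for illustration) classifies a chain as circular, dogmatic (an unjustified stopping point), or an apparent infinite regress.

```python
def classify_justification(belief, justified_by, max_depth=100):
    """Follow a belief's justification chain and report how it ends,
    per the Agrippan trilemma: circularity, a dogmatic stopping
    point, or an (apparently) infinite regress."""
    seen = set()
    current = belief
    for _ in range(max_depth):
        if current in seen:
            return "circular"
        seen.add(current)
        if current not in justified_by:  # nothing further is offered
            return "dogmatic"
        current = justified_by[current]
    return "infinite regress"

# A is justified by B, B by C, and C by A again; P is justified by Q,
# which is simply asserted without support.
chains = {"A": "B", "B": "C", "C": "A", "P": "Q"}
print(classify_justification("A", chains))  # circular
print(classify_justification("P", chains))  # dogmatic
```

The Agrippan point, on this reading, is that every chain ends in one of these three outcomes, and none of the three counts as an adequate justification.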
Skeptical scenarios
A skeptical scenario is a hypothetical situation which can be used in an argument for skepticism about a particular claim or class of claims. Usually the scenario posits the existence of a deceptive power that deceives our senses and undermines the justification of knowledge otherwise accepted as justified, and is proposed in order to call into question our ordinary claims to knowledge on the grounds that we cannot exclude the possibility of skeptical scenarios being true. Skeptical scenarios have received a great deal of attention in modern Western philosophy.
The first major skeptical scenario in modern Western philosophy appears in René Descartes' Meditations on First Philosophy. At the end of the first Meditation Descartes writes: "I will suppose... that some evil demon of the utmost power and cunning has employed all his energies to deceive me."
The "evil demon problem", also known as "Descartes' evil demon", was first proposed by René Descartes. It invokes the possibility of a being who could deliberately mislead one into falsely believing everything one takes to be true.
The "brain in a vat" hypothesis is cast in contemporary scientific terms. It supposes that one might be a disembodied brain kept alive in a vat and fed false sensory signals by a mad scientist. Further, it asserts that since a brain in a vat would have no way of knowing that it was a brain in a vat, one cannot prove that one is not a brain in a vat.
The "dream argument", proposed by both René Descartes and Zhuangzi, supposes reality to be indistinguishable from a dream.
The "five minute hypothesis", most notably proposed by Bertrand Russell, suggests that we cannot prove that the world was not created five minutes ago (along with false memories and false evidence suggesting that it was not only five minutes old).
The "simulated reality hypothesis" or "Matrix hypothesis" suggests that everyone, or even the entire universe, might be inside a computer simulation or virtual reality.
The "solipsist" hypothesis claims that knowledge of the world is an illusion of the Self.
Epistemological skepticism
Skepticism, as an epistemological view, calls into question whether knowledge is possible at all. This is distinct from other known skeptical practices, including Cartesian skepticism, as it targets knowledge in general instead of individual types of knowledge.
Skeptics argue that belief in something does not justify an assertion of knowledge of it. In this, skeptics oppose foundationalism, which states that there are basic positions that are self-justified or beyond justification, without reference to others. (One example of such foundationalism may be found in Spinoza's Ethics.)
Among other arguments, skeptics use the Münchhausen trilemma and the problem of the criterion to claim that no certain belief can be achieved. This position is known as "global skepticism" or "radical skepticism." Foundationalists have used the same trilemma as a justification for demanding the validity of basic beliefs. Epistemological nihilism rejects the possibility of human knowledge, but not necessarily knowledge in general.
There are two different categories of epistemological skepticism, which can be referred to as mitigated and unmitigated skepticism. The two forms contrast with each other but are both genuine forms of skepticism. Mitigated skepticism does not accept "strong" or "strict" knowledge claims but does approve certain weaker ones. These weaker claims can be assigned the title of "virtual knowledge", but they must still be justified beliefs. Some mitigated skeptics are also fallibilists, arguing that knowledge does not require certainty; they hold that many beliefs are, in practice, certain to the point that they can be safely acted upon in order to live significant and meaningful lives. Unmitigated skepticism rejects claims of both virtual and strong knowledge. Whether knowledge is characterized as strong, weak, virtual or genuine can be determined differently depending on a person's viewpoint and on their characterization of knowledge. Unmitigated skeptics believe that objective truths are unknowable and that one should live in an isolated environment in order to attain mental peace, because everything, according to them, is changing and relative. The refusal to make judgments is of utmost importance, since there is no knowledge; there are only probable opinions.
Criticism
Philosophical skepticism has been criticized in various ways. Some criticisms see it as a self-refuting idea, while others point out that it is implausible, psychologically impossible, or a pointless intellectual game. The self-refutation objection is based on the idea that philosophical skepticism not only rejects the existence of knowledge but seems to make knowledge claims itself at the same time. For example, to claim that there is no knowledge seems to be itself a knowledge claim. This problem is particularly relevant for versions of philosophical skepticism that deny any form of knowledge: the global skeptic denies that any claim is rationally justified but then goes on to provide arguments in an attempt to rationally justify this denial. Some philosophical skeptics have responded to this objection by restricting the denial of knowledge to certain fields without denying the existence of knowledge in general. Another defense consists in understanding philosophical skepticism not as a theory but as a tool or a methodology. In this case, it may be used fruitfully to reject and improve philosophical systems despite its shortcomings as a theory.
Another criticism holds that philosophical skepticism is highly counterintuitive by pointing out how far removed it is from regular life. For example, it seems very impractical, if not psychologically impossible, to suspend all beliefs at the same time. And even if it were possible, it would not be advisable since "the complete skeptic would wind up starving to death or walking into walls or out of windows". This criticism can allow that there are some arguments that support philosophical skepticism. However, it has been claimed that they are not nearly strong enough to support such a radical conclusion. Common-sense philosophers follow this line of thought by arguing that regular common-sense beliefs are much more reliable than the skeptics' intricate arguments. George Edward Moore, for example, tried to refute skepticism about the existence of the external world, not by engaging with its complex arguments, but by using a simple observation: that he has two hands. For Moore, this observation is a reliable source of knowledge incompatible with external world skepticism since it entails that at least two physical objects exist.
A closely related objection sees philosophical skepticism as an "idle academic exercise" or a "waste of time". This is often based on the idea that, because of its initial implausibility and distance from everyday life, it has little or no practical value. In this regard, Arthur Schopenhauer compares the position of radical skepticism to a border fortress that is best ignored: it is impregnable but its garrison does not pose any threat since it never sets foot outside the fortress. One defense of philosophical skepticism is that it has had important impacts on the history of philosophy at large and not just among skeptical philosophers. This is due to its critical attitude, which remains a constant challenge to the epistemic foundations of various philosophical theories. It has often provoked creative responses from other philosophers when trying to modify the affected theory to avoid the problem of skepticism.
According to Pierre Le Morvan, there are two very common negative responses to philosophical skepticism. The first understands it as a threat to all kinds of philosophical theories and strives to disprove it. According to the second, philosophical skepticism is a useless distraction that is best avoided altogether. Le Morvan himself proposes a positive third alternative: to use it as a philosophical tool in a few selected cases to overcome prejudices and foster practical wisdom.
History of Western skepticism
Ancient Greek skepticism
Ancient Greek skeptics were not "skeptics" in the contemporary sense of selective, localized doubt. Their concerns were epistemological, noting that truth claims could not be adequately supported, and psychotherapeutic, noting that beliefs caused mental perturbation.
The Western tradition of systematic skepticism goes back at least as far as Pyrrho of Elis (b. c. 360 BCE) and arguably to Xenophanes (b. c. 570 BCE). Parts of skepticism also appear among the "5th century sophists [who] develop forms of debate which are ancestors of skeptical argumentation. They take pride in arguing in a persuasive fashion for both sides of an issue."
In Hellenistic philosophy, Pyrrhonism and Academic Skepticism were the two schools of skeptical philosophy. Subsequently, the words Academic and Pyrrhonist were often used to mean skeptic.
Pyrrhonism
Like other Hellenistic philosophies, the goal of Pyrrhonism was eudaimonia, which the Pyrrhonists sought through achieving ataraxia (an untroubled state of mind), which they found could be induced by producing a state of epoché (suspension of judgment) regarding non-evident matters. Epoché could be produced by pitting one dogma against another to undermine belief, and by questioning whether a belief could be justified. In support of this questioning, Pyrrhonists developed the skeptical arguments cited above (the Ten Modes of Aenesidemus and the Five Modes of Agrippa) demonstrating that beliefs cannot be justified.
Pyrrho of Elis
According to an account of Pyrrho's life by his student Timon of Phlius, Pyrrho extolled a way to become happy and tranquil:
"The things themselves are equally indifferent, and unstable, and indeterminate, and therefore neither our senses nor our opinions are either true or false. For this reason then we must not trust them, but be without opinions, and without bias, and without wavering, saying of every single thing that it no more is than is not, or both is and is not, or neither is nor is not."
Aenesidemus
Pyrrhonism faded as a movement following the death of Pyrrho's student Timon. The Academy became slowly more dogmatic such that in the first century BCE Aenesidemus denounced the Academics as "Stoics fighting against Stoics", breaking with the Academy to revive Pyrrhonism. Aenesidemus's best known contribution to skepticism was his now-lost book, Pyrrhonian Discourses, which is only known to us through Photius, Sextus Empiricus, and to a lesser extent Diogenes Laërtius. The skeptical arguments most closely associated with Aenesidemus are the ten modes described above, designed to induce epoché.
Sextus Empiricus
The works of Sextus Empiricus (c. 200 CE) are the main surviving account of ancient Pyrrhonism. Long before Sextus' time, the Academy had abandoned skepticism and had been destroyed as a formal institution. Sextus compiled and further developed the Pyrrhonists' skeptical arguments, most of which were directed against the Stoics but included arguments against all of the schools of Hellenistic philosophy, including the Academic skeptics.
Sextus, as the most systematic author among the Hellenistic skeptics whose works survive, noted that there are at least ten modes of skepticism. These modes may be broken down into three categories: one may be skeptical of the subjective perceiver, of the objective world, and of the relation between perceiver and world. His arguments are as follows.
Subjectively, the powers of the senses and of reasoning may vary among different people. And since knowledge is a product of one or the other, and since neither is reliable, knowledge would seem to be in trouble. For instance, a color-blind person sees the world quite differently from everyone else. Moreover, one cannot even give preference on the basis of the power of reason, i.e., by treating the rational animal as a carrier of greater knowledge than the irrational animal, since the irrational animal is still adept at navigating its environment, which suggests the ability to "know" about some aspects of the environment.
Secondly, the personality of the individual might also influence what they observe: since (it is argued) preferences are based on sense-impressions, differences in preferences can be attributed to differences in the way that people are affected by the object. (Empiricus:56)
Third, the perceptions of each individual sense seemingly have nothing in common with the other senses: i.e., the color "red" has little to do with the feeling of touching a red object. This is manifest when our senses "disagree" with each other: for example, a mirage presents certain visible features, but is not responsive to any other kind of sense. In that case, our other senses defeat the impressions of sight. But one may also be lacking enough powers of sense to understand the world in its entirety: if one had an extra sense, then one might know of things in a way that the present five senses are unable to advise us of. Given that our senses can be shown to be unreliable by appealing to other senses, and so our senses may be incomplete (relative to some more perfect sense that one lacks), then it follows that all of our senses may be unreliable. (Empiricus:58)
Fourth, our circumstances when one perceives anything may be either natural or unnatural, i.e., one may be either in a state of wakefulness or sleep. But it is entirely possible that things in the world really are exactly as they appear to be to those in unnatural states (i.e., if everything were an elaborate dream). (Empiricus:59)
One can have reasons for doubt that are based on the relationship between objective "facts" and subjective experience. The positions, distances, and places of objects would seem to affect how they are perceived by the person: for instance, the portico may appear tapered when viewed from one end, but symmetrical when viewed from the other; and these features are different. Because they are different features, to believe the object has both properties at the same time is to believe it has two contradictory properties. Since this is absurd, one must suspend judgment about what properties it possesses due to the contradictory experiences. (Empiricus:63)
One may also observe that the things one perceives are, in a sense, polluted by experience. Any given perception—say, of a chair—will always be perceived within some context or other (i.e., next to a table, on a mat, etc.). Since this is the case, one often only speaks of ideas as they occur in the context of the other things paired with them, and therefore one can never know the true nature of the thing, but only how it appears to us in context. (Empiricus:64)
Along the same lines, the skeptic may insist that all things are relative, by arguing that:
Absolute appearances either differ from relative appearances, or they do not.
If absolutes do not differ from relatives, then they are themselves relative.
But if absolutes do differ from relatives, then they are relative, because all things that differ must differ from something; and to "differ" from something is to be relative to something. (Empiricus:67)
Finally, one has reason to disbelieve that one knows anything by looking at problems in understanding objects by themselves. Things, when taken individually, may appear to be very different from when they are in mass quantities: for instance, the shavings of a goat's horn are white when taken alone, yet the intact horn is black.
Skeptical arguments
The ancient Greek Pyrrhonists developed sets of arguments to demonstrate that claims about reality cannot be adequately justified. Two sets of these arguments are well known. The oldest set is known as the ten tropes of Aenesidemus—although whether he invented the tropes or just systematized them from prior Pyrrhonist works is unknown. The tropes represent reasons for epoché (suspension of judgment). These are as follows:
Different animals manifest different modes of perception;
Similar differences are seen among individual men;
For the same man, information perceived with the senses is self-contradictory;
Furthermore, it varies from time to time with physical changes;
In addition, this data differs according to local relations;
Objects are known only indirectly through the medium of air, moisture, etc.;
These objects are in a condition of perpetual change in color, temperature, size and motion;
All perceptions are relative and interact one upon another;
Our impressions become less critical through repetition and custom;
All men are brought up with different beliefs, under different laws and social conditions.
Another set are known as the five tropes of Agrippa:
Dissent – The uncertainty demonstrated by the differences of opinions among philosophers and people in general.
Progress ad infinitum – All proof rests on matters themselves in need of proof, and so on to infinity, i.e., the regress argument.
Relation – All things are changed as their relations become changed, or, as we look upon them from different points of view.
Assumption – The truth asserted is based on an unsupported assumption.
Circularity – The truth asserted involves a circularity of proofs.
According to Victor Brochard "the five tropes can be regarded as the most radical and most precise formulation of philosophical skepticism that has ever been given. In a sense, they are still irresistible today."
Academic skepticism
Pyrrho's thinking subsequently influenced the Platonic Academy, arising first in the Academic skepticism of the Middle Academy under Arcesilaus (c. 315 – 241 BCE) and then the New Academy under Carneades (c. 213–129 BCE). Clitomachus, a student of Carneades, interpreted his teacher's philosophy as suggesting an account of knowledge based on truth-likeness. The Roman politician and philosopher, Cicero, was also an adherent of the skepticism of the New Academy, even though a return to a more dogmatic orientation of the school was already beginning to take place.
Augustine on skepticism
In 386 CE, Augustine published Contra Academicos (Against the Academic Skeptics), which argued against claims made by the Academic Skeptics (266–90 BCE) on the following grounds:
Objection from Error: Through logic, Augustine argues that philosophical skepticism does not lead to happiness as the Academic Skeptics claim. His argument is summarized as:
A wise man lives according to reason, and thus is able to be happy.
One who is searching for knowledge but never finds it is in error.
Imperfection objection: People in error are not happy, because being in error is an imperfection, and people cannot be happy with an imperfection.
Conclusion: One who is still seeking knowledge cannot be happy.
Error of Non-Assent: Augustine's argument that suspending belief does not fully prevent one from error. His argument is summarized below.
Introduction of the error: Let P be true. If a person fails to believe P due to suspension of belief in order to avoid error, the person is also committing an error.
The Anecdote of the Two Travelers: Travelers A and B are trying to reach the same destination. At a fork in the road, a poor shepherd tells them to go left. Traveler A immediately believes him and reaches the correct destination. Traveler B suspends belief, and instead believes in the advice of a well-dressed townsman to go right, because his advice seems more persuasive. However, the townsman is actually a samardocus (con man) so Traveler B never reaches the correct destination.
The Anecdote of the Adulterer: A man suspends belief that adultery is bad, and commits adultery with another man's wife because it is persuasive to him. Under Academic Skepticism, this man cannot be charged because he acted on what was persuasive to him without assenting belief.
Conclusion: Suspending belief exposes individuals to an error as defined by the Academic Skeptics.
Skepticism's revival in the sixteenth century
Francisco Sanches's That Nothing is Known (published in 1581 as Quod nihil scitur) is one of the crucial texts of Renaissance skepticism.
Michel de Montaigne (1533–1592)
The most notable figure of the Skepticism revival in the 1500s, Michel de Montaigne wrote about his studies of Academic Skepticism and Pyrrhonism through his Essais.
His most notable writings on skepticism occurred in an essay written mostly in 1575–1576, "Apologie de Raimond Sebond", when he was reading Sextus Empiricus and trying to translate Raimond Sebond's writing, including his proof of Christianity's natural existence. The reception of Montaigne's translation included some criticisms of Sebond's proof. Montaigne responded to some of them in Apologie, including a defense of Sebond's logic that is skeptical in nature and similar to Pyrrhonism. His refutation is as follows:
Critics who claim that Sebond's arguments are weak only show how egotistically humans believe their own logic to be superior to others'.
Many animals can be observed to be superior to humans in certain respects. To argue this point, Montaigne even writes about dogs that are logical and create their own syllogisms to understand the world around them. This example was also used by Sextus Empiricus.
Since animals also have rationality, the over-glorification of man's mental capabilities is a trap—man's folly. One man's reason cannot be assuredly better than another's as a result.
Ignorance is even recommended by religion, so that an individual may reach faith by obediently following divine instruction rather than by one's own logic.
Marin Mersenne (1588–1648)
Marin Mersenne was an author, mathematician, scientist, and philosopher. He wrote in defense of science and Christianity against atheists and Pyrrhonists before retiring to encourage development of science and the "new philosophy", which includes philosophers like Gassendi, Descartes, Galileo, and Hobbes. A major work of his in relation to Skepticism is La Verité des Sciences, in which he argues that although we may not be able to know the true nature of things, we can still formulate certain laws and rules for sense-perceptions through science.
Additionally, he points out that we do not doubt everything because:
Humans do agree about some things, for example, an ant is smaller than an elephant
There are natural laws governing our sense-perceptions, such as optics, which allow us to eliminate inaccuracies
Man created tools such as rulers and scales to measure things and eliminate doubts raised by illusions such as bent oars, pigeons' necks, and round towers.
A Pyrrhonist might refute these points by saying that the senses deceive, and thus knowledge turns into infinite regress or circular logic. Mersenne counters that this cannot be the case, since commonly agreed-upon rules of thumb can be hypothesized and tested over time to ensure that they continue to hold.
Furthermore, if everything can be doubted, the doubt can also be doubted, so on and so forth. Thus, according to Mersenne, something has to be true. Finally, Mersenne writes about all the mathematical, physical, and other scientific knowledge that is true by repeated testing, and has practical use value. Notably, Mersenne was one of the few philosophers who accepted Hobbes' radical ideology—he saw it as a new science of man.
Skepticism in the seventeenth century
Thomas Hobbes (1588–1679)
During his long stay in Paris, Thomas Hobbes was actively involved in the circle of major skeptics like Gassendi and Mersenne, who focused on the study of skepticism and epistemology. Unlike his fellow skeptics, Hobbes never treated skepticism as a main topic for discussion in his works. Nonetheless, Hobbes was still labeled a religious skeptic by his contemporaries for raising doubts about the Mosaic authorship of the Pentateuch and for his political and psychological explanations of religion. Although Hobbes himself did not go further in challenging other religious principles, his suspicion about Mosaic authorship did significant damage to religious tradition and paved the way for later religious skeptics like Spinoza and Isaac La Peyrère to question some of the fundamental beliefs of the Judeo-Christian religious system. Hobbes' answer to skepticism and epistemology was innovatively political: he believed that moral and religious knowledge were by nature relative, and that there was no absolute standard of truth governing them. As a result, it was for political reasons that certain truth standards about religion and ethics were devised and established in order to form a functioning government and a stable society.
Baruch Spinoza and religious skepticism
Baruch Spinoza was among the first European philosophers who were religious skeptics. He was quite familiar with the philosophy of Descartes and, unprecedentedly, extended the application of the Cartesian method to the religious context by analyzing religious texts with it. Spinoza sought to dispute the knowledge-claims of the Judeo-Christian-Islamic religious system by examining its two foundations: Scripture and miracles. He claimed that all Cartesian, or rational, knowledge should be accessible to the entire population. The Scriptures, therefore, aside from those by Jesus, should not be considered secret knowledge attained from God but merely the imagination of the prophets; as a result of this claim, they could not serve as a basis for knowledge and were reduced to simple ancient historical texts. Moreover, Spinoza rejected the possibility of miracles by simply asserting that people only considered them miraculous due to their lack of understanding of nature. By rejecting the validity of Scripture and miracles, Spinoza demolished the foundation of religious knowledge-claims and established his understanding of Cartesian knowledge as the sole authority of knowledge-claims. Despite being deeply skeptical of religion, Spinoza was in fact exceedingly anti-skeptical towards reason and rationality. He steadfastly confirmed the legitimacy of reason by associating it with the acknowledgement of God, holding that skepticism about the rational approach to knowledge stemmed not from problems with rational knowledge itself but from a fundamental lack of understanding of God. Spinoza's religious skepticism and anti-skepticism about reason thus helped him transform epistemology by separating theological knowledge-claims from rational knowledge-claims.
Pierre Bayle (1647–1706)
Pierre Bayle was a French philosopher of the late 17th century whom Richard Popkin described as a "supersceptic" who carried the sceptic tradition to the extreme. Bayle was born into a Calvinist family in Carla-Bayle, and during the early stage of his life he converted to Catholicism before returning to Calvinism. This conversion between religions caused him to leave France for the more religiously tolerant Holland, where he stayed and worked for the rest of his life.
Bayle believed that truth cannot be obtained through reason and that all human endeavor to acquire absolute knowledge would inevitably lead to failure. Bayle's main approach was highly skeptical and destructive: he sought to examine and analyze all existing theories in all fields of human knowledge in order to show the faults in their reasoning and thus the absurdity of the theories themselves. In his magnum opus, Dictionnaire Historique et Critique (Historical and Critical Dictionary), Bayle painstakingly identified the logical flaws in several works throughout history in order to emphasize the absolute futility of rationality. Bayle's complete nullification of reason led him to conclude that faith is the final and only way to truth.
Bayle's real intention behind his extremely destructive works remained controversial. Some described him as a fideist, while others speculated that he was a secret atheist. However, whatever his original intention was, Bayle did cast significant influence on the upcoming Age of Enlightenment with his destruction of some of the most essential theological ideas and his justification of religious tolerance, even of atheism, in his works.
Skepticism in the Age of Enlightenment
David Hume (1711–1776)
David Hume was among the most influential proponents of philosophical skepticism during the Age of Enlightenment and one of the most notable voices of the Scottish Enlightenment and British Empiricism. He especially espoused skepticism regarding inductive reasoning, and questioned what the foundation of morality was, creating the is–ought problem. His approach to skepticism is considered even more radical than that of Descartes.
Hume argued that any coherent idea must be either a mental copy of an impression (a direct sensory perception) or copies of multiple impressions innovatively combined. Since certain human activities like religion, superstition, and metaphysics are not premised on any actual sense-impressions, their claims to knowledge are logically unjustified. Furthermore, Hume even demonstrates that science is merely a psychological phenomenon based on the association of ideas: often, specifically, an assumption of cause-and-effect relationships that is itself not grounded in any sense-impressions. Thus, even scientific knowledge is logically unjustified, being not actually objective or provable but, rather, mere conjecture flimsily based on our minds perceiving regular correlations between distinct events. Hume thus falls into extreme skepticism regarding the possibility of any certain knowledge. Ultimately, he offers that, at best, a science of human nature is the "only solid foundation for the other sciences".
Immanuel Kant (1724–1804)
Immanuel Kant (1724–1804) tried to provide a ground for empirical science against David Hume's skeptical treatment of the notion of cause and effect. Hume (1711–1776) argued that for the notion of cause and effect no analysis is possible which is also acceptable to the empiricist program primarily outlined by John Locke (1632–1704). But Kant's attempt to give a ground to knowledge in the empirical sciences at the same time cut off the possibility of any other kind of knowledge, especially what Kant called "metaphysical knowledge". So, for Kant, empirical science was legitimate, but metaphysics and philosophy were mostly illegitimate. The most important exception to this demarcation of the legitimate from the illegitimate was ethics, the principles of which Kant argued can be known by pure reason without appeal to the principles required for empirical knowledge. Thus, with respect to metaphysics and philosophy in general (ethics being the exception), Kant was a skeptic. This skepticism, as well as the explicit skepticism of G. E. Schulze, gave rise to a robust discussion of skepticism in German idealistic philosophy, especially by Hegel. Kant's idea was that the real world (the noumenon or thing-in-itself) was inaccessible to human reason (though the empirical world of nature can be known to human understanding) and therefore we can never know anything about the ultimate reality of the world. Hegel argued against Kant that although Kant was right that using what Hegel called "finite" concepts of "the understanding" precluded knowledge of reality, we were not constrained to use only "finite" concepts and could actually acquire knowledge of reality using "infinite concepts" that arise from self-consciousness.
Skepticism in the 20th century and contemporary philosophy
G. E. Moore famously presented the "Here is one hand" argument against skepticism in his 1925 paper, "A Defence of Common Sense". Moore claimed that he could prove that the external world exists by simply presenting the following argument while holding up his hands: "Here is one hand; here is another hand; therefore, there are at least two objects; therefore, external-world skepticism fails". His argument was developed for the purpose of vindicating common sense and refuting skepticism. Ludwig Wittgenstein later argued in his On Certainty (posthumously published in 1969) that Moore's argument rested on the way that ordinary language is used, rather than on anything about knowledge.
In contemporary philosophy, Richard Popkin was a particularly influential scholar on the topic of skepticism. His account of the history of skepticism given in The History of Scepticism from Savonarola to Bayle (first edition published as The History of Scepticism From Erasmus to Descartes) was accepted as the standard for contemporary scholarship in the area for decades after its release in 1960. Barry Stroud also published a number of works on philosophical skepticism, most notably his 1984 monograph, The Significance of Philosophical Scepticism. From the mid-1990s, Stroud, alongside Richard Fumerton, put forward influential anti-externalist arguments in favour of a position called "metaepistemological scepticism". Other contemporary philosophers known for their work on skepticism include James Pryor, Keith DeRose, and Peter Klein.
History of skepticism in non-Western philosophy
Ancient Indian skepticism
Ajñana
Ajñana (literally 'non-knowledge') was the skeptical school of ancient Indian philosophy. It was a śramaṇa movement and a major rival of early Buddhism and Jainism. Its adherents are recorded in Buddhist and Jain texts. They held that it was impossible to obtain knowledge of a metaphysical nature or to ascertain the truth value of philosophical propositions; and even if knowledge was possible, it was useless and disadvantageous for final salvation.
Buddhism
The historical Buddha asserted certain doctrines as true, such as the possibility of nirvana; however, he also upheld a form of skepticism with regards to certain questions which he left "un-expounded" (avyākata) and some he saw as "incomprehensible" (acinteyya). Because the Buddha saw these questions (which tend to be of metaphysical topics) as unhelpful on the path and merely leading to confusion and "a thicket of views", he promoted suspension of judgment towards them. This allowed him to carve out an epistemic middle way between what he saw as the extremes of claiming absolute objectivity (associated with the claims to omniscience of the Jain Mahavira) and extreme skepticism (associated with the Ajñana thinker Sanjaya Belatthiputta).
Later Buddhist philosophy remained highly skeptical of Indian metaphysical arguments. The Buddhist philosopher Nagarjuna in particular has been seen as the founder of the Madhyamaka school, which has been in turn compared with Greek Skepticism. Nagarjuna's statement that he has "no thesis" (pratijña) has parallels in the statements of Sextus Empiricus of having "no position". Nagarjuna famously opens his magnum opus, the Mulamadhyamakakarika, with the statement that the Buddha claimed that true happiness was found through dispelling 'vain thinking' (prapañca, also "conceptual proliferation").
According to Richard P. Hayes, the Buddhist philosopher Dignaga is also a kind of skeptic, which is in line with most early Buddhist philosophy. Hayes writes:
...in both early Buddhism and in the Skeptics one can find the view put forward that man's pursuit of happiness, the highest good, is obstructed by his tenacity in holding ungrounded and unnecessary opinions about all manner of things. Much of Buddhist philosophy, I shall argue, can be seen as an attempt to break this habit of holding on to opinions.
Scholars like Adrian Kuzminski have argued that Pyrrho of Elis (c. 365–270 BCE) might have been influenced by Indian Buddhists during his journey with Alexander the Great.
Cārvāka philosophy
The Cārvāka (Sanskrit: चार्वाक) school of materialism, also known as Lokāyata, is a classically cited (but historically disputed) school of ancient Indian philosophy. While no texts or authoritative doctrine have survived, followers of this system are frequently mentioned in philosophical treatises of other schools, often as an initial counterpoint against which to assert their own arguments.
Cārvāka is classified as a "heterodox" (nāstika) system, characterized as a materialistic and atheistic school of thought. This school was also known for being strongly skeptical of the claims of Indian religions, such as reincarnation and karma.
Jainism
While Jain philosophy claims that it is possible to achieve omniscience, absolute knowledge (Kevala Jnana), at the moment of enlightenment, its theory of anekāntavāda, or 'many-sidedness', also known as the principle of relative pluralism, allows for a practical form of skeptical thought regarding philosophical and religious doctrines (for un-enlightened beings, not all-knowing arihants).
According to this theory, truth or reality is perceived differently from different points of view, and no single point of view is the complete truth. Jain doctrine states that an object has infinite modes of existence and qualities and, as such, cannot be completely perceived in all its aspects and manifestations, due to the inherent limitations of humans. Anekāntavāda is literally the doctrine of non-onesidedness or manifoldness; it is often translated as "non-absolutism". Syādvāda is the theory of conditioned predication, which provides an expression to anekānta by recommending that the epithet "syād" be attached to every expression. Syādvāda is not only an extension of anekānta ontology but a separate system of logic capable of standing on its own. As reality is complex, no single proposition can express the nature of reality fully. Thus the term "syāt" should be prefixed before each proposition, giving it a conditional point of view and thus removing any dogmatism in the statement. For Jains, fully enlightened beings are able to see reality from all sides and thus have ultimate knowledge of all things. This idea of omniscience was criticized by Buddhists such as Dharmakirti.
Ancient Chinese philosophy
Zhuang Zhou (c. 369 – c. 286 BCE)
Zhuang Zhou (莊子, "Master Zhuang") was a famous ancient Chinese Taoist philosopher during the Hundred Schools of Thought period. Zhuang Zhou demonstrated his skeptical thinking through several anecdotes in the preeminent work Zhuangzi attributed to him:
"The Debate on the Joy of Fish" (知魚之樂): In this anecdote, Zhuang Zhou argued with his fellow philosopher Hui Shi whether they knew the fish in the pond were happy or not, and Zhuang Zhou made the famous observation that "You are not I. How do you know that I do not know that the fish are happy?" (Autumn Floods 秋水篇, Zhuangzi)
"The Butterfly of the Dream" (周公夢蝶): The paradox of the "Butterfly Dream" described Zhuang Zhou's confusion after dreaming himself to be a butterfly: "But he didn't know if he was Zhuang Zhou who had dreamt he was a butterfly, or a butterfly dreaming that he was Zhuang Zhou." (Discussion on Making All Things Equal 齊物篇, Zhuangzi)
Through these anecdotes in Zhuangzi, Zhuang Zhou indicated his belief in the limitation of language and human communication and the inaccessibility of universal truth. This establishes him as a skeptic. But he was by no means a radical skeptic: he only applied skeptical methods partially, in arguments demonstrating his Taoist beliefs. He held the Taoist beliefs themselves dogmatically.
Wang Chong (27 – c. 100 CE)
Wang Chong was the leading figure of the skeptical branch of the Confucian school in China during the first century CE. He introduced a method of rational critique and applied it to the widespread dogmatic thinking of his age, such as phenomenology (the main contemporary Confucian ideology that linked all natural phenomena with human ethics), state-led cults, and popular superstition. His own philosophy incorporated both Taoist and Confucian thinking, and it was based on a secular, rational practice of developing hypotheses from natural events to explain the universe, exemplifying a form of naturalism that resembled the philosophical ideas of Epicureans like Lucretius.
Medieval Islamic philosophy
The Incoherence of the Philosophers, written by the scholar Al-Ghazali (1058–1111), marks a major turn in Islamic epistemology. His encounter with skepticism led Ghazali to embrace a form of theological occasionalism, or the belief that all causal events and interactions are not the product of material conjunctions but rather the immediate and present will of God.
In the autobiography Ghazali wrote towards the end of his life, The Deliverance From Error (Al-munqidh min al-ḍalāl), he recounts how, once a crisis of epistemological skepticism was resolved by "a light which God Most High cast into my breast... the key to most knowledge", he studied and mastered the arguments of Kalam, Islamic philosophy, and Ismailism. Though appreciating what was valid in the first two of these, at least, he determined that all three approaches were inadequate and found ultimate value only in the mystical experience and spiritual insight he attained as a result of following Sufi practices. William James, in Varieties of Religious Experience, considered the autobiography an important document for "the purely literary student who would like to become acquainted with the inwardness of religions other than the Christian", comparing it to recorded personal religious confessions and autobiographical literature in the Christian tradition.
Aztec philosophy
Recordings of Aztec philosophy suggest that the elite classes believed in an essentially panentheistic worldview in which teotl represents a unified, underlying universal force. Human beings cannot truly perceive teotl due to its chaotic, constantly changing nature; they perceive only the "masks", or facets, through which it is manifested.
See also
References
Further reading
Popkin, Richard H. 2003. The History of Scepticism from Savonarola to Bayle. New York: Oxford University Press.
Popkin, Richard H. and J. R. Maia Neto, eds. 2007. Skepticism: An Anthology. New York: Prometheus Books.
Beiser, Frederick C. 1987. The Fate of Reason: German Philosophy from Kant to Fichte. Cambridge: Harvard University Press.
Breker, Christian. 2011. Einführender Kommentar zu Sextus Empiricus' "Grundriss der pyrrhonischen Skepsis". Mainz: University of Mainz, electronic publication. Available online. (An introductory commentary on Sextus Empiricus' Outlines of Pyrrhonism, in German.)
di Giovanni, George and H. S. Harris, eds. 2000. Between Kant and Hegel: Texts in the Development of Post-Kantian Idealism. Translated with Introductions by George di Giovanni and H. S. Harris. Indianapolis, IN: Hackett Publishing.
Forster, Michael N. 1989. Hegel and Skepticism. Cambridge, Massachusetts: Harvard University Press.
Harris, H. S. 1985. "Skepticism, Dogmatism and Speculation in the Critical Journal". In di Giovanni and Harris 2000.
Hegel, Georg Wilhelm Friedrich. 1802. "On the Relationship of Skepticism to Philosophy, Exposition of its Different Modifications and Comparison of the Latest Form with the Ancient One". Translated by H. S. Harris. In di Giovanni and Harris 2000.
Leavitt, Fred. 2021. If Ignorance is Bliss We Should All be Ecstatic. Open Books.
Lehrer, Keith. 1971. "Why Not Scepticism?" Philosophical Forum, vol. II, pp. 283–298.
Padilla Gálvez, Jesús. 2020. "Scepticism as Philosophical Superlative". In Wittgenstein and the Sceptical Tradition, edited by António Marques and Rui Bertrand Romao, pp. 113–122. Bern: Peter Lang.
de Peretti, François-Xavier. 2022. "Stop Doubting with Descartes". In "Anti-skepticism", edited by M. Garcia-Valdecasas, J. Milburn, and J.-B. Guillon, Topoi: An International Review of Philosophy. Springer Nature. Published online 3 November 2022.
de Peretti, François-Xavier. 2021. "Descartes sceptique malgré lui ?" International Journal for the Study of Skepticism 11 (3): 177–192. Leiden: Brill. Published online 15 October 2020. doi:10.1163/22105700-bja10016.
Thorsrud, Harald. 2009. Ancient Scepticism. Berkeley: University of California Press.
Unger, Peter. 1975. Ignorance: A Case for Scepticism. Oxford: Oxford University Press (reissued 2002).
Zeller, Eduard and Oswald J. Reichel. 1892. The Stoics, Epicureans and Sceptics. London: Longmans, Green, and Co.
External links
Ancient Greek Skepticism entry in the Internet Encyclopedia of Philosophy
Renaissance Skepticism entry in the Internet Encyclopedia of Philosophy
Contemporary Skepticism entry in the Internet Encyclopedia of Philosophy
Responses to skepticism by Keith DeRose
Article: Skepticism and Denial by Steven Novella, MD, The New England Journal of Skepticism
Classical Skepticism by Peter Suber
Review and summary of Skepticism and the Veil of Perception by Michael Huemer
Skepticism
Epistemological theories
Doubt
Criticism of science
| 0.801125 | 0.995879 | 0.797823
Education sciences | Education sciences, also known as education studies or education theory, and traditionally called pedagogy, seek to describe, understand, and prescribe education, including education policy. Subfields include comparative education, educational research, instructional theory, curriculum theory, and the psychology, philosophy, sociology, economics, and history of education. Related fields include learning theory and cognitive science.
History
The earliest known attempts to understand education in Europe were by classical Greek philosophers and sophists, but there is also evidence of contemporary (or even preceding) discussions among Arabic, Indian, and Chinese scholars.
Philosophy of education
Educational thought is not necessarily concerned with the construction of theories as much as the "reflective examination of educational issues and problems from the perspective of diverse disciplines."
For example, a cultural theory of education considers how education occurs through the totality of culture, including prisons, households, and religious institutions as well as schools. Other examples are the behaviorist theory of education that comes from educational psychology and the functionalist theory of education that comes from sociology of education.
Normative theories of education
Normative theories of education provide the norms, goals, and standards of education. In contrast, descriptive theories of education provide descriptions, explanations or predictions of the processes of education.
"Normative philosophies or theories of education may make use of the results of [philosophical thought] and of factual inquiries about human beings and the psychology of learning, but in any case they propound views about what education should be, what dispositions it should cultivate, why it ought to cultivate them, how and in whom it should do so, and what forms it should take. In a full-fledged philosophical normative theory of education, besides analysis of the sorts described, there will normally be propositions of the following kinds:
1. Basic normative premises about what is good or right;
2. Basic factual premises about humanity and the world;
3. Conclusions, based on these two kinds of premises, about the dispositions education should foster;
4. Further factual premises about such things as the psychology of learning and methods of teaching; and
5. Further conclusions about such things as the methods that education should use."
Examples of the purpose of schools include: to develop reasoning about perennial questions, to master the methods of scientific inquiry, to cultivate the intellect, to create change agents, to develop spirituality, and to model a democratic society.
Common educational philosophies include: educational perennialism, educational progressivism, educational essentialism, critical pedagogy, Montessori education, Waldorf education, and democratic education.
Normative curriculum theory
Normative theories of curriculum aim to "describe, or set norms, for conditions surrounding many of the concepts and constructs" that define curriculum. These normative propositions differ from those above in that normative curriculum theory is not necessarily untestable. A central question asked by normative curriculum theory is: given a particular educational philosophy, what is worth knowing and why? Some examples are: a deep understanding of the Great Books, direct experiences driven by student interest, a superficial understanding of a wide range of knowledge (e.g. Core Knowledge), social and community problems and issues, and knowledge and understanding specific to cultures and their achievements (e.g. African-Centered Education).
Normative feminist educational theory
Scholars such as Robyn Wiegman argue that "academic feminism is perhaps the most successful institutionalizing project of its generation, with more full-time faculty positions and new doctoral degree programs emerging each year in the field it inaugurated, Women's Studies". Feminist educational theory stems from four key tenets, supported by empirical data based on surveys of feminist educators. The first tenet of feminist educational theory is "Creation of participatory classroom communities". Participatory classroom communities are often smaller classes built around discussion and student involvement. The second tenet is "Validation of personal experience". Classrooms in which validation of personal experience occurs are often focused around students providing their own insights and experiences in group discussion, rather than relying exclusively on the insight of the educator. The third tenet is "Encouragement of social understanding and activism". This tenet is generally actualized by classrooms discussing and reading about social and societal aspects that students may not be aware of, along with fostering student self-efficacy. The fourth and final tenet of feminist education is "Development of critical thinking skills/open-mindedness". Classrooms actively engaging in this tenet encourage students to think for themselves and prompt them to move beyond their comfort zones, working outside the bounds of the traditional lecture-based classroom. Though these tenets at times overlap, they combine to provide the basis for modern feminist educational theory, and are supported by a majority of feminist educators.
Feminist educational theory derives from the feminist movement, particularly that of the early 1970s, which prominent feminist bell hooks describes as "a movement to end sexism, sexist exploitation, and oppression". Academic feminist Robyn Wiegman recalls that "in the early seventies, feminism in the U.S. academy was less an organized entity than a set of practices: an ensemble of courses listed on bulletin boards often taught for free by faculty and community leaders". While feminism traditionally existed outside of the institutionalization of schools (particularly universities), feminist education has gradually taken hold in the last few decades and has gained a foothold in institutionalized educational bodies, as "once fledgling programs have become departments, and faculty have been hired and tenured with full-time commitments".
There are supporters of feminist education as well, many of whom are educators or students. Professor Becky Ropers-Huilman recounts one of her positive experiences with feminist education from the student perspective, explaining that she "...felt very 'in charge' of [her] own learning experiences," and "...was not being graded–or degraded... [while completing] the majority of the assigned work for the class (and additional work that [she] thought would add to class discussion)," all while "...[regarding] the teacher's feedback on [her] participation as one perspective, rather than the perspective". Ropers-Huilman experienced a working feminist classroom that successfully motivated students to go above and beyond, succeeding in generating self-efficacy and caring in the classroom. When Ropers-Huilman became a teacher herself, she embraced feminist educational theory, noting that, "[Teachers] have an obligation as the ones who are vested with an assumed power, even if that power is easily and regularly disrupted, to assess and address the effects that it is having in our classrooms". Ropers-Huilman firmly believes that educators have a duty to address feminist concepts such as the use and flow of power within the classroom, and strongly believes in the potential of feminist educational theory to create positive learning experiences for students and teachers as she has personally experienced.
Ropers-Huilman also celebrates the feminist classroom's inclusivity, noting that in a feminist classroom, "in which power is used to care about, for, and with others… educational participants can shape practices aimed at creating an inclusive society that discovers and utilizes the potential of its actors". Ropers-Huilman believes that a feminist classroom carries the ability to greatly influence the society as a whole, promoting understanding, caring, and inclusivity. Ropers-Huilman actively engages in feminist education in her classes, focusing on concepts such as active learning and critical thinking while attempting to demonstrate and engage in caring behavior and atypical classroom settings, similar to many other feminist educators.
Leading feminist scholar bell hooks argues for the incorporation of feminism into all aspects of society, including education, in her book Feminism is for Everybody. hooks notes that, "Everything [people] know about feminism has come into their lives thirdhand". hooks believes that education offers a counter to the, "...wrongminded notion of feminist movement which implied it was anti-male". hooks cites feminism's negative connotations as major inhibitors to the spread and adoption of feminist ideologies. However, feminist education has seen tremendous growth in adoption in the past few decades, despite the negative connotations of its parent movement.
Criticism of feminist educational theory
Opposition to feminist educational theory comes from both those who oppose feminism in general and feminists who oppose feminist educational theory in particular. Critics of feminist educational theory argue against the four basic tenets of the theory, "...[contesting] both their legitimacy and their implementation". Lewis Lehrman particularly describes feminist educational ideology as "...'therapeutic pedagogy' that substitutes an 'overriding' (and detrimental) value on participatory interaction for the expertise of the faculty" (Hoffman). Lehrman argues that the feminist educational tenets of participatory experience and validation of personal experience hinder education by limiting and inhibiting the educator's ability to share his or her knowledge, learned through years of education and experience.
Others challenge the legitimacy of feminist educational theory, arguing that it is not unique and is instead a sect of liberatory education. Even feminist educational scholars such as Frances Hoffmann and Jayne Stake concede that "feminist pedagogy shared intellectual and political roots with the movements comprising the liberatory education agenda of the past 30 years". These liberatory attempts at the democratization of classrooms demonstrate a growth in liberatory education philosophy that, some argue, feminist educational theory simply builds upon.
The harshest critiques of feminist educational theory often come from feminists themselves. Feminist scholar Robyn Wiegman argues against feminist education in her article "Academic Feminism against Itself", contending that feminist educational ideology has abandoned the intersectionality of feminism in many cases, and has also focused exclusively on present content with a singular perspective. Wiegman refers to feminist scholar James Newman's arguments, centered around the idea that, "When we fail... to challenge both students and ourselves to theorize alterity as an issue of change over time as well as of geographic distance, ethnic difference, and sexual choice, we repress... not only the 'thickness' of historical difference itself, but also... our (self) implication in a narrative of progress whose hero(in)es inhabit only the present". Newman (and Wiegman) believe that this presentist ideology imbued within modern academic feminism creates an environment breeding antifeminist ideologies, most importantly an abandonment of the study of difference, which is integral to feminist ideology. Wiegman believes that feminist educational theory does a great disservice to the feminist movement, while failing to instill the critical thinking and social awareness it is intended to produce.
Educational anthropology
Philosophical anthropology is the philosophical study of human nature. In terms of learning, examples of descriptive theories of the learner are: a mind, soul, and spirit capable of emulating the Absolute Mind (Idealism); an orderly, sensing, and rational being capable of understanding the world of things (Realism); a rational being with a soul modeled after God who comes to know God through reason and revelation (Neo-Thomism); an evolving and active being capable of interacting with the environment (Pragmatism); and a fundamentally free and individual being who is capable of being authentic through the making of, and taking responsibility for, choices (Existentialism). Philosophical concepts for the process of education include Bildung and paideia.

Educational anthropology is a sub-field of anthropology and is widely associated with the pioneering work of George Spindler. As the name suggests, the focus of educational anthropology is on education, although an anthropological approach to education tends to focus on its cultural aspects, including informal as well as formal education. As education involves understandings of who we are, it is not surprising that the single most recognized dictum of educational anthropology is that the field is centrally concerned with cultural transmission. Cultural transmission involves the transfer of a sense of identity between generations, sometimes known as enculturation, and also the transfer of identity between cultures, sometimes known as acculturation. Accordingly, it is also not surprising that educational anthropology has become increasingly focused on ethnic identity and ethnic change.
Descriptive curriculum theory
Descriptive theories of curriculum explain how curricula "benefit or harm all publics it touches".
The term hidden curriculum describes that which is learned simply by being in a learning environment. For example, a student in a teacher-led classroom is learning submission. The hidden curriculum is not necessarily intentional.
Instructional theory
Instructional theories focus on the methods of instruction for teaching curricula. Theories include the methods of autonomous learning, coyote teaching, inquiry-based instruction, lecture, maturationism, the Socratic method, outcome-based education, taking children seriously, and transformative learning.
Educational psychology
Educational psychology is an empirical science that provides descriptive theories of how people learn. Examples of theories of education in psychology are: constructivism, behaviorism, cognitivism, and motivational theory.
Cognitive science
Educational neuroscience
Educational neuroscience is an emerging field that brings together researchers in diverse disciplines to explore the interactions between biological processes and education.
Sociology of education
The sociology of education is the study of how public institutions and individual experiences affect education and its outcomes. It is most concerned with the public schooling systems of modern industrial societies, including the expansion of higher, further, adult, and continuing education. Examples of theories of education from sociology include: functionalism, conflict theory, social efficiency, and social mobility.
Teaching method
Learning theories
Educational research
Educational assessment
Educational evaluation
Educational aims and objectives
Politics in education
Education economics
Comparative education
Educational theorists
List of educational psychologists
See also
Anti-schooling activism
Classical education movement
Cognitivism (learning theory)
Andragogy
Geragogy
Humanistic education
International education
Peace education
Movement in learning
Co-construction, collaborative learning
Scholarship of teaching and learning
Notes
References
Thomas, G. (2007). Education and Theory: Strangers in Paradigms. Open University Press.
External links
Educational Theory (journal) | 0.808233 | 0.986535 | 0.79735 |
Rationalism | In philosophy, rationalism is the epistemological view that "regards reason as the chief source and test of knowledge" or "any view appealing to reason as a source of knowledge or justification", often in contrast to other possible sources of knowledge such as faith, tradition, or sensory experience. More formally, rationalism is defined as a methodology or a theory "in which the criterion of truth is not sensory but intellectual and deductive".
In a major philosophical debate during the Enlightenment, rationalism (sometimes here equated with innatism) was opposed to empiricism. On the one hand, the rationalists emphasized that knowledge is primarily innate and that the intellect, the inner faculty of the human mind, can therefore directly grasp or derive logical truths; on the other hand, the empiricists emphasized that knowledge is not primarily innate and is best gained by careful observation of the physical world outside the mind, namely through sensory experiences. Rationalists asserted that certain principles exist in logic, mathematics, ethics, and metaphysics that are so fundamentally true that denying them causes one to fall into contradiction. The rationalists had such a high confidence in reason that empirical proof and physical evidence were regarded as unnecessary to ascertain certain truths. In other words, "there are significant ways in which our concepts and knowledge are gained independently of sense experience".
Different degrees of emphasis on this method or theory lead to a range of rationalist standpoints, from the moderate position "that reason has precedence over other ways of acquiring knowledge" to the more extreme position that reason is "the unique path to knowledge". Given a pre-modern understanding of reason, rationalism is identical to philosophy, the Socratic life of inquiry, or the zetetic (skeptical) clear interpretation of authority (open to the underlying or essential cause of things as they appear to our sense of certainty).
Background
Rationalism, as an appeal to human reason as a way of obtaining knowledge, has a philosophical history dating from antiquity. The analytical nature of much of philosophical enquiry, the awareness of apparently a priori domains of knowledge such as mathematics, and the emphasis on obtaining knowledge through the use of rational faculties (commonly rejecting, for example, direct revelation) have made rationalist themes very prevalent in the history of philosophy.
Since the Enlightenment, rationalism is usually associated with the introduction of mathematical methods into philosophy as seen in the works of Descartes, Leibniz, and Spinoza. This is commonly called continental rationalism, because it was predominant in the continental schools of Europe, whereas in Britain empiricism dominated.
Even then, the distinction between rationalists and empiricists was drawn at a later period and would not have been recognized by the philosophers involved. Also, the distinction between the two philosophies is not as clear-cut as is sometimes suggested; for example, Descartes and Locke have similar views about the nature of human ideas.
Proponents of some varieties of rationalism argue that, starting with foundational basic principles, like the axioms of geometry, one could deductively derive the rest of all possible knowledge. Notable philosophers who held this view most clearly were Baruch Spinoza and Gottfried Leibniz, whose attempts to grapple with the epistemological and metaphysical problems raised by Descartes led to a development of the fundamental approach of rationalism. Both Spinoza and Leibniz asserted that, in principle, all knowledge, including scientific knowledge, could be gained through the use of reason alone, though they both observed that this was not possible in practice for human beings except in specific areas such as mathematics. On the other hand, Leibniz admitted in his book Monadology that "we are all mere Empirics in three fourths of our actions."
Political usage
In politics, rationalism, since the Enlightenment, historically emphasized a "politics of reason" centered upon rational choice, deontology, utilitarianism, secularism, and irreligion; the latter aspect's antitheism was later softened by the adoption of pluralistic reasoning methods practicable regardless of religious or irreligious ideology. In this regard, the philosopher John Cottingham noted how rationalism, a methodology, became socially conflated with atheism, a worldview.
Philosophical usage
Rationalism is often contrasted with empiricism. Taken very broadly, these views are not mutually exclusive, since a philosopher can be both rationalist and empiricist. Taken to extremes, the empiricist view holds that all ideas come to us a posteriori, that is to say, through experience; either through the external senses or through such inner sensations as pain and gratification. The empiricist essentially believes that knowledge is based on or derived directly from experience. The rationalist believes we come to knowledge a priori, through the use of logic, and that it is thus independent of sensory experience. In other words, as Galen Strawson once wrote, "you can see that it is true just lying on your couch. You don't have to get up off your couch and go outside and examine the way things are in the physical world. You don't have to do any science."
Between both philosophies, the issue at hand is the fundamental source of human knowledge and the proper techniques for verifying what we think we know. Whereas both philosophies are under the umbrella of epistemology, their argument lies in the understanding of the warrant, which is under the wider epistemic umbrella of the theory of justification. Part of epistemology, this theory attempts to understand the justification of propositions and beliefs. Epistemologists are concerned with various epistemic features of belief, which include the ideas of justification, warrant, rationality, and probability. Of these four terms, the term that has been most widely used and discussed by the early 21st century is "warrant". Loosely speaking, justification is the reason that someone (probably) holds a belief.
If A makes a claim and then B casts doubt on it, A's next move would normally be to provide justification for the claim. The precise method one uses to provide justification is where the lines are drawn between rationalism and empiricism (among other philosophical views). Much of the debate in these fields is focused on analyzing the nature of knowledge and how it relates to connected notions such as truth, belief, and justification.
At its core, rationalism consists of three basic claims. For people to consider themselves rationalists, they must adopt at least one of these three claims: the intuition/deduction thesis, the innate knowledge thesis, or the innate concept thesis. In addition, a rationalist can choose to adopt the claim of the Indispensability of Reason and/or the claim of the Superiority of Reason, although one can be a rationalist without adopting either thesis.
The indispensability of reason thesis: "The knowledge we gain in subject area, S, by intuition and deduction, as well as the ideas and instances of knowledge in S that are innate to us, could not have been gained by us through sense experience." In short, this thesis claims that experience cannot provide what we gain from reason.
The superiority of reason thesis: "The knowledge we gain in subject area S by intuition and deduction or have innately is superior to any knowledge gained by sense experience". In other words, this thesis claims reason is superior to experience as a source for knowledge.
Rationalists often adopt similar stances on other aspects of philosophy. Most rationalists reject skepticism for the areas of knowledge they claim are knowable a priori. When one claims some truths are innately known to us, one must reject skepticism in relation to those truths. Especially for rationalists who adopt the Intuition/Deduction thesis, the idea of epistemic foundationalism tends to crop up. This is the view that we know some truths without basing our belief in them on any others, and that we then use this foundational knowledge to know more truths.
Intuition/deduction thesis
Generally speaking, intuition is a priori knowledge or experiential belief characterized by its immediacy; a form of rational insight. We simply "see" something in such a way as to give us a warranted belief. Beyond that, the nature of intuition is hotly debated. In the same way, generally speaking, deduction is the process of reasoning from one or more general premises to reach a logically certain conclusion. Using valid arguments, we can deduce from intuited premises.
For example, when we combine both concepts, we can intuit that the number three is prime and that it is greater than two. We then deduce from this knowledge that there is a prime number greater than two. Thus, it can be said that intuition and deduction combined to provide us with a priori knowledgewe gained this knowledge independently of sense experience.
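The toy inference in the preceding paragraph is itself a complete deduction, and it can be machine-checked. The following is a minimal sketch in Lean 4, assuming the Mathlib library for the `Nat.Prime` predicate and the `Nat.prime_three` lemma:

```lean
import Mathlib.Data.Nat.Prime.Basic

-- Intuited premises: three is prime (Nat.prime_three) and three is
-- greater than two (decidable by computation).
-- Deduced conclusion: there exists a prime number greater than two.
theorem exists_prime_gt_two : ∃ p : ℕ, Nat.Prime p ∧ p > 2 :=
  ⟨3, Nat.prime_three, by decide⟩
```

The anonymous constructor supplies 3 as the witness; the existential conclusion then follows from the two premises exactly as the text describes, with no appeal to sense experience.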
Gottfried Wilhelm Leibniz, a prominent German philosopher, argued in favor of this thesis.
Empiricists such as David Hume have been willing to accept this thesis for describing the relationships among our own concepts. In this sense, empiricists argue that we are allowed to intuit and deduce truths from knowledge that has been obtained a posteriori.
By injecting different subjects into the Intuition/Deduction thesis, we are able to generate different arguments. Most rationalists agree mathematics is knowable by applying intuition and deduction. Some go further and include ethical truths in the category of things knowable by intuition and deduction. Furthermore, some rationalists also claim metaphysics is knowable under this thesis. Naturally, the more subjects the rationalists claim to be knowable by the Intuition/Deduction thesis, the more certain they are of their warranted beliefs; and the more strictly they adhere to the infallibility of intuition, the more controversial their claims and the more radical their rationalism.
In addition to different subjects, rationalists sometimes vary the strength of their claims by adjusting their understanding of the warrant. Some rationalists understand warranted beliefs to be beyond even the slightest doubt; others are more conservative and understand the warrant to be belief beyond a reasonable doubt.
Rationalists also have different understanding and claims involving the connection between intuition and truth. Some rationalists claim that intuition is infallible and that anything we intuit to be true is as such. More contemporary rationalists accept that intuition is not always a source of certain knowledgethus allowing for the possibility of a deceiver who might cause the rationalist to intuit a false proposition in the same way a third party could cause the rationalist to have perceptions of nonexistent objects.
Innate knowledge thesis
The Innate Knowledge thesis is similar to the Intuition/Deduction thesis in that both theses claim knowledge is gained a priori. The two theses go their separate ways when describing how that knowledge is gained. As the name, and the rationale, suggests, the Innate Knowledge thesis claims knowledge is simply part of our rational nature. Experiences can trigger a process that allows this knowledge to come into our consciousness, but the experiences do not provide us with the knowledge itself. The knowledge has been with us since the beginning, and experience simply brings it into focus, in the same way a photographer can bring the background of a picture into focus by changing the aperture of the lens. The background was always there, just not in focus.
This thesis targets a problem with the nature of inquiry originally posed by Plato in the Meno. Here, Plato asks about inquiry: how do we gain knowledge of a theorem in geometry? We inquire into the matter. Yet knowledge by inquiry seems impossible. In other words, "If we already have the knowledge, there is no place for inquiry. If we lack the knowledge, we don't know what we are seeking and cannot recognize it when we find it. Either way we cannot gain knowledge of the theorem by inquiry. Yet, we do know some theorems." The Innate Knowledge thesis offers a solution to this paradox. By claiming that knowledge is already with us, either consciously or unconsciously, a rationalist claims we don't really learn things in the traditional usage of the word, but rather that we simply bring to consciousness what we already know.
Innate concept thesis
Similar to the Innate Knowledge thesis, the Innate Concept thesis suggests that some concepts are simply part of our rational nature. These concepts are a priori in nature and sense experience is irrelevant to determining the nature of these concepts (though, sense experience can help bring the concepts to our conscious mind).
In his book Meditations on First Philosophy, René Descartes postulates three classifications for our ideas when he says, "Among my ideas, some appear to be innate, some to be adventitious, and others to have been invented by me. My understanding of what a thing is, what truth is, and what thought is, seems to derive simply from my own nature. But my hearing a noise, as I do now, or seeing the sun, or feeling the fire, comes from things which are located outside me, or so I have hitherto judged. Lastly, sirens, hippogriffs and the like are my own invention."
Adventitious ideas are those concepts that we gain through sense experience, such as the sensation of heat: they originate from outside sources, transmit their own likeness rather than something else, and cannot simply be willed away. Ideas invented by us, such as those found in mythology, legends, and fairy tales, are created by us from other ideas we possess. Lastly, innate ideas, such as our idea of perfection, are those ideas we have as a result of mental processes that are beyond what experience can directly or indirectly provide.
Gottfried Wilhelm Leibniz defends the idea of innate concepts by suggesting that the mind plays a role in determining the nature of concepts; to explain this, he likens the mind to a block of marble in the New Essays on Human Understanding.
Some philosophers, such as John Locke (who is considered one of the most influential thinkers of the Enlightenment and an empiricist), argue that the Innate Knowledge thesis and the Innate Concept thesis are the same. Other philosophers, such as Peter Carruthers, argue that the two theses are distinct from one another. As with the other theses covered under the umbrella of rationalism, the more types and greater number of concepts a philosopher claims to be innate, the more controversial and radical their position: "the more a concept seems removed from experience and the mental operations we can perform on experience the more plausibly it may be claimed to be innate. Since we do not experience perfect triangles but do experience pains, our concept of the former is a more promising candidate for being innate than our concept of the latter."
History
Rationalist philosophy in Western antiquity
Although rationalism in its modern form post-dates antiquity, philosophers from this time laid down the foundations of rationalism, in particular the understanding that some knowledge may be available only through the use of rational thought.
Pythagoras (570–495 BCE)
Pythagoras was one of the first Western philosophers to stress rationalist insight. He is often revered as a great mathematician, mystic and scientist, but he is best known for the Pythagorean theorem, which bears his name, and for discovering the mathematical relationship between the length of strings on a lute and the pitches of the notes. Pythagoras "believed these harmonies reflected the ultimate nature of reality. He summed up the implied metaphysical rationalism in the words 'All is number'. It is probable that he had caught the rationalist's vision, later seen by Galileo (1564–1642), of a world governed throughout by mathematically formulable laws". It has been said that he was the first man to call himself a philosopher, or lover of wisdom.
Plato (427–347 BCE)
Plato held rational insight to a very high standard, as is seen in his works such as Meno and The Republic. He taught on the Theory of Forms (or the Theory of Ideas) which asserts that the highest and most fundamental kind of reality is not the material world of change known to us through sensation, but rather the abstract, non-material (but substantial) world of forms (or ideas). For Plato, these forms were accessible only to reason and not to sense. In fact, it is said that Plato admired reason, especially in geometry, so highly that he had the phrase "Let no one ignorant of geometry enter" inscribed over the door to his academy.
Aristotle (384–322 BCE)
Aristotle's main contribution to rationalist thinking was the use of syllogistic logic in argument. Aristotle defines a syllogism as "a discourse in which certain (specific) things having been supposed, something different from the things supposed results of necessity because these things are so." Despite this very general definition, in his work Prior Analytics Aristotle limits himself to categorical syllogisms, which consist of three categorical propositions; these included categorical modal syllogisms.
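The categorical form Aristotle restricted himself to can be illustrated with the mood later known as Barbara (the schema below is a standard textbook illustration, not a quotation from the Prior Analytics):

```latex
% Barbara (AAA-1), the paradigm categorical syllogism:
% two universal affirmative premises yield a universal affirmative conclusion.
\begin{array}{ll}
\text{All } M \text{ are } P & \text{(major premise)}\\
\text{All } S \text{ are } M & \text{(minor premise)}\\
\hline
\text{Therefore, all } S \text{ are } P & \text{(conclusion)}
\end{array}
```

Substituting terms gives the familiar instance: all men are mortal; all Greeks are men; therefore all Greeks are mortal. The conclusion "results of necessity" in Aristotle's sense because any assignment of classes to S, M, and P that makes both premises true also makes the conclusion true.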
Middle Ages
Although the three great Greek philosophers disagreed with one another on specific points, they all agreed that rational thought could bring to light knowledge that was self-evident: information that humans otherwise could not know without the use of reason. After Aristotle's death, Western rationalistic thought was generally characterized by its application to theology, such as in the works of Augustine, the Islamic philosophers Avicenna (Ibn Sina) and Averroes (Ibn Rushd), and the Jewish philosopher and theologian Maimonides. The Waldensian sect also incorporated rationalism into their movement. One notable event in the Western timeline was the philosophy of Thomas Aquinas, who attempted to merge Greek rationalism and Christian revelation in the thirteenth century. Generally, the Roman Catholic Church viewed Rationalists as a threat, labeling them as those who "while admitting revelation, reject from the word of God whatever, in their private judgment, is inconsistent with human reason."
Classical rationalism
René Descartes (1596–1650)
Descartes was the first of the modern rationalists and has been dubbed the 'Father of Modern Philosophy.' Much subsequent Western philosophy is a response to his writings, which are studied closely to this day.
Descartes thought that only knowledge of eternal truths (including the truths of mathematics, and the epistemological and metaphysical foundations of the sciences) could be attained by reason alone; other knowledge, such as the knowledge of physics, required experience of the world, aided by the scientific method. He also argued that although dreams appear as real as sense experience, these dreams cannot provide persons with knowledge. Also, since conscious sense experience can be the cause of illusions, sense experience itself can be doubted. As a result, Descartes deduced that a rational pursuit of truth should doubt every belief about sensory reality. He elaborated these beliefs in such works as Discourse on the Method, Meditations on First Philosophy, and Principles of Philosophy. Descartes developed a method to attain truths according to which nothing that cannot be recognised by the intellect (or reason) can be classified as knowledge. These truths are gained "without any sensory experience," according to Descartes. Truths that are attained by reason are broken down into elements that intuition can grasp, which, through a purely deductive process, will result in clear truths about reality.
Descartes therefore argued, as a result of his method, that reason alone determined knowledge, and that this could be done independently of the senses. For instance, his famous dictum, cogito ergo sum or "I think, therefore I am", is a conclusion reached a priori, i.e., prior to any kind of experience on the matter. The simple meaning is that doubting one's existence, in and of itself, proves that an "I" exists to do the thinking. In other words, doubting one's own doubting is absurd. This was, for Descartes, an irrefutable principle upon which to ground all other knowledge. Descartes posited a metaphysical dualism, distinguishing between the substances of the human body ("res extensa") and the mind or soul ("res cogitans"). This crucial distinction would be left unresolved and lead to what is known as the mind–body problem, since the two substances in the Cartesian system are independent of each other and irreducible.
Baruch Spinoza (1632–1677)
The philosophy of Baruch Spinoza is a systematic, logical, rational philosophy developed in seventeenth-century Europe. Spinoza's philosophy is a system of ideas constructed upon basic building blocks with an internal consistency with which he tried to answer life's major questions and in which he proposed that "God exists only philosophically." He was heavily influenced by Descartes, Euclid and Thomas Hobbes, as well as theologians in the Jewish philosophical tradition such as Maimonides. But his work was in many respects a departure from the Judeo-Christian tradition. Many of Spinoza's ideas continue to vex thinkers today and many of his principles, particularly regarding the emotions, have implications for modern approaches to psychology. To this day, many important thinkers have found Spinoza's "geometrical method" difficult to comprehend: Goethe admitted that he found this concept confusing. His magnum opus, Ethics, contains unresolved obscurities and has a forbidding mathematical structure modeled on Euclid's geometry. Spinoza's philosophy attracted admirers such as Albert Einstein, as well as much intellectual attention.
Gottfried Leibniz (1646–1716)
Leibniz was the last major figure of seventeenth-century rationalism who contributed heavily to other fields such as metaphysics, epistemology, logic, mathematics, physics, jurisprudence, and the philosophy of religion; he is also considered to be one of the last "universal geniuses". He did not develop his system, however, independently of these advances. Leibniz rejected Cartesian dualism and denied the existence of a material world. In Leibniz's view there are infinitely many simple substances, which he called "monads" (which he derived directly from Proclus).
Leibniz developed his theory of monads in response to both Descartes and Spinoza, because the rejection of their visions forced him to arrive at his own solution. Monads are the fundamental unit of reality, according to Leibniz, constituting both inanimate and animate objects. These units of reality represent the universe, though they are not subject to the laws of causality or space (which he called "well-founded phenomena"). Leibniz, therefore, introduced his principle of pre-established harmony to account for apparent causality in the world.
Immanuel Kant (1724–1804)
Kant is one of the central figures of modern philosophy, and set the terms with which all subsequent thinkers have had to grapple. He argued that human perception structures natural laws, and that reason is the source of morality. His thought continues to hold a major influence in contemporary thought, especially in fields such as metaphysics, epistemology, ethics, political philosophy, and aesthetics.
Kant named his brand of epistemology "Transcendental Idealism", and he first laid out these views in his famous work The Critique of Pure Reason. In it he argued that there were fundamental problems with both rationalist and empiricist dogma. To the rationalists he argued, broadly, that pure reason is flawed when it goes beyond its limits and claims to know those things that are necessarily beyond the realm of every possible experience: the existence of God, free will, and the immortality of the human soul. Kant referred to these objects as "The Thing in Itself" and goes on to argue that their status as objects beyond all possible experience by definition means we cannot know them. To the empiricists he argued that while it is correct that experience is fundamentally necessary for human knowledge, reason is necessary for processing that experience into coherent thought. He therefore concludes that both reason and experience are necessary for human knowledge. In the same way, Kant also argued that it was wrong to regard thought as mere analysis. "In Kant's views, a priori concepts do exist, but if they are to lead to the amplification of knowledge, they must be brought into relation with empirical data".
Contemporary rationalism
Rationalism has become a rarer label of philosophers today; rather, many different kinds of specialised rationalisms are identified. For example, Robert Brandom has appropriated the terms "rationalist expressivism" and "rationalist pragmatism" as labels for aspects of his programme in Articulating Reasons, and identified "linguistic rationalism", the claim that the contents of propositions "are essentially what can serve as both premises and conclusions of inferences", as a key thesis of Wilfrid Sellars.
Outside of academic philosophy, some participants in the internet communities surrounding LessWrong and Slate Star Codex have described themselves as "rationalists." The term has also been used in this way by critics such as Timnit Gebru.
Criticism
Rationalism was criticized by American psychologist William James for being out of touch with reality. James also criticized rationalism for representing the universe as a closed system, which contrasts with his view that the universe is an open system.
Proponents of emotional choice theory criticize rationalism by drawing on new findings from emotion research in psychology and neuroscience. They point out that the rationalist paradigm is generally based on the assumption that decision-making is a conscious and reflective process based on thoughts and beliefs. It presumes that people decide on the basis of calculation and deliberation. However, cumulative research in neuroscience suggests that only a small part of the brain's activities operate at the level of conscious reflection. The vast majority of its activities consist of unconscious appraisals and emotions. The significance of emotions in decision-making has generally been ignored by rationalism, according to these critics. Moreover, emotional choice theorists contend that the rationalist paradigm has difficulty incorporating emotions into its models, because it cannot account for the social nature of emotions. Even though emotions are felt by individuals, psychologists and sociologists have shown that emotions cannot be isolated from the social environment in which they arise. Emotions are inextricably intertwined with people's social norms and identities, which are typically outside the scope of standard rationalist accounts. Emotional choice theory seeks to capture not only the social but also the physiological and dynamic character of emotions. It represents a unitary action model to organize, explain, and predict the ways in which emotions shape decision-making.
See also
References
Sources
Primary
Descartes, René (1637), Discourse on the Method.
Spinoza, Baruch (1677), Ethics.
Leibniz, Gottfried (1714), Monadology.
Kant, Immanuel (1781/1787), Critique of Pure Reason.
Secondary
Audi, Robert (ed., 1999), The Cambridge Dictionary of Philosophy, Cambridge University Press, Cambridge, 1995. 2nd edition, 1999.
Blackburn, Simon (1996), The Oxford Dictionary of Philosophy, Oxford University Press, Oxford, 1994. Paperback edition with new Chronology, 1996.
Bourke, Vernon J. (1962), "Rationalism," p. 263 in Runes (1962).
Douglas, Alexander X.: Spinoza and Dutch Cartesianism: Philosophy and Theology. (Oxford: Oxford University Press, 2015)
Förster, Eckart; Melamed, Yitzhak Y. (eds.): Spinoza and German Idealism. (Cambridge: Cambridge University Press, 2012)
Fraenkel, Carlos; Perinetti, Dario; Smith, Justin E. H. (eds.): The Rationalists: Between Tradition and Innovation. (Dordrecht: Springer, 2011)
Hampshire, Stuart: Spinoza and Spinozism. (Oxford: Clarendon Press; New York: Oxford University Press, 2005)
Huenemann, Charles; Gennaro, Rocco J. (eds.): New Essays on the Rationalists. (New York: Oxford University Press, 1999)
Lacey, A.R. (1996), A Dictionary of Philosophy, 1st edition, Routledge and Kegan Paul, 1976. 2nd edition, 1986. 3rd edition, Routledge, London, 1996.
Loeb, Louis E.: From Descartes to Hume: Continental Metaphysics and the Development of Modern Philosophy. (Ithaca, New York: Cornell University Press, 1981)
Nyden-Bullock, Tammy: Spinoza's Radical Cartesian Mind. (Continuum, 2007)
Pereboom, Derk (ed.): The Rationalists: Critical Essays on Descartes, Spinoza, and Leibniz. (Lanham, MD: Rowman & Littlefield, 1999)
Phemister, Pauline: The Rationalists: Descartes, Spinoza and Leibniz. (Malden, MA: Polity Press, 2006)
Runes, Dagobert D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Strazzoni, Andrea: Dutch Cartesianism and the Birth of Philosophy of Science: A Reappraisal of the Function of Philosophy from Regius to 's Gravesande, 1640–1750. (Berlin: De Gruyter, 2018)
Verbeek, Theo: Descartes and the Dutch: Early Reactions to Cartesian Philosophy, 1637–1650. (Carbondale: Southern Illinois University Press, 1992)
External links
John F. Hurst (1867), History of Rationalism Embracing a Survey of the Present State of Protestant Theology
Epistemological theories
Philosophical schools and traditions
Reasoning
Metaphysics
Metaphysics is the branch of philosophy that examines the basic structure of reality. It is traditionally seen as the study of mind-independent features of the world, but some modern theorists view it as an inquiry into the fundamental categories of human understanding. It is sometimes characterized as first philosophy to suggest that it is more fundamental than other forms of philosophical inquiry.
Metaphysics encompasses a wide range of general and abstract topics. It investigates the nature of existence, the features all entities have in common, and their division into categories of being. An influential division is between particulars and universals. Particulars are individual unique entities, like a specific apple. Universals are general repeatable entities that characterize particulars, like the color red. Modal metaphysics examines what it means for something to be possible or necessary. Metaphysicians also explore the concepts of space, time, and change, and their connection to causality and the laws of nature. Other topics include how mind and matter are related, whether everything in the world is predetermined, and whether there is free will.
Metaphysicians use various methods to conduct their inquiry. Traditionally, they rely on rational intuitions and abstract reasoning but have more recently also included empirical approaches associated with scientific theories. Due to the abstract nature of its topic, metaphysics has received criticisms questioning the reliability of its methods and the meaningfulness of its theories. Metaphysics is relevant to many fields of inquiry that often implicitly rely on metaphysical concepts and assumptions.
The roots of metaphysics lie in antiquity with speculations about the nature and origin of the universe, like those found in the Upanishads in ancient India, Daoism in ancient China, and pre-Socratic philosophy in ancient Greece. During the subsequent medieval period in the West, discussions about the nature of universals were influenced by the philosophies of Plato and Aristotle. The modern period saw the emergence of various comprehensive systems of metaphysics, many of which embraced idealism. In the 20th century, a "revolt against idealism" was started, metaphysics was once declared meaningless, and then revived with various criticisms of earlier theories and new approaches to metaphysical inquiry.
Definition
Metaphysics is the study of the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind. It is one of the oldest branches of philosophy.
The precise nature of metaphysics is disputed and its characterization has changed in the course of history. Some approaches see metaphysics as a unified field and give a wide-sweeping definition by understanding it as the study of "fundamental questions about the nature of reality" or as an inquiry into the essences of things. Another approach doubts that the different areas of metaphysics share a set of underlying features and provides instead a fine-grained characterization by listing all the main topics investigated by metaphysicians. Some definitions are descriptive by providing an account of what metaphysicians do while others are normative and prescribe what metaphysicians ought to do.
Two historically influential definitions in ancient and medieval philosophy understand metaphysics as the science of the first causes and as the study of being qua being, that is, the topic of what all beings have in common and to what fundamental categories they belong. In the modern period, the scope of metaphysics expanded to include topics such as the distinction between mind and body and free will. Some philosophers follow Aristotle in describing metaphysics as "first philosophy", suggesting that it is the most basic inquiry upon which all other branches of philosophy depend in some way.
Metaphysics is traditionally understood as a study of mind-independent features of reality. Starting with Immanuel Kant's critical philosophy, an alternative conception gained prominence that focuses on conceptual schemes rather than external reality. Kant distinguishes transcendent metaphysics, which aims to describe the objective features of reality beyond sense experience, from critical metaphysics, which outlines the aspects and principles underlying all human thought and experience. Philosopher P. F. Strawson further explored the role of conceptual schemes, contrasting descriptive metaphysics, which articulates conceptual schemes commonly used to understand the world, with revisionary metaphysics, which aims to produce better conceptual schemes.
Metaphysics differs from the individual sciences by studying the most general and abstract aspects of reality. The individual sciences, by contrast, examine more specific and concrete features and restrict themselves to certain classes of entities, such as the focus on physical things in physics, living entities in biology, and cultures in anthropology. It is disputed to what extent this contrast is a strict dichotomy rather than a gradual continuum.
Etymology
The word metaphysics has its origin in the ancient Greek words metá (μετά, meaning 'after', 'above', and 'beyond') and phusiká (φυσικά), as a short form of ta metá ta phusiká, meaning 'what comes after the physics'. This is often interpreted to mean that metaphysics discusses topics that, due to their generality and comprehensiveness, lie beyond the realm of physics and its focus on empirical observation. Metaphysics got its name by a historical accident when Aristotle's book on this subject was published. Aristotle did not use the term metaphysics but his editor (likely Andronicus of Rhodes) may have coined it for its title to indicate that this book should be studied after Aristotle's book on physics: literally, after physics. The term entered the English language through the Latin word metaphysica.
Branches
The nature of metaphysics can also be characterized in relation to its main branches. An influential division from early modern philosophy distinguishes between general and special or specific metaphysics. General metaphysics, also called ontology, takes the widest perspective and studies the most fundamental aspects of being. It investigates the features that all entities share and how entities can be divided into different categories. Categories are the most general kinds, such as substance, property, relation, and fact. Ontologists research which categories there are, how they depend on one another, and how they form a system of categories that provides a comprehensive classification of all entities.
Special metaphysics considers being from more narrow perspectives and is divided into subdisciplines based on the perspective they take. Metaphysical cosmology examines changeable things and investigates how they are connected to form a world as a totality extending through space and time. Rational psychology focuses on metaphysical foundations and problems concerning the mind, such as its relation to matter and the freedom of the will. Natural theology studies the divine and its role as the first cause. The scope of special metaphysics overlaps with other philosophical disciplines, making it unclear whether a topic belongs to it or to areas like philosophy of mind and theology.
Applied metaphysics is a relatively young subdiscipline. It belongs to applied philosophy and studies the applications of metaphysics, both within philosophy and other fields of inquiry. In areas like ethics and philosophy of religion, it addresses topics like the ontological foundations of moral claims and religious doctrines. Beyond philosophy, its applications include the use of ontologies in artificial intelligence, economics, and sociology to classify entities. In psychiatry and medicine, it examines the metaphysical status of diseases.
Meta-metaphysics is the metatheory of metaphysics and investigates the nature and methods of metaphysics. It examines how metaphysics differs from other philosophical and scientific disciplines and assesses its relevance to them. Even though discussions of these topics have a long history in metaphysics, meta-metaphysics has only recently developed into a systematic field of inquiry.
Topics
Existence and categories of being
Metaphysicians often regard existence or being as one of the most basic and general concepts. To exist means to form part of reality, distinguishing real entities from imaginary ones. According to the orthodox view, existence is a property of properties: if an entity exists then its properties are instantiated. A different position states that existence is a property of individuals, meaning that it is similar to other properties, such as shape or size. It is controversial whether all entities have this property. According to Alexius Meinong, there are nonexistent objects, including merely possible objects like Santa Claus and Pegasus. A related question is whether existence is the same for all entities or whether there are different modes or degrees of existence. For instance, Plato held that Platonic forms, which are perfect and immutable ideas, have a higher degree of existence than matter, which can only imperfectly reflect Platonic forms.
Another key concern in metaphysics is the division of entities into distinct groups based on underlying features they share. Theories of categories provide a system of the most fundamental kinds or the highest genera of being by establishing a comprehensive inventory of everything. One of the earliest theories of categories was proposed by Aristotle, who outlined a system of 10 categories. He argued that substances (e.g. man and horse), are the most important category since all other categories like quantity (e.g. four), quality (e.g. white), and place (e.g. in Athens) are said of substances and depend on them. Kant understood categories as fundamental principles underlying human understanding and developed a system of 12 categories, divided into the four classes quantity, quality, relation, and modality. More recent theories of categories were proposed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Many philosophers rely on the contrast between concrete and abstract objects. According to a common view, concrete objects, like rocks, trees, and human beings, exist in space and time, undergo changes, and impact each other as cause and effect, whereas abstract objects, like numbers and sets, exist outside space and time, are immutable, and do not engage in causal relations.
Particulars
Particulars are individual entities and include both concrete objects, like Aristotle, the Eiffel Tower, or a specific apple, and abstract objects, like the number 2 or a specific set in mathematics. Also called individuals, they are unique, non-repeatable entities and contrast with universals, like the color red, which can at the same time exist in several places and characterize several particulars. A widely held view is that particulars instantiate universals but are not themselves instantiated by something else, meaning that they exist in themselves while universals exist in something else. Substratum theory analyzes each particular as a substratum, also called bare particular, together with various properties. The substratum confers individuality to the particular while the properties express its qualitative features or what it is like. This approach is rejected by bundle theorists, who state that particulars are only bundles of properties without an underlying substratum. Some bundle theorists include in the bundle an individual essence, called haecceity, to ensure that each bundle is unique. Another proposal for concrete particulars is that they are individuated by their space-time location.
Concrete particulars encountered in everyday life, like rocks, tables, and organisms, are complex entities composed of various parts. For example, a table is made up of a tabletop and legs, each of which is itself made up of countless particles. The relation between parts and wholes is studied by mereology. The problem of the many is about which groups of entities form mereological wholes, for instance, whether a dust particle on the tabletop is part of the table. According to mereological universalists, every collection of entities forms a whole, meaning that the parts of the table without the dust particle form one whole while they together with it form a second whole. Mereological moderatists hold that certain conditions must be met for a group of entities to compose a whole, for example, that the entities touch one another. Mereological nihilists reject the idea of wholes altogether, claiming that there are no tables and chairs but only particles that are arranged table-wise and chair-wise. A related mereological problem is whether there are simple entities that have no parts, as atomists claim, or not, as continuum theorists contend.
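The three positions on composition can be stated semi-formally in the plural notation common in mereology. Here "Compose(xx, y)" abbreviates "the entities xx compose the whole y" and PP stands for proper parthood; these symbols are illustrative conventions for this sketch, not drawn from any single author cited above:

```latex
% Schematic answers to the question of when entities compose a whole
\begin{align*}
\text{Universalism:} \quad & \forall xx\; \exists y\; \mathrm{Compose}(xx, y)\\
\text{Moderatism:}   \quad & \forall xx\; \bigl(\exists y\; \mathrm{Compose}(xx, y) \leftrightarrow C(xx)\bigr)
  \quad \text{for some nontrivial condition } C \text{ (e.g. mutual contact)}\\
\text{Nihilism:}     \quad & \forall y\; \neg\exists x\; \mathrm{PP}(x, y)
  \quad \text{(nothing has proper parts; only simples exist)}
\end{align*}
```

On this rendering, the dust-particle case is a dispute over the condition C: universalism counts both the table-with-particle and the table-without-particle as wholes, moderatism admits at most one depending on C, and nihilism admits neither.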
Universals
Universals are general entities, encompassing both properties and relations, that express what particulars are like and how they resemble one another. They are repeatable, meaning that they are not limited to a unique existent but can be instantiated by different particulars at the same time. For example, the particulars Nelson Mandela and Mahatma Gandhi instantiate the universal humanity, similar to how a strawberry and a ruby instantiate the universal red.
A topic discussed since ancient philosophy, the problem of universals consists in the challenge of characterizing the ontological status of universals. Realists argue that universals are real, mind-independent entities that exist in addition to particulars. According to Platonic realists, universals exist independently of particulars, which implies that the universal red would continue to exist even if there were no red things. A more moderate form of realism, inspired by Aristotle, states that universals depend on particulars, meaning that they are only real if they are instantiated. Nominalists reject the idea that universals exist in either form. For them, the world is composed exclusively of particulars. Conceptualists offer an intermediate position, stating that universals exist, but only as concepts in the mind used to order experience by classifying entities.
Natural and social kinds are often understood as special types of universals. Entities belonging to the same natural kind share certain fundamental features characteristic of the structure of the natural world. In this regard, natural kinds are not an artificially constructed classification but are discovered, usually by the natural sciences, and include kinds like electrons and tigers. Scientific realists and anti-realists disagree about whether natural kinds exist. Social kinds, like money and baseball, are studied by social metaphysics and characterized as useful social constructions that, while not purely fictional, do not reflect the fundamental structure of mind-independent reality.
Possibility and necessity
The concepts of possibility and necessity convey what can or must be the case, expressed in statements like "it is possible to find a cure for cancer" and "it is necessary that two plus two equals four". They belong to modal metaphysics, which investigates the metaphysical principles underlying them, in particular, why some modal statements are true while others are false. Some metaphysicians hold that modality is a fundamental aspect of reality, meaning that besides facts about what is the case, there are additional facts about what could or must be the case. A different view argues that modal truths are not about an independent aspect of reality but can be reduced to non-modal characteristics, for example, to facts about what properties or linguistic descriptions are compatible with each other or to fictional statements.
Borrowing a term from German philosopher Gottfried Wilhelm Leibniz's theodicy, many metaphysicians use the concept of possible worlds to analyze the meaning and ontological ramifications of modal statements. A possible world is a complete and consistent way things could have been. For example, the dinosaurs were wiped out in the actual world but there are possible worlds in which they are still alive. According to possible world semantics, a statement is possibly true if it is true in at least one possible world, whereas it is necessarily true if it is true in all possible worlds. Modal realists argue that possible worlds exist as concrete entities in the same sense as the actual world, with the main difference being that the actual world is the world we live in while other possible worlds are inhabited by counterparts. This view is controversial and various alternatives have been suggested, for example, that possible worlds only exist as abstract objects or are similar to stories told in works of fiction.
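The truth conditions of possible world semantics can be sketched in a few lines of code. The worlds and propositions below are invented for illustration; a world is modeled simply as the set of propositions true in it.

```python
# A minimal sketch of possible world semantics: each "world" is a set of
# propositions that hold in it. The worlds listed here are illustrative.
worlds = [
    {"dinosaurs are extinct", "two plus two equals four"},   # the actual world
    {"dinosaurs are alive", "two plus two equals four"},     # a world where they survived
    {"a cure for cancer exists", "two plus two equals four"},
]

def possibly(proposition):
    """A statement is possibly true if it holds in at least one possible world."""
    return any(proposition in world for world in worlds)

def necessarily(proposition):
    """A statement is necessarily true if it holds in every possible world."""
    return all(proposition in world for world in worlds)

print(possibly("dinosaurs are alive"))         # True: holds in one world
print(necessarily("two plus two equals four")) # True: holds in all worlds
print(necessarily("dinosaurs are alive"))      # False: fails in the actual world
```

The sketch mirrors the definitions in the text directly: `any` implements "true in at least one possible world" and `all` implements "true in all possible worlds".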
Space, time, and change
Space and time are dimensions that entities occupy. Spacetime realists state that space and time are fundamental aspects of reality and exist independently of the human mind. Spacetime idealists, by contrast, hold that space and time are constructs of the human mind, created to organize and make sense of reality. Spacetime absolutism or substantivalism understands spacetime as a distinct object, with some metaphysicians conceptualizing it as a container that holds all other entities within it. Spacetime relationism sees spacetime not as an object but as a network of relations between objects, such as the spatial relation of being next to and the temporal relation of coming before.
In the metaphysics of time, an important contrast is between the A-series and the B-series. According to the A-series theory, the flow of time is real, meaning that events are categorized into the past, present, and future. The present continually moves forward in time and events that are in the present now will eventually change their status and lie in the past. From the perspective of the B-series theory, time is static, and events are ordered by the temporal relations earlier-than and later-than without any essential difference between past, present, and future. Eternalism holds that past, present, and future are equally real, whereas presentism asserts that only entities in the present exist.
Material objects persist through time and change in the process, like a tree that grows or loses leaves. The main ways of conceptualizing persistence through time are endurantism and perdurantism. According to endurantism, material objects are three-dimensional entities that are wholly present at each moment. As they change, they gain or lose properties but otherwise remain the same. Perdurantists see material objects as four-dimensional entities that extend through time and are made up of different temporal parts. At each moment, only one part of the object is present, not the object as a whole. Change means that an earlier part is qualitatively different from a later part. For example, when a banana ripens, there is an unripe part followed by a ripe part.
Causality
Causality is the relation between cause and effect whereby one entity produces or affects another entity. For instance, if a person bumps a glass and spills its contents then the bump is the cause and the spill is the effect. Besides the single-case causation between particulars in this example, there is also general-case causation expressed in statements such as "smoking causes cancer". The term agent causation is used when people and their actions cause something. Causation is usually interpreted deterministically, meaning that a cause always brings about its effect. This view is rejected by probabilistic theories, which claim that the cause merely increases the probability that the effect occurs. This view can explain that smoking causes cancer even though this does not happen in every single case.
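The probabilistic view can be made concrete with a toy example: a cause raises the probability of its effect without guaranteeing it. The frequencies below are invented purely for illustration.

```python
# Illustrative sketch of probabilistic causation: smoking raises the
# probability of cancer even though most smokers in this toy population
# never develop it. All counts are invented for illustration.
population = (
    [{"smokes": True,  "cancer": True}]  * 15 +
    [{"smokes": True,  "cancer": False}] * 35 +
    [{"smokes": False, "cancer": True}]  * 5  +
    [{"smokes": False, "cancer": False}] * 45
)

def prob(event, given=None):
    """Relative frequency of `event`, optionally restricted to a subgroup."""
    group = [p for p in population if given is None or given(p)]
    return sum(event(p) for p in group) / len(group)

p_cancer = prob(lambda p: p["cancer"])
p_cancer_given_smoking = prob(lambda p: p["cancer"], given=lambda p: p["smokes"])

# The cause raises the probability of the effect (0.3 > 0.2) without
# bringing it about in every single case.
print(p_cancer_given_smoking, p_cancer)
```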
The regularity theory of causation, inspired by David Hume's philosophy, states that causation is nothing but a constant conjunction in which the mind apprehends that one phenomenon, like putting one's hand in a fire, is always followed by another phenomenon, like a feeling of pain. According to nomic regularity theories, regularities manifest as laws of nature studied by science. Counterfactual theories focus not on regularities but on how effects depend on their causes. They state that effects owe their existence to their causes and would not occur without them. According to primitivism, causation is a basic concept that cannot be analyzed in terms of non-causal concepts, such as regularities or dependence relations. One form of primitivism identifies causal powers inherent in entities as the underlying mechanism. Eliminativists reject the above theories by holding that there is no causation.
Mind and free will
Mind encompasses phenomena like thinking, perceiving, feeling, and desiring as well as the underlying faculties responsible for these phenomena. The mind–body problem is the challenge of clarifying the relation between physical and mental phenomena. According to Cartesian dualism, minds and bodies are distinct substances. They causally interact with each other in various ways but can, at least in principle, exist on their own. This view is rejected by monists, who argue that reality is made up of only one kind of thing. According to idealism, everything is mental, including physical objects, which may be understood as ideas or perceptions of conscious minds. Materialists, by contrast, state that all reality is at its core material. Some deny that mind exists but the more common approach is to explain mind in terms of certain aspects of matter, such as brain states, behavioral dispositions, or functional roles. Neutral monists argue that reality is fundamentally neither material nor mental and suggest that matter and mind are both derivative phenomena. A key aspect of the mind–body problem is the hard problem of consciousness or how to explain that physical systems like brains can produce phenomenal consciousness.
The status of free will as the ability of a person to choose their actions is a central aspect of the mind–body problem. Metaphysicians are interested in the relation between free will and causal determinism, the view that everything in the universe, including human behavior, is determined by preceding events and laws of nature. It is controversial whether causal determinism is true, and, if so, whether this would imply that there is no free will. According to incompatibilism, free will cannot exist in a deterministic world since there is no true choice or control if everything is determined. Hard determinists infer from this that there is no free will, whereas libertarians conclude that determinism must be false. Compatibilists offer a third perspective, arguing that determinism and free will do not exclude each other, for instance, because a person can still act in tune with their motivation and choices even if they are determined by other forces. Free will plays a key role in ethics regarding the moral responsibility people have for what they do.
Others
Identity is a relation that every entity has to itself as a form of sameness. It refers to numerical identity when the very same entity is involved, as in the statement "the morning star is the evening star" (both are the planet Venus). In a slightly different sense, it encompasses qualitative identity, also called exact similarity and indiscernibility, which occurs when two distinct entities are exactly alike, such as perfect identical twins. The principle of the indiscernibility of identicals is widely accepted and holds that numerically identical entities exactly resemble one another. The converse principle, known as identity of indiscernibles or Leibniz's Law, is more controversial and states that two entities are numerically identical if they exactly resemble one another. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time, whereas diachronic identity is about the same entity at different times, as in statements like "the table I bought last year is the same as the table in my dining room now". Personal identity is a related topic in metaphysics that uses the term identity in a slightly different sense and concerns questions like what personhood is or what makes someone a person.
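The contrast between numerical and qualitative identity can be sketched by modeling entities as collections of property labels. This is a simplified illustration only; the entities and properties are invented, and real metaphysical properties are of course not string labels.

```python
# Illustrative sketch: qualitative identity (exact resemblance) compares
# an entity's properties; numerical identity asks whether the very same
# entity is involved. Entities and properties are invented for illustration.
twin_a = frozenset({"human", "brown eyes", "170 cm tall"})
twin_b = frozenset({"human", "brown eyes", "170 cm tall"})
morning_star = evening_star = frozenset({"planet", "second from the sun"})

def qualitatively_identical(x, y):
    """Exact resemblance: the two share all their properties."""
    return x == y

def numerically_identical(x, y):
    """Sameness of entity: x and y are one and the same object."""
    return x is y

print(qualitatively_identical(twin_a, twin_b))            # True: perfect twins
print(numerically_identical(twin_a, twin_b))              # False: two distinct entities
print(numerically_identical(morning_star, evening_star))  # True: one entity (Venus)
```

In this toy model the indiscernibility of identicals holds automatically: one and the same object trivially has all the same properties as itself.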
Various contemporary metaphysicians rely on the concepts of truth, truth-bearer, and truthmaker to conduct their inquiry. Truth is a property of being in accord with reality. Truth-bearers are entities that can be true or false, such as linguistic statements and mental representations. A truthmaker of a statement is the entity whose existence makes the statement true. For example, the statement "a tomato is red" is true because there exists a red tomato as its truthmaker. Based on this observation, it is possible to pursue metaphysical research by asking what the truthmakers of statements are, with different areas of metaphysics being dedicated to different types of statements. According to this view, modal metaphysics asks what makes statements about what is possible and necessary true while the metaphysics of time is interested in the truthmakers of temporal statements about the past, present, and future.
Methodology
Metaphysicians employ a variety of methods to develop metaphysical theories and formulate arguments for and against them. Traditionally, a priori methods have been the dominant approach. They rely on rational intuition and abstract reasoning from general principles rather than sensory experience. A posteriori approaches, by contrast, ground metaphysical theories in empirical observations and scientific theories. Some metaphysicians incorporate perspectives from fields such as physics, psychology, linguistics, and history into their inquiry. The two approaches are not mutually exclusive: it is possible to combine elements from both. The method a metaphysician chooses often depends on their understanding of the nature of metaphysics, for example, whether they see it as an inquiry into the mind-independent structure of reality, as metaphysical realists claim, or the principles underlying thought and experience, as some metaphysical anti-realists contend.
A priori approaches often rely on intuitions: non-inferential impressions about the correctness of specific claims or general principles. For example, arguments for the A-theory of time, which states that time flows from the past through the present and into the future, often rely on pre-theoretical intuitions associated with the sense of the passage of time. Some approaches use intuitions to establish a small set of self-evident fundamental principles, known as axioms, and employ deductive reasoning to build complex metaphysical systems by drawing conclusions from these axioms. Intuition-based approaches can be combined with thought experiments, which help evoke and clarify intuitions by linking them to imagined situations. They use counterfactual thinking to assess the possible consequences of these situations. For example, to explore the relation between matter and consciousness, some theorists compare humans to philosophical zombies: hypothetical creatures identical to humans but without conscious experience. A related method relies on commonly accepted beliefs instead of intuitions to formulate arguments and theories. The common-sense approach is often used to criticize metaphysical theories that deviate significantly from how the average person thinks about an issue. For example, common-sense philosophers have argued that mereological nihilism is false since it implies that commonly accepted things, like tables, do not exist.
Conceptual analysis, a method particularly prominent in analytic philosophy, aims to decompose metaphysical concepts into component parts to clarify their meaning and identify essential relations. In phenomenology, the method of eidetic variation is used to investigate essential structures underlying phenomena. This method involves imagining an object and varying its features to determine which ones are essential and cannot be changed. The transcendental method is a further approach and examines the metaphysical structure of reality by observing what entities there are and studying the conditions of possibility without which these entities could not exist.
Some approaches give less importance to a priori reasoning and view metaphysics as a practice continuous with the empirical sciences that generalizes their insights while making their underlying assumptions explicit. This approach is known as naturalized metaphysics and is closely associated with the work of Willard Van Orman Quine. He relies on the idea that true sentences from the sciences and other fields have ontological commitments, that is, they imply that certain entities exist. For example, if the sentence "some electrons are bonded to protons" is true then it can be used to justify that electrons and protons exist. Quine used this insight to argue that one can learn about metaphysics by closely analyzing scientific claims to understand what kind of metaphysical picture of the world they presuppose.
In addition to methods of conducting metaphysical inquiry, there are various methodological principles used to decide between competing theories by comparing their theoretical virtues. Ockham's Razor is a well-known principle that gives preference to simple theories, in particular, those that assume that few entities exist. Other principles consider explanatory power, theoretical usefulness, and proximity to established beliefs.
Criticism
Despite its status as one of the main branches of philosophy, metaphysics has received numerous criticisms questioning its legitimacy as a field of inquiry. One criticism argues that metaphysical inquiry is impossible because humans lack the cognitive capacities needed to access the ultimate nature of reality. This line of thought leads to skepticism about the possibility of metaphysical knowledge. Empiricists often follow this idea, like Hume, who argued that there is no good source of metaphysical knowledge since metaphysics lies outside the field of empirical knowledge and relies on dubious intuitions about the realm beyond sensory experience. A related argument for the unreliability of metaphysical theorizing points to the deep and lasting disagreements about metaphysical issues, suggesting a lack of overall progress.
Another criticism holds that the problem lies not with human cognitive abilities but with metaphysical statements themselves, which some claim are neither true nor false but meaningless. According to logical positivists, for instance, the meaning of a statement is given by the procedure used to verify it, usually through the observations that would confirm it. Based on this controversial assumption, they argue that metaphysical statements are meaningless since they make no testable predictions about experience.
A slightly weaker position allows metaphysical statements to have meaning while holding that metaphysical disagreements are merely verbal disputes about different ways to describe the world. According to this view, the disagreement in the metaphysics of composition about whether there are tables or only particles arranged table-wise is a trivial debate about linguistic preferences without any substantive consequences for the nature of reality. The position that metaphysical disputes have no meaning or no significant point is called metaphysical or ontological deflationism. This view is opposed by so-called serious metaphysicians, who contend that metaphysical disputes are about substantial features of the underlying structure of reality. A closely related debate between ontological realists and anti-realists concerns the question of whether there are any objective facts that determine which metaphysical theories are true. A different criticism, formulated by pragmatists, sees the fault of metaphysics not in its cognitive ambitions or the meaninglessness of its statements, but in its practical irrelevance and lack of usefulness.
Martin Heidegger criticized traditional metaphysics, saying that it fails to distinguish between individual entities and being as their ontological ground. His attempt to reveal the underlying assumptions and limitations in the history of metaphysics to "overcome metaphysics" influenced Jacques Derrida's method of deconstruction. Derrida employed this approach to criticize metaphysical texts for relying on opposing terms, like presence and absence, which he thought were inherently unstable and contradictory.
There is no consensus about the validity of these criticisms and whether they affect metaphysics as a whole or only certain issues or approaches in it. For example, it could be the case that certain metaphysical disputes are merely verbal while others are substantive.
Relation to other disciplines
Metaphysics is related to many fields of inquiry by investigating their basic concepts and relation to the fundamental structure of reality. For example, the natural sciences rely on concepts such as law of nature, causation, necessity, and spacetime to formulate their theories and predict or explain the outcomes of experiments. While scientists primarily focus on applying these concepts to specific situations, metaphysics examines their general nature and how they depend on each other. For instance, physicists formulate laws of nature, like laws of gravitation and thermodynamics, to describe how physical systems behave under various conditions. Metaphysicians, by contrast, examine what all laws of nature have in common, asking whether they merely describe contingent regularities or express necessary relations. New scientific discoveries have also influenced existing and inspired new metaphysical theories. Einstein's theory of relativity, for instance, prompted various metaphysicians to conceive space and time as a unified dimension rather than as independent dimensions. Empirically focused metaphysicians often rely on scientific theories to ground their theories about the nature of reality in empirical observations.
Similar issues arise in the social sciences where metaphysicians investigate their basic concepts and analyze their metaphysical implications. This includes questions like whether social facts emerge from non-social facts, whether social groups and institutions have mind-independent existence, and how they persist through time. Metaphysical assumptions and topics in psychology and psychiatry include the questions about the relation between body and mind, whether the nature of the human mind is historically fixed, and what the metaphysical status of diseases is.
Metaphysics is similar to both physical cosmology and theology in its exploration of the first causes and the universe as a whole. Key differences are that metaphysics relies on rational inquiry while physical cosmology gives more weight to empirical observations and theology incorporates divine revelation and other faith-based doctrines. Historically, cosmology and theology were considered subfields of metaphysics.
Metaphysics in the form of ontology plays a central role in computer science to classify objects and formally represent information about them. Unlike metaphysicians, computer scientists are usually not interested in providing a single all-encompassing characterization of reality as a whole. Instead, they employ many different ontologies, each one concerned only with a limited domain of entities. For instance, an organization may use an ontology with categories such as person, company, address, and name to represent information about clients and employees. Ontologies provide standards or conceptualizations for encoding and storing information in a structured way, enabling computational processes to use and transform their information for a variety of purposes. Some knowledge bases integrate information from various domains, which brings with it the challenge of handling data that was formulated using diverse ontologies. They address this by providing an upper ontology that defines concepts at a higher level of abstraction, applicable to all domains. Influential upper ontologies include Suggested Upper Merged Ontology and Basic Formal Ontology.
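How such ontologies support classification can be sketched with a miniature example. The categories and instances below are invented for illustration; real systems typically use standardized formats such as the Web Ontology Language (OWL) rather than ad hoc dictionaries.

```python
# Illustrative sketch of a tiny ontology: categories arranged in a
# subsumption hierarchy, plus typed instances. All names are invented.
subclass_of = {       # category -> parent category (None at the top)
    "Person": "Agent",
    "Company": "Agent",
    "Agent": None,
}

instances = {         # instance -> its direct category
    "alice": "Person",
    "acme_corp": "Company",
}

def is_a(instance, category):
    """True if the instance falls under the category, directly or via parents."""
    current = instances[instance]
    while current is not None:
        if current == category:
            return True
        current = subclass_of.get(current)
    return False

print(is_a("alice", "Person"))    # True: direct category
print(is_a("alice", "Agent"))     # True: inherited through the hierarchy
print(is_a("alice", "Company"))   # False
```

An upper ontology, in these terms, would supply shared top-level categories like `Agent` so that separately developed domain ontologies can be combined.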
Logic as the study of correct reasoning is often used by metaphysicians as a tool to engage in their inquiry and express insights through precise logical formulas. Another relation between the two fields concerns the metaphysical assumptions associated with logical systems. Many logical systems like first-order logic rely on existential quantifiers to express existential statements. For instance, in the logical formula ∃x Horse(x), the existential quantifier ∃x is applied to the predicate Horse(x) to express that there are horses. Following Quine, various metaphysicians assume that existential quantifiers carry ontological commitments, meaning that existential statements imply that the entities over which one quantifies are part of reality.
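Over a finite domain, the truth condition of an existential quantifier reduces to a simple check: the quantified statement is true if the predicate holds of at least one element. The domain and predicate below are invented for illustration.

```python
# Illustrative sketch: evaluating an existential quantifier over a finite
# domain. "There exists an x such that Horse(x)" is true when the predicate
# holds for at least one element of the domain. Names are invented.
domain = ["Bucephalus", "Socrates", "Rocinante"]
horses = {"Bucephalus", "Rocinante"}

def is_horse(x):
    return x in horses

exists_horse = any(is_horse(x) for x in domain)
print(exists_horse)  # True: the predicate holds of Bucephalus
```

On the Quinean reading sketched in the text, asserting `exists_horse` commits one to horses being part of the domain over which one quantifies.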
History
The history of metaphysics examines how the inquiry into the basic structure of reality has evolved in the course of history. Metaphysics originated in the ancient period from speculations about the nature and origin of the cosmos. In ancient India, starting in the 7th century BCE, the Upanishads were written as religious and philosophical texts that examine how ultimate reality constitutes the ground of all being. They further explore the nature of the self and how it can reach liberation by understanding ultimate reality. This period also saw the emergence of Buddhism in the 6th century BCE, which denies the existence of an independent self and understands the world as a cyclic process. At about the same time in ancient China, the school of Daoism was formed and explored the natural order of the universe, known as Dao, and how it is characterized by the interplay of yin and yang as two correlated forces.
In ancient Greece, metaphysics emerged in the 6th century BCE with the pre-Socratic philosophers, who gave rational explanations of the cosmos as a whole by examining the first principles from which everything arises. Building on their work, Plato (427–347 BCE) formulated his theory of forms, which states that eternal forms or ideas possess the highest kind of reality while the material world is only an imperfect reflection of them. Aristotle (384–322 BCE) accepted Plato's idea that there are universal forms but held that they cannot exist on their own but depend on matter. He also proposed a system of categories and developed a comprehensive framework of the natural world through his theory of the four causes. Starting in the 4th century BCE, Hellenistic philosophy explored the rational order underlying the cosmos and the idea that it is made up of indivisible atoms. Neoplatonism emerged towards the end of the ancient period in the 3rd century CE and introduced the idea of "the One" as the transcendent and ineffable source of all creation.
Meanwhile, in Indian Buddhism, the Madhyamaka school developed the idea that all phenomena are inherently empty without a permanent essence. The consciousness-only doctrine of the Yogācāra school stated that experienced objects are mere transformations of consciousness and do not reflect external reality. The Hindu school of Samkhya philosophy introduced a metaphysical dualism with pure consciousness and matter as its fundamental categories. In China, the school of Xuanxue explored metaphysical problems such as the contrast between being and non-being.
Medieval Western philosophy was profoundly shaped by ancient Greek philosophy. Boethius (477–524 CE) sought to reconcile Plato's and Aristotle's theories of universals, proposing that universals can exist both in matter and mind. His theory inspired the development of nominalism and conceptualism, as in the thought of Peter Abelard (1079–1142 CE). Thomas Aquinas (1224–1274 CE) understood metaphysics as the discipline investigating different meanings of being, such as the contrast between substance and accident, and principles applying to all beings, such as the principle of identity. William of Ockham (1285–1347 CE) proposed Ockham's razor, a methodological principle to choose between competing metaphysical theories. Arabic–Persian philosophy flourished from the early 9th century CE to the late 12th century CE, integrating ancient Greek philosophies to interpret and clarify the teachings of the Quran. Avicenna (980–1037 CE) developed a comprehensive philosophical system that examined the contrast between existence and essence and distinguished between contingent and necessary existence. Medieval India saw the emergence of the monist school of Advaita Vedanta in the 8th century CE, which holds that everything is one and that the idea of many entities existing independently is an illusion. In China, Neo-Confucianism arose in the 9th century CE and explored the concept of li as the rational principle that is the ground of being and reflects the order of the universe.
In the early modern period, René Descartes (1596–1650) developed a substance dualism according to which body and mind exist as independent entities that causally interact. This idea was rejected by Baruch Spinoza (1632–1677), who formulated a monist philosophy suggesting that there is only one substance with both physical and mental attributes that develop side-by-side without interacting. Gottfried Wilhelm Leibniz (1646–1716) introduced the concept of possible worlds and articulated a metaphysical system known as monadology, which views the universe as a collection of simple substances synchronized without causal interaction. Christian Wolff (1679–1754) conceptualized the scope of metaphysics by distinguishing between general and special metaphysics. According to the idealism of George Berkeley (1685–1753), everything is mental, including material objects, which are ideas perceived by the mind. David Hume (1711–1776) made various contributions to metaphysics, including the regularity theory of causation and the idea that there are no necessary connections between distinct entities. His empiricist outlook led him to criticize metaphysical theories that seek ultimate principles inaccessible to sensory experience. This skeptical outlook was embraced by Immanuel Kant (1724–1804), who tried to reconceptualize metaphysics as an inquiry into the basic principles and categories of thought and understanding rather than seeing it as an attempt to comprehend mind-independent reality.
Many developments in the later modern period were shaped by Kant's philosophy. German idealists adopted his idealistic outlook in their attempt to find a unifying principle as the foundation of all reality. Georg Wilhelm Friedrich Hegel (1770–1831) developed a comprehensive system of philosophy that examines how absolute spirit manifests itself. He inspired the British idealism of Francis Herbert Bradley (1846–1924), who interpreted absolute spirit as the all-inclusive totality of being. Arthur Schopenhauer (1788–1860) was a strong critic of German idealism and articulated a different metaphysical vision, positing a blind and irrational will as the underlying principle of reality. Pragmatists like C. S. Peirce (1839–1914) and John Dewey (1859–1952) conceived metaphysics as an observational science of the most general features of reality and experience.
At the turn of the 20th century in analytic philosophy, philosophers such as Bertrand Russell (1872–1970) and G. E. Moore (1873–1958) led a "revolt against idealism". Logical atomists, like Russell and the early Ludwig Wittgenstein (1889–1951), conceived the world as a multitude of atomic facts, which later inspired metaphysicians such as D. M. Armstrong (1926–2014). Alfred North Whitehead (1861–1947) developed process metaphysics as an attempt to provide a holistic description of both the objective and the subjective realms.
Rudolf Carnap (1891–1970) and other logical positivists formulated a wide-ranging criticism of metaphysical statements, arguing that they are meaningless because there is no way to verify them. Other criticisms of traditional metaphysics identified misunderstandings of ordinary language as the source of many traditional metaphysical problems or challenged complex metaphysical deductions by appealing to common sense.
The decline of logical positivism led to a revival of metaphysical theorizing. Willard Van Orman Quine (1908–2000) tried to naturalize metaphysics by connecting it to the empirical sciences. His student David Lewis (1941–2001) employed the concept of possible worlds to formulate his modal realism. Saul Kripke (1940–2022) helped revive discussions of identity and essentialism, distinguishing necessity as a metaphysical notion from the epistemic notion of a priori.
In continental philosophy, Edmund Husserl (1859–1938) engaged in ontology through a phenomenological description of experience, while his student Martin Heidegger (1889–1976) developed fundamental ontology to clarify the meaning of being. Heidegger's philosophy inspired general criticisms of metaphysics by postmodern thinkers like Jacques Derrida (1930–2004). Gilles Deleuze's (1925–1995) approach to metaphysics challenged traditionally influential concepts like substance, essence, and identity by reconceptualizing the field through alternative notions such as multiplicity, event, and difference.
See also
Computational metaphysics
Doctor of Metaphysics
Enrico Berti's classification of metaphysics
Feminist metaphysics
Fundamental question of metaphysics
List of metaphysicians
Metaphysical grounding
References
Notes
Citations
Sources
External links
Metaphysics at Encyclopædia Britannica
Philosophy of science

Philosophy of science is the branch of philosophy concerned with the foundations, methods, and implications of science. Amongst its central questions are the difference between science and non-science, the reliability of scientific theories, and the ultimate purpose and meaning of science as a human endeavour. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of scientific practice, and overlaps with metaphysics, ontology, logic, and epistemology, for example, when it explores the relationship between science and the concept of truth. Philosophy of science is both a theoretical and empirical discipline, relying on philosophical theorising as well as meta-studies of scientific practice. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science.
Many of the central problems concerned with the philosophy of science lack contemporary consensus, including whether science can infer truth about unobservable entities and whether inductive reasoning can be justified as yielding definite scientific knowledge. Philosophers of science also consider philosophical problems within particular sciences (such as biology, physics and social sciences such as economics and psychology). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself.
While philosophical thought pertaining to science dates back at least to the time of Aristotle, the general philosophy of science emerged as a distinct discipline only in the 20th century following the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Karl Popper criticized logical positivism and helped establish a modern set of standards for scientific methodology. Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as the steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period.
Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W. V. Quine and others. Some thinkers such as Stephen Jay Gould seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, and Paul Feyerabend in particular, argue against the existence of the "scientific method", so all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience.
Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity, to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another. Can chemistry be reduced to physics, or can sociology be reduced to individual psychology? The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, psychology, and the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations.
Introduction
Defining science
Distinguishing between science and non-science is referred to as the demarcation problem. For example, should psychoanalysis, creation science, and historical materialism be considered pseudosciences? Karl Popper called this the central question in the philosophy of science. However, no unified account of the problem has won acceptance among philosophers, and some regard the problem as unsolvable or uninteresting. Martin Gardner has argued for the use of a Potter Stewart standard ("I know it when I see it") for recognizing pseudoscience.
Early attempts by the logical positivists grounded science in observation while non-science was non-observational and hence meaningless. Popper argued that the central property of science is falsifiability. That is, every genuinely scientific claim is capable of being proven false, at least in principle.
An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of it but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated.
Scientific explanation
A closely related question is what counts as a good scientific explanation. In addition to providing predictions about future events, society often takes scientific theories to provide explanations for events that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what it means to say a scientific theory has explanatory power.
One early and influential account of scientific explanation is the deductive-nomological model. It says that a successful scientific explanation must deduce the occurrence of the phenomena in question from a scientific law. This view has been subjected to substantial criticism, resulting in several widely acknowledged counterexamples to the theory. It is especially challenging to characterize what is meant by an explanation when the thing to be explained cannot be deduced from any law because it is a matter of chance, or otherwise cannot be perfectly predicted from what is known. Wesley Salmon developed a model in which a good scientific explanation must be statistically relevant to the outcome to be explained. Others have argued that the key to a good explanation is unifying disparate phenomena or providing a causal mechanism.
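Hempel and Oppenheim's deductive-nomological schema is standardly written as a derivation of the statement to be explained (the explanandum) from general laws together with antecedent conditions:

```latex
\underbrace{L_1, \ldots, L_k}_{\text{general laws}}, \quad
\underbrace{C_1, \ldots, C_m}_{\text{antecedent conditions}}
\;\;\therefore\;\; E \quad \text{(explanandum)}
```

On this view an explanation is a sound deductive argument in which at least one general law does essential work; the counterexamples mentioned above typically exhibit derivations of this form that nonetheless fail to explain.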
Justifying science
Although it is often taken for granted, it is not at all clear how one can infer the validity of a general statement from a number of specific instances or infer the truth of a theory from a series of successful tests. For example, a chicken observes that each morning the farmer comes and gives it food, for hundreds of days in a row. The chicken may therefore use inductive reasoning to infer that the farmer will bring food every morning. However, one morning, the farmer comes and kills the chicken. How is scientific reasoning more trustworthy than the chicken's reasoning?
One approach is to acknowledge that induction cannot achieve certainty, but observing more instances of a general statement can at least make the general statement more probable. So the chicken would be right to conclude from all those mornings that it is likely the farmer will come with food again the next morning, even if it cannot be certain. However, there remain difficult questions about the process of interpreting any given evidence into a probability that the general statement is true. One way out of these particular difficulties is to declare that all beliefs about scientific theories are subjective, or personal, and correct reasoning is merely about how evidence should change one's subjective beliefs over time.
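As a sketch of this probabilistic approach (the model and numbers are illustrative, not from the source): under a uniform prior over the farmer's unknown feeding rate, Laplace's rule of succession gives the probability of food tomorrow after n consecutive fed mornings as (n+1)/(n+2), which grows with the evidence but never reaches certainty.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior probability of another success after observing
    `successes` out of `trials`, under a uniform prior on the
    unknown success rate (Laplace's rule of succession)."""
    return Fraction(successes + 1, trials + 2)

# After 100 consecutive mornings of food, the chicken's credence that
# food comes tomorrow is 101/102 -- high, but never certain.
print(rule_of_succession(100, 100))
```

With no observations at all the rule gives 1/2, and each further fed morning nudges the probability upward; the philosophical difficulties lie in justifying the prior and the model, not the arithmetic.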
Some argue that what scientists do is not inductive reasoning at all but rather abductive reasoning, or inference to the best explanation. In this account, science is not about generalizing specific instances but rather about hypothesizing explanations for what is observed. As discussed in the previous section, it is not always clear what is meant by the "best explanation". Ockham's razor, which counsels choosing the simplest available explanation, thus plays an important role in some versions of this approach. To return to the example of the chicken, would it be simpler to suppose that the farmer cares about it and will continue taking care of it indefinitely or that the farmer is fattening it up for slaughter? Philosophers have tried to make this heuristic principle more precise in terms of theoretical parsimony or other measures. Yet, although various measures of simplicity have been brought forward as potential candidates, it is generally accepted that there is no such thing as a theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Nicholas Maxwell has argued for some decades that unity rather than simplicity is the key non-empirical factor influencing the choice of theory in science: the persistent preference for unified theories in effect commits science to a metaphysical thesis concerning unity in nature. To render this problematic thesis defensible, Maxwell argues, it needs to be represented as a hierarchy of theses, each thesis becoming less substantial as one goes up the hierarchy.
Observation inseparable from theory
When making observations, scientists look through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see, e.g., the thermometer shows 37.9 degrees C. But, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. For example, before Albert Einstein's general theory of relativity, observers would have likely interpreted an image of the Einstein cross as five different objects in space. In light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. Alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. Observations that cannot be separated from theoretical interpretation are said to be theory-laden.
All observation involves both perception and cognition. That is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations are affected by one's underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. In this sense, it can be argued that all observation is theory-laden.
The purpose of science
Should science aim to determine ultimate truth, or are there questions that science cannot answer? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, scientific anti-realists argue that science does not aim (or at least does not succeed) at truth, especially truth about unobservables like electrons or other universes. Instrumentalists argue that scientific theories should only be evaluated on whether they are useful. In their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology.
Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of current theories. Antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. Antirealists attempt to explain the success of scientific theories without reference to truth. Some antirealists claim that scientific theories aim at being accurate only about observable objects and argue that their success is primarily judged by that criterion.
Real patterns
The notion of real patterns has been propounded, notably by philosopher Daniel C. Dennett, as an intermediate position between strong realism and eliminative materialism. This concept delves into the investigation of patterns observed in scientific phenomena to ascertain whether they signify underlying truths or are mere constructs of human interpretation. Dennett provides a unique ontological account concerning real patterns, examining the extent to which these recognized patterns have predictive utility and allow for efficient compression of information.
The discourse on real patterns extends beyond philosophical circles, finding relevance in various scientific domains. For example, in biology, inquiries into real patterns seek to elucidate the nature of biological explanations, exploring how recognized patterns contribute to a comprehensive understanding of biological phenomena. Similarly, in chemistry, debates around the reality of chemical bonds as real patterns continue.
Evaluation of real patterns also holds significance in broader scientific inquiries. Researchers, like Tyler Millhouse, propose criteria for evaluating the realness of a pattern, particularly in the context of universal patterns and the human propensity to perceive patterns, even where there might be none. This evaluation is pivotal in advancing research in diverse fields, from climate change to machine learning, where recognition and validation of real patterns in scientific models play a crucial role.
Values and science
Values intersect with science in different ways. There are epistemic values that mainly guide scientific research. The scientific enterprise is embedded in particular cultures and values through its individual practitioners. Values also emerge from science, both as product and as process, and can be distributed among several cultures in society. Through public participation, science can also serve as a mediator between the standards and policies of society and the individuals who take part in it.
If it is unclear what counts as science, how the process of confirming theories works, and what the purpose of science is, there is considerable scope for values and other social influences to shape science. Indeed, values can play a role ranging from determining which research gets funded to influencing which theories achieve scientific consensus. For example, in the 19th century, cultural values held by scientists about race shaped research on evolution, and values concerning social class influenced debates on phrenology (considered scientific at the time). Feminist philosophers of science, sociologists of science, and others explore how social values affect science.
History
Pre-modern
The origins of philosophy of science trace back to Plato and Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also analyzed reasoning by analogy. The eleventh century Arab polymath Ibn al-Haytham (known in Latin as Alhazen) conducted his research in optics by way of controlled experimental testing and applied geometry, especially in his investigations into the images resulting from the reflection and refraction of light. Roger Bacon (1214–1294), an English thinker and experimenter heavily influenced by al-Haytham, is recognized by many to be the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy is considered to have been 400 years ahead of its time.
Modern
Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620), an allusion to Aristotle's Organon, Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories. In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses ... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction." This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and gave a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In the 19th century, Auguste Comte made a major contribution to the theory of science. The 19th century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation.
Logical positivism
Instrumentalism became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades. Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s.
Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell's logicism they sought reduction of mathematics to logic. They also embraced Russell's logical atomism, Ernst Mach's phenomenalism—whereby the mind knows only actual or potential sensory experience, which is the content of all sciences, whether physics or psychology—and Percy Bridgman's operationalism. Thereby, only the verifiable was scientific and cognitively meaningful, whereas the unverifiable was unscientific, cognitively meaningless "pseudostatements"—metaphysical, emotive, or such—not worthy of further review by philosophers, who were newly tasked to organize knowledge rather than develop new knowledge.
Logical positivism is commonly portrayed as taking the extreme position that scientific language should never refer to anything unobservable—even the seemingly core notions of causality, mechanism, and principles—but that is an exaggeration. Talk of such unobservables could be allowed as metaphorical—direct observations viewed in the abstract—or at worst metaphysical or emotional. Theoretical laws would be reduced to empirical laws, while theoretical terms would garner meaning from observational terms via correspondence rules. Mathematics in physics would reduce to symbolic logic via logicism, while rational reconstruction would convert ordinary language into standardized equivalents, all networked and united by a logical syntax. A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth.
In the late 1930s, logical positivists fled Germany and Austria for Britain and America. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, and Rudolf Carnap had sought to replace verification with simply confirmation. With the close of World War II in 1945, logical positivism softened into logical empiricism, led largely in America by Carl Hempel, who expounded the covering law model of scientific explanation as a way of identifying the logical form of explanations without any reference to the suspect notion of "causation". The logical positivist movement became a major underpinning of analytic philosophy, and dominated Anglosphere philosophy, including philosophy of science, while influencing the sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly assaulted. Nevertheless, it brought about the establishment of philosophy of science as a distinct subdiscipline of philosophy, with Carl Hempel playing a key role.
Thomas Kuhn
In the 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that the process of observation and evaluation takes place within a paradigm, a logically consistent "portrait" of the world that is consistent with observations made from its framing. A paradigm also encompasses the set of questions and practices that define a scientific discipline. He characterized normal science as the process of observation and "puzzle solving" which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift.
Kuhn denied that it is ever possible to isolate the hypothesis being tested from the influence of the theory in which the observations are grounded, and he argued that it is not possible to evaluate competing paradigms independently. More than one logically consistent construct can paint a usable likeness of the world, but there is no common ground from which to pit two against each other, theory against theory. Each paradigm has its own distinct questions, aims, and interpretations. Neither provides a standard by which the other can be judged, so there is no clear way to measure scientific progress across paradigms.
For Kuhn, the choice of paradigm was sustained by rational processes, but not ultimately determined by them. The choice between paradigms involves setting two or more "portraits" against the world and deciding which likeness is most promising. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism. According to Kuhn, a paradigm shift occurs when a significant number of observational anomalies arise in the old paradigm and a new paradigm makes sense of them. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm.
Current approaches
Naturalism's axiomatic assumptions
Coherentism
In contrast to the view that science rests on foundational assumptions, coherentism asserts that statements are justified by being a part of a coherent system. Or, rather, individual statements cannot be validated on their own: only coherent systems can be justified. A prediction of a transit of Venus is justified by its being coherent with broader beliefs about celestial mechanics and earlier observations. As explained above, observation is a cognitive act. That is, it relies on a pre-existing understanding, a systematic set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. If the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system.
In fact, according to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in the solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that the solar system comprises only seven planets. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else.
One consequence of the Duhem–Quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. Karl Popper accepted this thesis, leading him to reject naïve falsification. Instead, he favored a "survival of the fittest" view in which the most falsifiable scientific theories are to be preferred.
Anything goes methodology
Paul Feyerabend (1924–1994) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. He argued that "the only principle that does not inhibit progress is: anything goes".
Feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology. Because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. He saw the exclusive dominance of science as a means of directing society as authoritarian and ungrounded. Promulgation of this epistemological anarchism earned Feyerabend the title of "the worst enemy of science" from his detractors.
Sociology of scientific knowledge methodology
According to Kuhn, science is an inherently communal activity which can only be done as part of a community. For him, the fundamental difference between science and other disciplines is the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. For them, social factors play an important and direct role in scientific method, but they do not serve to differentiate science from other disciplines. On this account, science is socially constructed, though this does not necessarily imply the more radical notion that reality itself is a social construct.
Michel Foucault sought to analyze and uncover how disciplines within the social sciences developed and adopted the methodologies used by their practitioners. In works like The Archaeology of Knowledge, he used the term human sciences. The human sciences do not comprise mainstream academic disciplines; they are rather an interdisciplinary space for the reflection on man who is the subject of more mainstream scientific knowledge, taken now as an object, sitting between these more conventional areas, and of course associating with disciplines such as anthropology, psychology, sociology, and even history. Rejecting the realist view of scientific inquiry, Foucault argued throughout his work that scientific discourse is not simply an objective study of phenomena, as both natural and social scientists like to believe, but is rather the product of systems of power relations struggling to construct scientific disciplines and knowledge within given societies. With the advances of scientific disciplines, such as psychology and anthropology, the need to separate, categorize, normalize and institutionalize populations into constructed social identities became a staple of the sciences. Constructions of what were considered "normal" and "abnormal" stigmatized and ostracized groups of people, like the mentally ill and sexual and gender minorities.
However, some (such as Quine) do maintain that scientific reality is a social construct:
Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer ... For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.
The public backlash of scientists against such views, particularly in the 1990s, became known as the science wars.
A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists – including David Bloor, Harry Collins, Bruno Latour, Ian Hacking and Anselm Strauss. Concepts and methods (such as rational choice, social choice or game theory) from economics have also been applied for understanding the efficiency of scientific communities in the production of knowledge. This interdisciplinary field has come to be known as science and technology studies.
Here the approach to the philosophy of science is to study how scientific communities actually operate.
Continental philosophy
Philosophers in the continental philosophical tradition are not traditionally categorized as philosophers of science. However, they have much to say about science, some of which has anticipated themes in the analytical tradition. For example, in The Genealogy of Morals (1887) Friedrich Nietzsche advanced the thesis that the motive for the search for truth in sciences is a kind of ascetic ideal.
In general, continental philosophy views science from a world-historical perspective. Philosophers such as Pierre Duhem (1861–1916) and Gaston Bachelard (1884–1962) wrote their works with this world-historical approach to science, predating Kuhn's 1962 work by a generation or more. All of these approaches involve a historical and sociological turn to science, with a priority on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as emphasised in the analytic tradition. One can trace this continental strand of thought through the phenomenology of Edmund Husserl (1859–1938), the late works of Merleau-Ponty (Nature: Course Notes from the Collège de France, 1956–1960), and the hermeneutics of Martin Heidegger (1889–1976).
The largest effect on the continental tradition with respect to science came from Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason, the continental tradition has remained much more skeptical of the importance of science in human life and in philosophical inquiry. Nonetheless, there have been a number of important works: especially those of a Kuhnian precursor, Alexandre Koyré (1892–1964). Another important development was that of Michel Foucault's analysis of historical and scientific thought in The Order of Things (1966) and his study of power and corruption within the "science" of madness. Post-Heideggerian authors contributing to continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., Truth and Justification, 1998), Carl Friedrich von Weizsäcker (The Unity of Nature, 1980; German original 1971), and Wolfgang Stegmüller (Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, 1973–1986).
Other topics
Reductionism
Analysis involves breaking an observation or theory down into simpler concepts in order to understand it. Reductionism can refer to one of several philosophical positions related to this approach. One type of reductionism suggests that phenomena are amenable to scientific explanation at lower levels of analysis and inquiry. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. Daniel Dennett distinguishes legitimate reductionism from what he calls greedy reductionism, which denies real complexities and leaps too quickly to sweeping generalizations.
Social accountability
A broad issue affecting the neutrality of science concerns the areas that science chooses to explore, that is, what parts of the world and of humankind are studied by science. Philip Kitcher, in his Science, Truth, and Democracy, argues that scientific studies that attempt to show one segment of the population as being less intelligent, less successful, or emotionally backward compared to others have a political feedback effect which further excludes such groups from access to science. Such studies thereby undermine the broad consensus required for good science by excluding certain people, and so prove themselves in the end to be unscientific.
Philosophy of particular sciences
In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied by investigating foundational problems in particular sciences. They also examine the implications of particular sciences for broader philosophical questions. The late 20th and early 21st centuries have seen a rise in the number of practitioners of the philosophy of particular sciences.
Philosophy of statistics
The problem of induction discussed above is seen in another form in debates over the foundations of statistics. The standard approach to statistical hypothesis testing avoids claims about whether evidence supports a hypothesis or makes it more probable. Instead, the typical test yields a p-value: the probability, assuming that the hypothesis being tested (the null hypothesis) is true, of obtaining evidence at least as extreme as that actually observed. If the p-value is too low, the hypothesis is rejected, in a way analogous to falsification. In contrast, Bayesian inference seeks to assign probabilities to hypotheses. Related topics in philosophy of statistics include probability interpretations, overfitting, and the difference between correlation and causation.
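The contrast can be made concrete with a toy coin-flipping example; the counts and candidate biases below are invented for illustration. The frequentist side computes a two-sided p-value under the null hypothesis of a fair coin, while the Bayesian side assigns posterior probabilities to two candidate hypotheses about the coin's bias:

```python
from math import comb

# Toy data: 60 heads in 100 flips of a possibly biased coin.
# Null hypothesis H0: the coin is fair (p = 0.5).
n, k = 100, 60

def binom_pmf(n, k, p):
    # Probability of exactly k heads in n flips with heads-probability p.
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Frequentist: two-sided p-value = probability, under H0, of an outcome
# at least as far from the expected 50 heads as the one observed.
p_value = sum(binom_pmf(n, i, 0.5) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))

# Bayesian: posterior over two candidate biases, from a uniform prior.
prior = {0.5: 0.5, 0.6: 0.5}
likelihood = {p: binom_pmf(n, k, p) for p in prior}
evidence = sum(prior[p] * likelihood[p] for p in prior)
posterior = {p: prior[p] * likelihood[p] / evidence for p in prior}

print(f"p-value under H0: {p_value:.4f}")
print(f"posterior P(fair coin | data): {posterior[0.5]:.4f}")
```

The two outputs answer different questions: the p-value is a probability about the data given a single hypothesis, whereas the posterior is a probability about the hypotheses given the data.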
Philosophy of mathematics
Philosophy of mathematics is concerned with the philosophical foundations and implications of mathematics. The central questions are whether numbers, triangles, and other mathematical entities exist independently of the human mind and what the nature of mathematical propositions is. Is asking whether "1 + 1 = 2" is true fundamentally different from asking whether a ball is red? Was calculus invented or discovered? A related question is whether learning mathematics requires experience or reason alone. What does it mean to prove a mathematical theorem and how does one know whether a mathematical proof is correct? Philosophers of mathematics also aim to clarify the relationships between mathematics and logic, human capabilities such as intuition, and the material universe.
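The question of what it means to prove a theorem can itself be made concrete with a proof assistant. As a minimal sketch in Lean 4 (one formalist answer to the question, not the only philosophical position), the claim that 1 + 1 = 2 can be stated and machine-checked:

```lean
-- A machine-checked proof that 1 + 1 = 2 (Lean 4).
-- `rfl` (reflexivity) closes the goal because `1 + 1` and `2`
-- reduce to the same numeral under the definition of addition
-- on the natural numbers.
theorem one_plus_one : 1 + 1 = 2 := rfl
```

On this view, a proof is correct when a mechanical checker can verify every step against the rules of the underlying formal system.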
Philosophy of physics
Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism. Also included are the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time).
Philosophy of chemistry
Philosophy of chemistry is the philosophical study of the methodology and content of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. It includes research on general philosophy of science issues as applied to chemistry. For example, can all chemical phenomena be explained by quantum mechanics or is it not possible to reduce chemistry to physics? For another example, chemists have discussed the philosophy of how theories are confirmed in the context of confirming reaction mechanisms. Determining reaction mechanisms is difficult because they cannot be observed directly. Chemists can use a number of indirect measures as evidence to rule out certain mechanisms, but they are often unsure if the remaining mechanism is correct because there are many other possible mechanisms that they have not tested or even thought of. Philosophers have also sought to clarify the meaning of chemical concepts which do not refer to specific physical entities, such as chemical bonds.
Philosophy of astronomy
The philosophy of astronomy seeks to understand and analyze the methodologies and technologies used by experts in the discipline, focusing on how observations of space and astrophysical phenomena can be studied. Because astronomers rely on theories and formulas from other scientific disciplines, such as chemistry and physics, a main point of inquiry is how knowledge about the cosmos can be obtained, how facts about space can be scientifically analyzed and reconciled with other established knowledge, and how our planet and Solar System figure in our view of our place in the universe.
Philosophy of Earth sciences
The philosophy of Earth science is concerned with how humans obtain and verify knowledge of the workings of the Earth system, including the atmosphere, hydrosphere, and geosphere (solid earth). Earth scientists' ways of knowing and habits of mind share important commonalities with other sciences, but also have distinctive attributes that emerge from the complex, heterogeneous, unique, long-lived, and non-manipulatable nature of the Earth system.
Philosophy of biology
Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, Leibniz and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science began to pay increasing attention to developments in biology, from the rise of the modern synthesis in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering. Other key ideas, such as the reduction of all life processes to biochemical reactions and the incorporation of psychology into a broader neuroscience, are also addressed. Research in current philosophy of biology includes investigation of the foundations of evolutionary theory (such as Peter Godfrey-Smith's work) and of the role of viruses as persistent symbionts in host genomes; on the latter view, the evolution of genetic content is seen as the result of competent genome-editing agents rather than, as in earlier accounts, of the accumulation of replication errors (mutations).
Philosophy of medicine
Beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology/metaphysics of medicine. Within the epistemology of medicine, evidence-based medicine (EBM) (or evidence-based practice (EBP)) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. Related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include Cartesian dualism, the monogenetic conception of disease and the conceptualization of 'placebos' and 'placebo effects'. There is also a growing interest in the metaphysics of medicine, particularly the idea of causation. Philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. Causation is of interest because the purpose of much medical research is to establish causal relationships, e.g. what causes disease, or what causes people to get better.
Philosophy of psychiatry
Philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. The philosopher of science and medicine Dominic Murphy identifies three areas of exploration in the philosophy of psychiatry. The first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. The second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. The third area concerns the links and discontinuities between the philosophy of mind and psychopathology.
Philosophy of psychology
Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation. For example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? If the latter, an important question is how the internal experiences of others can be measured. Self-reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self-deception or selective memory may affect their responses. Then even in the case of accurate self-reports, how can responses be compared across individuals? Even if two individuals respond with the same answer on a Likert scale, they may be experiencing very different things.
Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. For example, are humans rational creatures? Is there any sense in which they have free will, and how does that relate to the experience of making choices? Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology.
Philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. In particular, neurophilosophy has just recently become its own field with the works of Paul Churchland and Patricia Churchland. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism.
Philosophy of social science
The philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. Philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency.
The French philosopher Auguste Comte (1798–1857) established the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. The first three volumes of the Course dealt chiefly with the natural sciences already in existence (geoscience, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". For Comte, the natural sciences necessarily had to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive.
Comte's positivism established the initial philosophical foundations for formal sociology and social research. Nonetheless, Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology.
The positivist perspective has been associated with 'scientism': the view that the methods of the natural sciences may be applied to all areas of investigation, be it philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since lost popular support. Today, practitioners of both social and physical sciences instead take into account the distorting effect of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and by new philosophical movements such as critical realism and neopragmatism. The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality as meaning that scientific thinking becomes something akin to ideology itself.
Philosophy of technology
The philosophy of technology is a sub-field of philosophy that studies the nature of technology. Specific research topics include study of the role of tacit and explicit knowledge in creating and using technology, the nature of functions in technological artifacts, the role of values in design, and ethics related to technology. Technology and engineering can both involve the application of scientific knowledge. The philosophy of engineering is an emerging sub-field of the broader philosophy of technology.
See also
Criticism of science
History and philosophy of science
List of philosophers of science
Metaphysical naturalism
Metascience
Objectivity (philosophy)
Philosophy of engineering
Science policy
References
Sources
Further reading
Bovens, L. and Hartmann, S. (2003), Bayesian Epistemology, Oxford University Press, Oxford.
Gutting, Gary (2004), Continental Philosophy of Science, Blackwell Publishers, Cambridge, MA.
Godfrey-Smith, Peter (2003), Theory and Reality: An Introduction to the Philosophy of Science, University of Chicago Press.
Losee, J. (1998), A Historical Introduction to the Philosophy of Science, Oxford University Press, Oxford.
Papineau, David (2005), "Problems of the Philosophy of Science", in The Oxford Companion to Philosophy, Oxford University Press, Oxford.
Popper, Karl (1963), Conjectures and Refutations: The Growth of Scientific Knowledge.
Ziman, John (2000). Real Science: what it is, and what it means. Cambridge: Cambridge University Press.
External links
Social philosophy
Social philosophy examines questions about the foundations of social institutions, behavior, power structures, and interpretations of society in terms of ethical values rather than empirical relations. Social philosophers emphasize understanding the social contexts for political, legal, moral and cultural questions, and the development of novel theoretical frameworks, from social ontology to care ethics to cosmopolitan theories of democracy, natural law, human rights, gender equity and global justice.
Subdisciplines
There is often a considerable overlap between the questions addressed by social philosophy and ethics or value theory. Other forms of social philosophy include political philosophy and jurisprudence, which are largely concerned with the societies of state and government and their functioning.
Social philosophy, ethics, and political philosophy all share intimate connections with other disciplines in the social sciences and the humanities. In turn, the social sciences themselves are of focal interest to the philosophy of social science.
Social philosophy is broadly interdisciplinary, examining phenomenology, epistemology, and the philosophy of language from a sociological perspective, which yields phenomenological sociology, social epistemology, and the sociology of language respectively.
Relevant issues
Some social philosophy is concerned with identity, and defining strata that categorize society, for example race and gender. Other social philosophy examines agency and free will, and whether people socialized in a particular way are accountable for their actions.
It also looks at the concepts of property, rights, and authority, examining actions in terms of both ethical values and their wider social effect; it applies situational ethics to broader political concepts.
Sociology of language considers communication in the context of social relations, for example speech acts or performative utterances are social actions in themselves.
Other relevant issues considered by social philosophy are:
The will to power
Modernism and postmodernism
Cultural criticism
Social philosophies
Communitarianism
Conflict theory
Conservatism
Critical theory
Individualism
Positivism
Progressivism
Structural functionalism
Social constructionism
Symbolic interactionism
Social philosophers
A list of philosophers who have concerned themselves, though most of them not exclusively, with social philosophy:
Theodor Adorno
Giorgio Agamben
Hannah Arendt
Alain Badiou
Mikhail Bakunin
Jean Baudrillard
Walter Benjamin
Jeremy Bentham
Edmund Burke
Judith Butler
Thomas Carlyle
Chanakya
Cornelius Castoriadis
Noam Chomsky
Confucius
Simone de Beauvoir
Guy Debord
Émile Durkheim
Terry Eagleton
Friedrich Engels
Julius Evola
Michel Foucault
Sigmund Freud
Erich Fromm
Giovanni Gentile
Henry George
Erving Goffman
Jürgen Habermas
G. W. F. Hegel
Martin Heidegger
Thomas Hobbes
Max Horkheimer
Ivan Illich
Carl Jung
Ibn Khaldun
Peter Kropotkin
Jacques Lacan
R. D. Laing
Henri Lefebvre
Emmanuel Levinas
John Locke
Georg Lukács
Herbert Marcuse
Karl Marx
Marshall McLuhan
John Stuart Mill
Huey P. Newton
Friedrich Nietzsche
Michael Oakeshott
Antonie Pannekoek
Plato
Karl Popper
Pierre-Joseph Proudhon
John Rawls
Wilhelm Röpke
Jean-Jacques Rousseau
John Ruskin
Bertrand Russell
Jean-Paul Sartre
Alfred Schmidt
Arthur Schopenhauer
Roger Scruton
Socrates
Pitirim A. Sorokin
Thomas Sowell
Herbert Spencer
Oswald Spengler
Charles Taylor
Alexis de Tocqueville
Max Weber
John Zerzan
Slavoj Žižek
See also
Outline of sociology
Social simulation
Social theory
Sociological theory
Sociology
Critical theory
Feminist theory
Critical race theory
References
Philosophy of logic
Philosophy of logic is the area of philosophy that studies the scope and nature of logic. It investigates the philosophical problems raised by logic, such as the presuppositions often implicitly at work in theories of logic and in their application. This involves questions about how logic is to be defined and how different logical systems are connected to each other. It includes the study of the nature of the fundamental concepts used by logic and the relation of logic to other disciplines. According to a common characterisation, philosophical logic is the part of the philosophy of logic that studies the application of logical methods to philosophical problems, often in the form of extended logical systems like modal logic. But other theorists draw the distinction between the philosophy of logic and philosophical logic differently or not at all. Metalogic is closely related to the philosophy of logic as the discipline investigating the properties of formal logical systems, like consistency and completeness.
Various characterizations of the nature of logic are found in the academic literature. Logic is often seen as the study of the laws of thought, correct reasoning, valid inference, or logical truth. It is a formal science that investigates how conclusions follow from premises in a topic-neutral manner, i.e. independent of the specific subject matter discussed. One form of inquiring into the nature of logic focuses on the commonalities between various logical formal systems and on how they differ from non-logical formal systems. Important considerations in this respect are whether the formal system in question is compatible with fundamental logical intuitions and whether it is complete. Different conceptions of logic can be distinguished according to whether they define logic as the study of valid inference or logical truth. A further distinction among conceptions of logic is based on whether the criteria of valid inference and logical truth are specified in terms of syntax or semantics.
Different types of logic are often distinguished. Logic is usually understood as formal logic and is treated as such for most of this article. Formal logic is only interested in the form of arguments, expressed in a formal language, and focuses on deductive inferences. Informal logic, on the other hand, addresses a much wider range of arguments found also in natural language, which include non-deductive arguments. The correctness of arguments may depend on other factors than their form, like their content or their context. Various logical formal systems or logics have been developed in the 20th century and it is the task of the philosophy of logic to classify them, to show how they are related to each other, and to address the problem of how there can be a manifold of logics in contrast to one universally true logic. These logics can be divided into classical logic, usually identified with first-order logic, extended logics, and deviant logics. Extended logics accept the basic formalism and the axioms of classical logic but extend them with new logical vocabulary. Deviant logics, on the other hand, reject certain core assumptions of classical logic and are therefore incompatible with it.
The philosophy of logic also investigates the nature and philosophical implications of the fundamental concepts of logic. This includes the problem of truth, especially of logical truth, which may be defined as truth depending only on the meanings of the logical terms used. Another question concerns the nature of premises and conclusions, i.e. whether to understand them as thoughts, propositions, or sentences, and how they are composed of simpler constituents. Together, premises and a conclusion constitute an inference, which can be either deductive or ampliative depending on whether it is necessarily truth-preserving or introduces new and possibly false information. A central concern in logic is whether a deductive inference is valid or not. Validity is often defined in terms of necessity, i.e. an inference is valid if and only if it is impossible for the premises to be true and the conclusion to be false. Incorrect inferences and arguments, on the other hand, fail to support their conclusion. They can be categorized as formal or informal fallacies depending on whether they belong to formal or informal logic. Logic has mostly been concerned with definitory rules, i.e. with the question of which rules of inference determine whether an argument is valid or not. A separate topic of inquiry concerns the strategic rules of logic: the rules governing how to reach an intended conclusion given a certain set of premises, i.e. which inferences need to be drawn to arrive there.
The metaphysics of logic is concerned with the metaphysical status of the laws and objects of logic. An important dispute in this field is between realists, who hold that logic is based on facts that have mind-independent existence, and anti-realists like conventionalists, who hold that the laws of logic are based on the conventions governing the use of language. Logic is closely related to various disciplines. A central issue in regard to ontology concerns the ontological commitments associated with the use of logic, for example, with singular terms and existential quantifiers. An important question in mathematics is whether all mathematical truths can be grounded in the axioms of logic together with set theory. Other related fields include computer science and psychology.
Definition and related disciplines
Philosophy of logic is the area of philosophy that studies the nature of logic. Like many other disciplines, logic involves various philosophical presuppositions which are addressed by the philosophy of logic. The philosophy of logic can be understood in analogy to other discipline-specific branches of philosophy: just like the philosophy of science investigates philosophical problems raised by science, so the philosophy of logic investigates philosophical problems raised by logic.
An important question studied by the philosophy of logic is how logic is to be defined, for example, in terms of valid inference or of logical truth. This includes the issue of how to distinguish logical from non-logical formal systems. It is especially relevant for clarifying the relation between the various proposed logical systems, both classical and non-classical, and for evaluating whether all of these systems actually qualify as logical systems. The philosophy of logic also investigates how to understand the most fundamental concepts of logic, like truth, premises, conclusions, inference, argument, and validity. It tries to clarify the relation between logic and other fields, such as ontology, mathematics, and psychology.
The philosophy of logic is closely related to philosophical logic but there is no general agreement about how these disciplines stand to each other. Some theorists use these two terms for the same discipline while others see them as distinct disciplines. According to the latter view, philosophical logic contrasts with the philosophy of logic in that it is usually seen as the application of logical methods to philosophical problems, often by developing deviant or extended logics. In this sense, philosophical logic is one area of inquiry within the philosophy of logic, i.e. a part of the general study of philosophical problems raised by logic. But this form of distinction is not universally accepted and some authors have proposed different characterizations. The intimate connection between logic and philosophy is also reflected in the fact that many famous logicians were also philosophers. The philosophy of logic is closely related to metalogic but not identical to it. Metalogic investigates the properties of formal logical systems, like whether a given logical system is consistent or complete. It usually includes the study of the semantics and syntax of formal languages and formal systems.
Nature of logic
The term "logic" is based on the Greek word "logos", which is associated with various different senses, such as reason, discourse, or language. There are many disagreements about what logic is and how it should be defined. Various characteristics are generally ascribed to logic, like that it studies the relation between premises and conclusions and that it does so in a topic-neutral manner. An important task of the philosophy of logic is to investigate the criteria according to which a formal system should count as logic. Different conceptions of logic understand it as either based on valid inference or logical truth. The criteria of valid inference and logical truth can themselves be specified in different ways: based on syntactic or semantic considerations.
General characteristics
Traditionally, logic is often understood as the discipline investigating laws of thought. One problem for this characterization is that logic is not an empirical discipline studying the regularities found in actual human thinking: this subject belongs to psychology. This is better captured by another characterization sometimes found in the literature: that logic concerns the laws of correct thinking or, more specifically, correct reasoning. This reflects the practical significance of logic as a tool to improve one's reasoning by drawing good inferences and becoming aware of possible mistakes. Logic has also been defined as the science of valid argumentation. This mirrors the definition in terms of reasoning since argumentation may be understood as an outward expression of inward reasoning.
Logic is often seen as a formal foundation of all knowledge. As a formal science, it stands in contrast to the material or empirical sciences, like physics or biology, since it is mainly concerned with entailment relations between propositions but not with whether these propositions actually are true. For example, deducing from the proposition "all moons are made of cheese" that "Earth's moon is made of cheese" is a valid inference. The error in this example is due to a false premise belonging to empirical astronomy.
A central feature of logic is that it is topic-neutral. This means that it is concerned with the validity of arguments independent of the subject matter of these arguments. In this sense, regular sciences are concerned with correct reasoning within a specific area of inquiry, for example, concerning material bodies for classical mechanics or living beings for biology, while logic is concerned with correct reasoning in general as applicable to all these disciplines. One problem with this characterization is that it is not always clear how the terms "topic-neutral" and "subject matter" are to be understood in this context. For example, it could be argued that first-order logic has individuals as its subject matter, due to its usage of singular terms and quantifiers, and is therefore not completely topic-neutral. A closely related characterization holds that logic is concerned with the form of arguments rather than their contents. On this view, the regular sciences could be seen as seeking true premises while logic studies how to draw conclusions from these or any premises. But this characterization also has its problems due to difficulties in distinguishing between form and content. For example, since temporal logic talks about time, this would lead to the implausible conclusion that time belongs to the form and not to the content of arguments. These difficulties have led some theorists to doubt that logic has a clearly specifiable scope or an essential character.
There is wide agreement that logic is a normative discipline. This means that the laws it investigates determine how people should think and that violating these laws is irrational. But there have been individual challenges to this idea. For example, Gilbert Harman claims that deductive logic investigates relations between propositions rather than correct reasoning. He argues that these relations do not directly determine how people should change their beliefs.
Logical and non-logical formal systems
One approach to determining the nature of logic is to study the different formal systems, referred to as "logics", in order to determine what is essential to all of them, i.e. what makes them logics. Formal systems of logic are systematizations of logical truths based on certain principles called axioms. As for formal logic, a central question in the philosophy of logic is what makes a formal system into a system of logic rather than a collection of mere marks together with rules for how they are to be manipulated. It has been argued that one central requirement is that the marks and how they are manipulated can be interpreted in such a way as to reflect the basic intuitions about valid arguments. This would mean, for example, that there are truth values and that the behavior of some marks corresponds to that of logical operators such as negation or conjunction. Based on this characterization, some theorists hold that certain formal systems, such as three-valued logic or fuzzy logic, stray too far from the common concept of logic to be considered logical systems. Such a position may be defended based on the idea that, by rejecting some basic logical assumptions, they involve too radical a departure from fundamental logical intuitions to be considered logics. It has been suggested that rejecting the principle of the bivalence of truth, i.e. that propositions are either true or false, constitutes such a case.
Metalogicians sometimes hold that logical completeness is a necessary requirement of logical systems. A formal system is complete if it is possible to derive from its axioms every theorem belonging to it. This would mean that only formal systems that are complete should be understood as constituting logical systems. One controversial argument for this approach is that incomplete theories cannot be fully formalized, which stands in contrast to the formal character of logic. On this view, first-order logic constitutes a logical system. But this would also mean that higher-order "logics" are not logics strictly speaking, due to their incompleteness.
Conceptions based on valid inference or logical truth
Logic is often defined as the study of valid or correct inferences. On this conception, it is the task of logic to provide a general account of the difference between correct and incorrect inferences. An inference is a set of premises together with a conclusion. An inference is valid if the conclusion follows from the premises, i.e. if the truth of the premises ensures the truth of the conclusion. Another way to define logic is as the study of logical truth. Logical truth is a special form of truth since it does not depend on how things are, i.e. on which possible world is actual. Instead, a logically true proposition is true in all possible worlds. Its truth is based solely on the meanings of the terms it contains, independent of any empirical matters of fact. There is an important link between these two conceptions: an inference from the premises to a conclusion is valid if the material conditional from the premises to the conclusion is logically true. For example, the inference from "roses are red and grass is green" to "roses are red" is valid since the material conditional "if roses are red and grass is green, then roses are red" is logically true.
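This link between valid inference and logical truth can be illustrated with a brute-force truth-table check. The following is a minimal sketch in Python in which propositions are modeled as boolean variables; the helper name `is_tautology` and the encoding of the example are choices of this sketch, not standard terminology:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check whether a boolean formula is true under every assignment."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

# "if (roses are red and grass is green), then roses are red":
# a material conditional "if p then q" is false only when p is true and q false.
conditional = lambda roses_red, grass_green: (not (roses_red and grass_green)) or roses_red

print(is_tautology(conditional, 2))  # True: the conditional is logically true
```

Since the material conditional comes out true under every assignment, the corresponding inference from the conjunction to its first conjunct is valid.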
Conceptions based on syntax or semantics
Whether logic is defined as the study of valid inference or of logical truth leaves open the exact criteria for these notions. There are two important ways of specifying these criteria: the syntactic and the semantic approach, sometimes also called the deductive-theoretic and the model-theoretic approach. In this sense, a logic can be defined as a formal language together with either a deductive-theoretic or a model-theoretic account of logical consequence. The syntactic approach tries to capture these features based only on syntactic or formal features of the premises and the conclusion. This is usually achieved by expressing them through a formal symbolism to make these features explicit and independent of the ambiguities and irregularities of natural language. In this formalism, the validity of arguments only depends on the structure of the argument, specifically on the logical constants used in the premises and the conclusion. On this view, a proposition is a logical consequence of a group of premises if and only if the proposition is deducible from these premises. This deduction happens by using rules of inference. Because validity depends only on form, for a valid argument it is not possible to produce true premises with a false conclusion by substituting their constituents with elements belonging to similar categories while keeping the logical constants in place. In the case of logical truths, such a substitution cannot make them false. Different sets of rules of inference constitute different deductive systems, for example, the ones associated with classical logic or with intuitionistic logic. So whether the proposition is a logical consequence depends not just on the premises but also on the deductive system used.
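The idea that logical consequence is deducibility via rules of inference can be sketched by forward chaining with a single rule, modus ponens. The `("if", p, q)` encoding of conditionals and the `deducible` helper below are invented for this illustration, not a standard proof system:

```python
def deducible(premises, goal, max_steps=10):
    """A sketch of syntactic consequence: a proposition follows from the
    premises if it can be derived by repeatedly applying a rule of
    inference -- here only modus ponens on ("if", p, q) formulas."""
    derived = set(premises)
    for _ in range(max_steps):
        new = set()
        for f in derived:
            if isinstance(f, tuple) and f[0] == "if" and f[1] in derived:
                new.add(f[2])  # modus ponens: from p and "if p then q", derive q
        if new <= derived:
            break
        derived |= new
    return goal in derived

premises = {"it rains", ("if", "it rains", "streets are wet")}
print(deducible(premises, "streets are wet"))  # True
print(deducible(premises, "it snows"))         # False
```

Swapping in a different set of rules would yield a different deductive system, which is why consequence in this sense is relative to the system used.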
A problem with the syntactic approach is that the use of formal language is central to it. But the problem of logic, i.e. of valid inference and logical truth, is found not just in formal languages but also in natural languages. However, even within the scope of formal languages, the problem of truth poses a variety of problems, which often call for a richer meta-language to be properly addressed. This threatens the syntactic approach even when restricted to formal languages. Another difficulty is posed by the fact that it is often not clear how to distinguish formal from non-formal features, i.e. logical from non-logical symbols. This distinction lies at the very heart of the syntactic approach due to its role in the definition of valid inference or logical truth.
The semantic approach, on the other hand, focuses on the relation between language and reality. In logic, the study of this relationship is often termed model theory. For this reason, the semantic approach is also referred to as the model-theoretic conception of logic. It was initially conceived by Alfred Tarski and characterizes logical truth not in relation to the logical constants used in sentences, but based on set-theoretic structures that are used to interpret these sentences. The idea behind this approach is that sentences are not true or false by themselves but only true or false in relation to an interpretation. Interpretations are usually understood in set-theoretic terms as functions between symbols used in the sentence and a domain of objects. Such a function assigns individual constants to individual elements of the domain and predicates to tuples of elements of the domain. An interpretation of a sentence (or of a theory comprising various sentences) is called a model of this sentence if the sentence is true according to this interpretation. A sentence is logically true if it is true in every interpretation, i.e. if every interpretation is a model of this sentence. In this case, no matter how the interpretation-function and the domain of objects to which it points are defined, the sentence is always true. If interpretations are understood in terms of possible worlds, logically true sentences can be seen as sentences that are true in every possible world. Expressed in terms of valid arguments: an argument is valid if and only if its conclusion is true in all possible worlds in which its premises are true.
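Tarski's "true in every interpretation" can be approximated over small finite domains, where every possible extension of a one-place predicate can be enumerated. This is only a finite sketch under stated assumptions: genuine logical truth quantifies over all interpretations, including infinite domains, and the sentence encodings below are invented for the example:

```python
from itertools import chain, combinations

def all_extensions(domain):
    """Every possible extension of a one-place predicate: all subsets."""
    return chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))

def logically_true(sentence, max_domain_size=3):
    """True if the sentence holds in every interpretation, checked here
    only up to a small finite domain size."""
    for size in range(1, max_domain_size + 1):
        domain = list(range(size))
        for extension in all_extensions(domain):
            if not sentence(domain, set(extension)):
                return False
    return True

# "Everything is P or not P": true no matter how P is interpreted.
excluded_middle = lambda dom, P: all(x in P or x not in P for x in dom)
# "Something is P": false in interpretations where P is empty.
something_P = lambda dom, P: any(x in P for x in dom)

print(logically_true(excluded_middle))  # True
print(logically_true(something_P))      # False
```

The second sentence has models (interpretations that make it true) but is not logically true, since not every interpretation is a model of it.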
This conception avoids the problems of the syntactic approach associated with the difficulty of distinguishing between logical and non-logical symbols. But it faces other problems of its own. On the one hand, it shares the problem with the syntactic approach of being in need of a meta-language to address the problem of truth. It therefore presupposes a formal language that can be studied from a perspective outside itself. This poses problems for generalizing its insights to the logic of language in general as an all-encompassing medium. On the other hand, it ignores the relationship between language and world, since it defines truth based on the interpretation that takes place only between symbols and set-theoretic objects.
Types of logics
The problem of having to choose between a manifold of rival logical systems is rather recent. For a long time in history, Aristotelian syllogistics was treated as the canon of logic and there were very few substantial improvements to it for over two thousand years until the works of George Boole, Bernard Bolzano, Franz Brentano, Gottlob Frege, and others. These developments were often driven by a need to increase the expressive flexibility of logic and to adapt it to specific areas of usage. A central problem in the philosophy of logic, raised by the contemporary proliferation of logical systems, is to explain how these systems are related to each other. This brings with it the question of why all these formal systems deserve the title "logic". Another question is whether only one of these systems is the right one or how a multiplicity of logical systems is possible instead of just one universal logic. Monism is the thesis that only one logic is correct while pluralism allows different alternative logical systems to be correct for different areas of discourse. It has also been suggested that there may be one universal concept of logic that underlies and unifies all the different logical systems.
Formal and informal
Logic and the philosophy of logic have traditionally focused primarily on formal arguments, i.e. arguments expressed in a formal language. But they also include the study of informal arguments found in natural language. Formal logic is usually seen as the paradigmatic form of logic but various modern developments have emphasized the importance of informal logic for many practical purposes where formal logic alone is unable to solve all issues by itself. Both formal and informal logic aim at evaluating the correctness of arguments. But formal logic restricts the factors it takes into account in order to provide exact criteria for this evaluation. Informal logic tries to take various additional factors into account and is therefore relevant for many arguments outside the scope of formal logic, but does so at the cost of precision and general rules. Arguments that fail this evaluation are called fallacies. Formal fallacies are fallacies within the scope of formal logic whereas informal fallacies belong to informal logic.
Formal logic is concerned with the validity of inferences or arguments based only on their form, i.e. independent of their specific content and the context in which they are used. This usually happens through abstraction by seeing particular arguments as instances of a certain form of argument. Forms of arguments are defined by how their logical constants and variables are related to each other. In this way, different arguments with very different contents may have the same logical form. Whether an argument is valid only depends on its form. An important feature of formal logic is that for a valid argument, the truth of its premises ensures the truth of its conclusion, i.e. it is impossible for the premises to be true and the conclusion to be false.
A serious problem associated with the usage of formal logic for expressing theories from various fields is that these theories have to be translated into a formal language, usually the language of first-order logic. This is necessary since formal logic is only defined for a specific formal language: it is therefore not directly applicable to many arguments expressed differently. Such translations can be challenging since formal languages are often quite restrictive. For example, they frequently lack many of the informal devices found in natural language. One recurrent problem concerns the word "is" in the English language, which has a variety of meanings depending on the context, such as identity, existence, predication, class-inclusion, or location.
Informal logic, on the other hand, has a more concrete orientation in that it tries to evaluate whether a specific instance of an argument is good or bad. This brings with it the need to study not just the general form of the argument in question, but also the contents used as premises of this argument and the context in which this argument is used. This means that the same argument may be both good, when used in one context, and bad, when used in another context. For example, a strawman argument tries to overcome the opponent's position by attributing a weak position to them and then proving this position to be false. In a context where the opponent does not hold this position, the argument is bad, while it may be a good argument against an opponent who actually defends the strawman position. Arguments studied by informal logic are usually expressed in natural language.
Informal logic does not face the need to translate natural language arguments into a formal language in order to be able to evaluate them. This way, it avoids various problems associated with this translation. But this does not solve many of the problems that the usage of natural language brings with it, like ambiguities, vague expressions, or implicitly assuming premises instead of explicitly stating them. Many of the fallacies discussed in informal logic arise directly from these features. This concerns, for example, the fallacies of ambiguity and of presumption.
Classical and non-classical
Within the domain of formal logic, an important distinction is between classical and non-classical logic. The term classical logic refers primarily to propositional logic and first-order logic. It is the dominant logical system accepted and used by most theorists. But the philosophy of logic is also concerned with non-classical or alternative logics. They are sometimes divided into extended logics and deviant logics. Extended logics are extensions of classical logic, i.e. they accept the basic formalism and axioms of classical logic but extend them with new logical vocabulary, like introducing symbols for "possibility" and "necessity" in modal logic or symbols for "sometimes" and "always" in temporal logic. Deviant logics, on the other hand, reject certain core assumptions of classical logic. They use axioms different from classical logic, which are often more limiting concerning which inferences are valid. They are "deviant" in the sense that they are incompatible with classical logic and may be seen as its rivals.
Classical
The term classical logic refers primarily to propositional logic and first-order logic. It is usually treated by philosophers as the paradigmatic form of logic and is used in various fields. It is concerned with a small number of central logical concepts and specifies the role these concepts play in making valid inferences. These core notions include quantifiers, expressing ideas like "all" and "some", and propositional connectives, like "and", "or", and "if-then". Among the non-logical concepts, an important distinction is between singular terms and predicates. Singular terms stand for objects and predicates stand for properties of or relations between these objects. In this respect, first-order logic differs from traditional Aristotelian logic, which lacked predicates corresponding to relations. First-order logic allows quantification only over individuals, in contrast to higher-order logic, which allows quantification also over predicates.
Extended
Extended logics accept the axioms and the core vocabulary of classical logic. This is reflected in the fact that the theorems of classical logic are valid in them. But they go beyond classical logic by including additional new symbols and theorems. The goal of these changes is usually either to apply logical treatment to new areas or to introduce a higher level of abstraction, for example, in the form of quantification applied not just to singular terms but also to predicates or propositions, or through truth predicates. In this sense, deviant logics are usually seen as rivals to classical logic while extended logics are supplements to classical logic. Important examples of extended logics include modal logic and higher-order logic.
The term "modal logic", when understood in its widest sense, refers to a variety of extended logics, such as alethic, deontic, or temporal modal logic. In its narrow sense, it is identical with alethic modal logic. While classical logic is only concerned with what is true or false, alethic modal logic includes new symbols to express what is possibly or necessarily true or false. These symbols take the form of sentential operators. Usually, the symbols ◇ and □ are used to express that the sentence following them is possibly or necessarily true. Modal logics also include various new rules of inference specifying how these new symbols figure in valid arguments. One example is the formula □A → ◇A, i.e. that if something is necessarily true then it is also possibly true. The other forms of modal logic besides alethic modal logic apply the same principles to different fields. In deontic modal logic, the symbols ◇ and □ are used to express which actions are permissible or obligatory; in temporal logic, they express what is the case at some time or at every time; in epistemic logic, they express what is compatible with a person's beliefs or what this person knows.
Various rules of inference have been suggested as the basic axioms of the different modal logics but there is no general agreement on which are the right ones. An influential interpretation of modal operators, due to Saul Kripke, understands them as quantifiers over possible worlds. A possible world is a complete and consistent way how things could have been. On this view, to say that something is necessarily true is to say that it is true in all accessible possible worlds. One problem for this type of characterization is that they seem to be circular since possible worlds are themselves defined in modal terms, i.e. as ways how things could have been.
Even when restricted to alethic modal logic, there are again different types of possibility and necessity that can be meant by these terms. For example, according to physical modality, it is necessary that an object falls if dropped since this is what the laws of nature dictate. But according to logical modality, this is not necessary since the laws of nature might have been different without leading to a logical contradiction.
Higher-order logics extend classical first-order predicate logic by including new forms of quantification. In first-order logic, quantification is restricted to individuals, like in the formula ∃x(Apple(x) ∧ Sweet(x)) (there are some apples that are sweet). Higher-order logics allow quantification not just over individuals but also over predicates, as in ∃Q(Q(mary) ∧ Q(john)) (there are some qualities that Mary and John share). The increased expressive power of higher-order logics is especially relevant for mathematics. For example, an infinite number of axioms is necessary for Peano arithmetic and Zermelo-Fraenkel set theory in first-order logic, while second-order logic only needs a handful of axioms to do the same job. But this increased expressive power comes at certain costs. On the one hand, higher-order theories are incomplete: it is not possible to prove every true sentence based on the axioms of this theory. For theories in first-order logic, on the other hand, this is possible. Another drawback is that higher-order logics seem to be committed to a form of Platonism since they quantify not just over individuals but also over properties and relations.
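Over a finite domain, quantification over predicates can be mimicked by letting a predicate variable range over every subset of the domain, which is how standard second-order semantics treats one-place predicate variables. The domain and the extension of "sweet" below are invented for illustration:

```python
from itertools import chain, combinations

def all_predicates(domain):
    """Standard second-order semantics: a one-place predicate variable
    ranges over every subset of the domain."""
    return chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))

domain = ["mary", "john", "apple"]
sweet = {"apple"}

# First-order: quantify over individuals only ("something is sweet").
exists_sweet = any(x in sweet for x in domain)

# Second-order: quantify over predicates as well
# ("there is some quality that Mary and John share").
share_quality = any("mary" in Q and "john" in Q for Q in all_predicates(domain))

print(exists_sweet, share_quality)  # True True
```

The second-order claim is satisfied here because some subset of the domain contains both Mary and John, which is exactly the set-theoretic commitment the Platonism worry points to.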
Deviant
Deviant logics are forms of logic in that they have the same goal as classical logic: to give an account of which inferences are valid. They differ from classical logic by giving a different account. Intuitionistic logic, for example, rejects the law of excluded middle, which is a valid form of inference in classical logic. This rejection is based on the idea that mathematical truth depends on verification through a proof. The law fails for cases where no such proof is possible, which exist in every sufficiently strong formal system, according to Gödel's incompleteness theorems. Free logic differs from classical logic since it has fewer existential presuppositions: it allows non-denoting expressions, i.e. individual terms that do not refer to objects within the domain. A central motivation for this type of modification is that free logic can be used to analyze discourse with empty singular terms, like in the expression "Santa Claus does not exist". Many-valued logic is a logic that allows for additional truth values besides the values true and false used in classical logic. In this sense, it rejects the principle of the bivalence of truth. In a simple form of three-valued logic, for example, a third truth value is introduced: undefined.
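The strong Kleene tables are one common way to define such a three-valued logic. In this sketch `None` stands in for the third value "undefined"; under these tables the classical law of excluded middle comes out undefined, rather than true, when its component proposition is undefined:

```python
def k_not(a):
    """Strong Kleene negation; None represents 'undefined'."""
    return None if a is None else not a

def k_or(a, b):
    """Strong Kleene disjunction: true if either side is true,
    undefined if neither side is true and one side is undefined."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

# "p or not p" is a classical tautology, but not a three-valued one:
for p in (True, False, None):
    print(p, k_or(p, k_not(p)))
```

This is one concrete sense in which a many-valued logic "rejects" bivalence: some classically valid formulas are no longer true under every assignment.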
Fundamental concepts
Truth
In logic, truth is usually seen as a property of propositions or sentences. It plays a central role in logic since validity is often defined in terms of truth: an inference is valid if and only if it is impossible for its premises to be true and its conclusion to be false. Theories of truth try to characterize the nature of truth. According to correspondence theories, a proposition is true if it corresponds to reality, i.e. if it represents things how they actually are. Coherence theories, on the other hand, identify truth with coherence. On this view, a proposition is true if it is a coherent part of a specified set of propositions, i.e. if these propositions are consistent with each other and provide mutual inferential support for each other. According to pragmatic theories of truth, whether a proposition is true depends on its relation to practice. Some versions claim that a proposition is true if believing it is useful, if it is the ideal result of an endless inquiry, or if it meets the standards of warranted assertibility. Deflationary theories of truth see truth as a rather empty notion that lacks an interesting nature of its own. On this view, to assert that a proposition is true is the same as asserting the proposition by itself. Other important topics in the philosophy of logic concerning truth are the value of truth, the liar paradox, and the principle of bivalence of truth.
Logical truth
Central to logic is the notion of logical truth. Logical truth is often understood in terms of the analytic-synthetic distinction: a proposition is analytically true if its truth only depends on the meanings of the terms composing it. Synthetic propositions, on the other hand, are characterized by the fact that their truth depends on non-logical or empirical factors. This is sometimes expressed by stating that analytical truths are tautologies, whose denial would imply a contradiction, while it is possible for synthetic propositions to be true or false. In this sense, the proposition "all bachelors are unmarried" is analytically true since being unmarried is part of how the term "bachelor" is defined. The proposition "some bachelors are happy", on the other hand, is synthetically true since it depends on empirical factors not included in the meaning of its terms. But whether this distinction is tenable has been put into question. For example, Willard Van Orman Quine has argued that there are no purely analytic truths, i.e. that all propositions are to some extent empirical. But others have explicitly defended the analytic-synthetic distinction against Quine's criticism.
But whether logical truths can be identified with analytical truths is not always accepted. A different approach characterizes logical truths regarding a small subset of the meanings of all terms: the so-called logical constants or syncategoremata. They include propositional connectives, like "and" or "if-then", quantifiers, like "for some" or "for all", and identity. Propositional logic is only concerned with truth in virtue of propositional connectives, while predicate logic also investigates truths based on the usage of quantifiers and identity. Extended logics introduce even more logical constants, like possibility and necessity in modal logic. A sentence is true in virtue of the logical constants alone if all non-logical terms can be freely replaced by other terms of the appropriate type without affecting any change in the truth value of the sentence. For example, the sentence "if it rains, then it rains" is true due to its logical form alone because all such replacements, like substituting the expression "Socrates is wise" for the expression "it rains", also result in true sentences. One problem with this characterization of logic is that it is not always clear how to draw the distinction between logical constants and other symbols. While there is little controversy in the paradigmatic cases, there are various borderline cases in which there seem to be no good criteria for deciding the issue.
Premises and conclusions
There are various discussions about the nature of premises and conclusions. It is widely agreed that they have to be bearers of truth, i.e. that they are either true or false. This is necessary so they can fulfill their logical role. They are traditionally understood as thoughts or propositions, i.e. as mental or abstract objects. This approach has been rejected by various philosophers since it has proven difficult to specify clear identity criteria for these types of entities. An alternative approach holds that only sentences can act as premises and conclusions. Propositions are closely related to sentences since they are the meaning of sentences: sentences express propositions. But this approach faces various problems of its own. One is due to the fact that the meaning of sentences is usually context-dependent. Because of this, it could be the case that the same inference is valid in one context and invalid in another. Another problem consists in the fact that some sentences are ambiguous, i.e. that it sometimes depends on one's interpretation whether an inference is valid or not.
An important aspect both of propositions and of sentences is that they can be either simple or complex. Complex propositions are made up of simple propositions that are linked to each other through propositional connectives. Simple propositions do not have other propositions as their parts. Still, they are usually understood as being constituted by other entities as well: by subpropositional parts like singular terms and predicates. For example, the simple proposition "Mars is red" is made of the singular term "Mars", to which the predicate "red" is applied. In contrast, the proposition "Mars is red and Venus is white" is made up of two propositions connected by the propositional connective "and". In the simplest case, these connectives are truth-functional connectives: the truth value of the complex proposition is a function of the truth values of its constituents. So the proposition "Mars is red and Venus is white" is true because the two propositions constituting it are true. The truth value of simple propositions, on the other hand, depends on their subpropositional parts. This is usually understood in terms of reference: their truth is determined by how their subpropositional parts are related to the world, i.e. to the extra-linguistic objects they refer to. This relation is studied by theories of reference, which try to specify how singular terms refer to objects and how predicates apply to these objects. In the case of singular terms, popular suggestions include that the singular term refers to its object either through a definite description or based on causal relations with it. In the former sense, the name "Aristotle" may be understood as the definite description "the pupil of Plato who taught Alexander". As for predicates, they are often seen as referring either to universals, to concepts, or to classes of objects.
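Truth-functional evaluation can be sketched as a recursive function: the value of a complex proposition is computed from the values of its parts, while simple propositions are looked up in a table of assumed facts. The nested-tuple encoding below is an invented representation for this sketch:

```python
def evaluate(prop, facts):
    """Evaluate a proposition given as a nested tuple. Simple
    propositions are strings looked up in `facts`; complex ones are
    ("and"/"or"/"not", ...) nodes whose truth value is a function of
    the truth values of their parts."""
    if isinstance(prop, str):
        return facts[prop]
    op, *parts = prop
    values = [evaluate(p, facts) for p in parts]
    if op == "and":
        return all(values)
    if op == "or":
        return any(values)
    if op == "not":
        return not values[0]
    raise ValueError(f"unknown connective: {op}")

facts = {"Mars is red": True, "Venus is white": True}
complex_prop = ("and", "Mars is red", "Venus is white")
print(evaluate(complex_prop, facts))  # True: both constituents are true
```

The `facts` table plays the role that reference plays in the text: it settles the truth values of the simple propositions, after which the connectives do the rest truth-functionally.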
Inference and argument
An inference is the process of reasoning from premises to a conclusion. The relation between the premises and the conclusion is called "entailment" or "logical consequence". An argument consists of the premises, the conclusion, and the relation between them. But the terms "inference", "argument", "entailment", and "logical consequence" are often used interchangeably. A complex argument is an argument involving several steps, in which the conclusions of earlier steps figure as the premises of the following steps. Inferences and arguments can be correct or incorrect. This depends on whether the premises actually support the conclusion or not, i.e. on whether the conclusion follows from the premises. For example, it follows from "Kelly is not both at home and at work" and "Kelly is at home" that "Kelly is not at work". But it does not follow that "Kelly is a football fan".
An important distinction among inferences is between deductive and ampliative inferences, also referred to as monotonic and non-monotonic inferences. According to Alfred Tarski, deductive inference has three central features: (1) it is formal, i.e. it depends only on the form of the premises and the conclusion; (2) it is a priori, i.e. no sense experience is needed to determine whether it obtains; (3) it is modal, i.e. that it holds by necessity for the given propositions, independent of any other circumstances. Deductive inferences are necessarily truth-preserving: the conclusion cannot be false if all the premises are true. For this reason, they are unable to introduce new information not already found in the premises and are uninformative in this sense. One problem with characterizing deductive inferences as uninformative is that this seems to suggest that they are useless, i.e. it fails to explain why someone would use or study them. This difficulty can be addressed by distinguishing between depth information and surface information. On this view, deductive logic is uninformative on the level of depth information but may still lead to surprising results on the level of surface information by presenting certain aspects in a new way.
Ampliative inferences, on the other hand, are informative by aiming to provide new information. This happens at the cost of losing the necessarily truth-preserving nature. The most prominent form of ampliative inference is induction. An inductive inference involves particular propositions as premises, which are used to infer either one more particular proposition or a generalization as the conclusion. Deductive inferences are the paradigmatic form of inference and are the main focus of logic. But many inferences drawn in the empirical sciences and in everyday discourse are ampliative inferences.
Validity and fallacies
A central problem in logic is how to distinguish correct or valid arguments from incorrect or invalid ones. The philosophy of logic investigates issues like what it means that an argument is valid. This includes the question of how this type of support is to be understood or of what the criteria are under which a premise supports a conclusion. Some logicians define valid inference or entailment in terms of logical necessity: the premises entail the conclusion if it is impossible for the premises to be true and the conclusion to be false. This can also be expressed by saying that the conjunction of the premises and the negation of the conclusion is logically impossible. This conception brings with it the principle of explosion, i.e. that anything follows from a contradiction. But valid inferences can also be characterized in terms of rules of inference. Rules of inference govern the transition from the premises to the conclusion. On this view, an inference is valid if it is in accordance with an appropriate rule of inference.
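The modal definition of validity can be checked by brute force for propositional arguments: enumerate every truth assignment and look for one that makes all premises true and the conclusion false. The same check exhibits the principle of explosion, since a contradictory premise is true under no assignment. This is a sketch with premises and conclusions encoded as boolean functions:

```python
from itertools import product

def valid(premises, conclusion, num_vars):
    """An argument is valid if no truth assignment makes every premise
    true and the conclusion false."""
    for vals in product([True, False], repeat=num_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# Modus ponens: from "if p then q" and "p", infer "q" -- valid.
print(valid([lambda p, q: (not p) or q, lambda p, q: p], lambda p, q: q, 2))  # True

# Explosion: from the contradiction "p and not p", anything follows,
# because no assignment makes the premise true in the first place.
print(valid([lambda p, q: p and not p], lambda p, q: q, 2))  # True
```

Rule-based (proof-theoretic) accounts of validity would instead check whether the conclusion can be derived by the chosen rules of inference, as sketched earlier for the syntactic approach.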
Closely related to the notion of valid inference is that of confirmation. Valid inferences belong to formal logic and are associated with deductively valid arguments. But many arguments found in the sciences and in everyday discourse support their conclusion without ensuring its truth. They fall within the purview of informal logic and can also be divided into good and bad arguments. In this sense, for example, observations may act as empirical evidence supporting a scientific hypothesis. This is often understood in terms of probability, i.e. that the evidence increases the likelihood that the hypothesis is true.
Of special interest are the so-called fallacies, i.e. incorrect arguments that appear to be correct. They are incorrect because the premises do not support the conclusion in the assumed way. Due to their misleading appearance, they can seduce people into accepting and using them. Often three factors are identified as the sources of the error: form, content, and context. The form of an argument refers to its structure, i.e. which rule of inference it employs. Errors on the level of form involve the use of invalid rules of inference. An argument that is incorrect on the level of content uses false propositions as its premises. The context of an argument refers to the situation in which it is used and the role it is supposed to play. An argument can be fallacious if it fails to play the role intended for it, as in the strawman fallacy, when the arguer attacks an overly weak position not held by the opponent.
An important distinction among fallacies can be drawn based on these sources of error: that between formal and informal fallacies. Formal fallacies pertain to formal logic and involve only errors of form by employing an invalid rule of inference. Denying the antecedent is one type of formal fallacy, for example, "If Othello is a bachelor, then he is male. Othello is not a bachelor. Therefore, Othello is not male". Informal fallacies belong to informal logic and their main source of error is found on the level of content and context. False dilemmas, for example, are based on a false disjunctive premise that oversimplifies reality by excluding viable alternatives, as in "Stacey spoke out against capitalism; therefore, she must be a communist".
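Denying the antecedent can be shown invalid by the same kind of enumeration: a single assignment making the premises true and the conclusion false suffices as a counterexample. The helper below is an illustration invented for this example, not a standard routine:

```python
from itertools import product

def counterexamples():
    """Assignments refuting denying the antecedent:
    from "if p then q" and "not p", infer "not q"."""
    found = []
    for p, q in product([True, False], repeat=2):
        premises_true = ((not p) or q) and (not p)
        conclusion_true = not q
        if premises_true and not conclusion_true:
            found.append((p, q))
    return found

print(counterexamples())  # [(False, True)]
```

Read in terms of the Othello example: Othello can fail to be a bachelor (p false) while still being male (q true), so the premises do not support the conclusion.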
Since logic evaluates arguments as good or bad, it faces the problem of the nature and justification of the norms guiding these evaluations. This is similar to issues found in metaethics about how to justify moral norms. One approach to this issue is to characterize the norms of logic as generalizations of the inferential practices found in natural language or the sciences. This way, justification is inherited from the evaluations of good and bad inferences used in the corresponding field.
Definitory and strategic rules
An important distinction among the rules of logic is that between definitory and strategic rules. Rules of inference are definitory rules: they govern which inferences are valid. And while it has been the main objective of logic to distinguish valid from invalid inferences, there is also a secondary objective often associated with logic: to determine which inferential steps are needed to prove or disprove a given proposition based on a set of premises. This is the domain of strategic rules. The rules of inference specify which steps are allowed but they remain silent on which steps need to be taken to reach a certain conclusion. The difference between definitory and strategic rules is found not only in logic but in various games as well. In chess, for example, the definitory rules specify that bishops may only move diagonally while strategic rules describe how the allowed moves may be used to win a game, e.g. by controlling the center or by protecting one's king. Following definitory rules determines whether one plays chess or something else while following strategic rules determines whether one is a good or a bad chess player. Both definitory and strategic rules are to be distinguished from empirical descriptive rules, which generalize how people actually draw inferences, whether correct or incorrect. In this sense, definitory rules are permissive and strategic rules are prescriptive while empirical generalizations are descriptive. Violating the definitory rules of logic results in committing fallacies. It has been argued that the almost exclusive focus of logicians on the definitory rules of logic is not justified. On this view, more emphasis should be given to strategic rules instead, since many applications of logic, like the problem of rational belief change, depend more on strategic rules than on definitory rules.
Metaphysics of logic
The philosophy of logic is in many ways closely related to the philosophy of mathematics, especially in relation to their metaphysical aspects. The metaphysics of logic is concerned with the metaphysical status of its objects and the laws governing them. The theories within the metaphysics of logic can roughly be divided into realist and non-realist positions.
Logical realists hold that the laws of logic are objective, i.e. independent of humans and their ways of thinking. On this view, the structures found in logic are structures of the world itself. According to a definition proposed by Sandra LaPointe, logical realism consists of two theses: that there are logical facts and that they are independent of our cognitive and linguistic make-up and practices. Logical realism is often interpreted from the perspective of Platonism, i.e. that there is an intelligible realm of abstract objects that includes the objects of logic. On this view, logic is not invented but discovered. An important consequence of this position is that there is a clear gap between the facts of logic themselves and our beliefs about these facts. One difficulty of this position consists in clarifying which sense of independence is meant when saying that logic is independent of humans. If it is understood in the strictest sense possible, no knowledge of it would be possible since a fully independent reality could play no part in human consciousness. Another problem is to explain the relation between the one world and the many different logical systems proposed: if the structures of logic are the structures of the world itself, this would suggest that there is only one true logic and that all other logical systems are either false or incomplete.
Logical realism is rejected by anti-realists, who hold that logic does not describe an objective feature of reality. Anti-realism about logic often takes the form of conceptualism or psychologism, in which the objects of logic consist in mental conceptions or the logical laws are identified with psychological laws. This can include the thesis that the laws of logic are not knowable a priori, as is often held, but that they are discovered through the methods of experimental inquiry. An argument for psychologism is based on the idea that logic is a sub-discipline of psychology: it studies not all laws of thought, but only the subset of laws corresponding to valid reasoning. Another argument focuses on the thesis that we learn about logical truths through the feeling of self-evidence, which is in turn studied by psychology. Various objections to psychologism have been raised, especially in German philosophy around the turn of the 20th century in the so-called "Psychologismus-Streit". One objection focuses on the thesis that the laws of logic are known a priori, which is not true for the empirical laws studied by psychology. Another points out that psychological laws are usually vague, whereas logic is an exact science with clear laws.
Conventionalism is another form of anti-realism, in which the logical truths depend on the meanings of the terms used, which in turn depend on linguistic conventions adopted by a group of agents. One problem for this position consists in providing a clear definition of the term "convention". Conventions are widely observed regularities. But not every widely observed regularity is a convention: conventions include a certain normative factor that distinguishes right from wrong behavior, whereas irregular behavior is not automatically wrong. Another problem concerns the fact that conventions are contingent, while logical truths are necessary. This casts doubt on the possibility of defining logical truth in terms of convention unless a plausible explanation could be given how contingent conventions can ground necessary truths.
Relation to other disciplines
Ontology
A central issue in ontology is the problem of existence, i.e. whether an entity or a certain kind of entity exists. According to some theorists, the main goal of ontology is just to determine what exists and what does not exist. The issue of existence is closely related to singular terms, like names, and existential quantifiers: it is often held that these devices carry existential presuppositions or ontological commitments with them. On this view, sentences that speak of apples or of Pegasus involve ontological commitments to the existence of apples and of Pegasus, respectively. The most famous defender of this approach is Willard Van Orman Quine, who argues that the ontological commitments of any theory can be determined by translating it into first-order logic and reading them off from the existential quantifiers used in this translation.
One problem with this approach is that it can lead to various controversial ontological commitments. Mathematics, for example, quantifies over numbers in sentences such as "there are prime numbers between 1000 and 1010". This would mean that the ontological commitment to the existence of numbers, i.e. realism about numbers, is already built into mathematics. Another problem is due to the fact that natural language contains many names for imaginary entities, such as Pegasus or Santa Claus. But if names come with existential commitments, then sentences like "Santa Claus does not exist" would be contradictory. Within ontology, these problems are sometimes approached through Platonism or psychologism by holding that the problematic entities do exist, but only in the form of abstract or mental objects while lacking concrete or material existence. Within logic, these problems can be avoided by using certain forms of non-classical logic. Free logic, for example, allows empty singular terms, which do not denote any object in the domain and therefore carry no ontological commitments. This is often combined with an existence-predicate, which can be used to specify whether a singular term denotes an object in the domain. But talk of existence as a predicate is controversial. Opponents of this approach often point out that existence is required for an object to have any predicates at all and can therefore not be one of them.
The issue of existence brings with it its own problems in the case of higher-order logics. Second-order logic, for example, includes existential quantification not just for singular terms but also for predicates. This is often understood as entailing ontological commitments not just to regular objects but also to the properties and relations instantiated by these objects. This position is known as realism and is often rejected in contemporary philosophy due to naturalist considerations. It contrasts with nominalism, the view that only individuals exist.
Mathematics
Mathematics and logic are related in various ways. Both are considered formal sciences and in many cases, developments in these two fields happened in parallel. Propositional logic, for example, is an instance of Boolean algebra. It is often claimed that mathematics can, in principle, be grounded in only first-order logic together with set theory. Metamath is one example of such a project. It is based on 20 axioms of propositional logic, first-order predicate logic, and Zermelo–Fraenkel set theory and has already proved a significant number of mathematical theorems based on these axioms. Closely related to this project is logicism: the thesis defended by Gottfried Wilhelm Leibniz and Gottlob Frege that arithmetic is reducible to logic alone. This would mean that any statement in arithmetic, like "2 + 2 = 4", can be expressed in purely logical terms, i.e. without using numbers or arithmetic operators like addition. In this case, all the theorems of arithmetic would be derivable from the axioms of logic. Whether this thesis is correct depends on how the term "logic" is understood. If "logic" only refers to the axioms of first-order predicate logic, it is false. But if one includes set theory in it or higher-order logic, then arithmetic is reducible to logic.
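As a sketch of the logicist idea, numerically quantified statements can be paraphrased without numerals in the Frege–Russell style; for instance, "there are exactly two Fs" becomes:

```latex
\exists x\, \exists y\, \bigl( Fx \land Fy \land x \neq y \land
  \forall z\, ( Fz \rightarrow (z = x \lor z = y) ) \bigr)
```

Logicism proper requires much more than such paraphrases, but they illustrate how numerical content can be carried by quantifiers and identity alone, without numbers or arithmetic operators.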
Computer science
An important relation between logic and computer science arises from the parallels between propositional connectives of propositional logic and logic gates in computer science: they both follow the laws of Boolean algebra. Propositions are either false or true while the inputs and outputs of logic gates are termed 0 and 1. Both use truth tables to illustrate the functioning of propositional connectives and logic gates. Another important relation to logic consists in the development of logic software that can assist logicians in formulating proofs or even automate the process. Prover9 is an example of an automated theorem prover for first-order logic.
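The parallel between connectives and gates can be shown directly: both obey the same Boolean laws, and only the notation differs (True/False for propositions, 1/0 for gates). A small illustrative sketch:

```python
from itertools import product

# Gate-style Boolean operations over 0 and 1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Truth table of conjunction, written gate-style with 0 and 1.
table = [(a, b, AND(a, b)) for a, b in product([0, 1], repeat=2)]
print(table)  # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

# De Morgan's law holds for the gates, just as for the connectives:
# not (A and B) is equivalent to (not A) or (not B).
assert all(NOT(AND(a, b)) == OR(NOT(a), NOT(b))
           for a, b in product([0, 1], repeat=2))
```

Read with True for 1 and False for 0, the printed table is exactly the truth table of the propositional connective "and".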
Psychology
A very close connection between psychology and logic can be drawn if logic is seen as the science of the laws of thought. One important difference between psychology and logic in the light of this characterization is that psychology is an empirical science that aims to study how humans actually think. Logic, on the other hand, has the objective of discovering the laws of correct reasoning, independently of whether actual human thinking often falls short of this ideal. The psychologist Jean Piaget applied logic to psychology by using it to identify different stages of human psychological development. On his view, the ability to reason logically only arises at a certain stage in the child's development and can be used as a criterion to distinguish it from earlier stages.
See also
Important theorists
Philosophical theories of logic
Others
References
Further reading
Fisher, Jennifer, On the Philosophy of Logic, Thomson Wadsworth, 2008.
Goble, Lou, ed., 2001. The Blackwell Guide to Philosophical Logic. Oxford: Blackwell.
Grayling, A. C., 1997. An Introduction to Philosophical Logic. 3rd ed. Oxford: Blackwell.
Haack, Susan. 1978. Philosophy of Logics. Cambridge University Press.
Jacquette, Dale, ed., 2002. A Companion to Philosophical Logic. Oxford: Blackwell.
McGinn, Colin, 2000. Logical Properties: Identity, Existence, Predication, Necessity, Truth. Oxford: Oxford University Press.
Quine, W. V. O. 2004. Philosophy of Logic. 2nd ed. Harvard University Press.
Sainsbury, Mark, 2001. Logical Forms: An Introduction to Philosophical Logic. 2nd ed. Oxford: Blackwell.
Tarski, Alfred. 1983. "The concept of truth in formalized languages", pp. 152–278, in Logic, Semantics, Metamathematics: Papers from 1923 to 1938, ed. John Corcoran. Indianapolis: Hackett.
Wolfram, Sybil, 1989. Philosophical Logic: An Introduction. London: Routledge. 290 pages.
Journal of Philosophical Logic, Springer SBM
Perspectivism

Perspectivism (also called perspectivalism) is the epistemological principle that perception of and knowledge of something are always bound to the interpretive perspectives of those observing it. While perspectivism does not regard all perspectives and interpretations as being of equal truth or value, it holds that no one has access to an absolute view of the world cut off from perspective. Instead, all such viewing occurs from some point of view, which in turn affects how things are perceived. Rather than attempt to determine truth by correspondence to things outside any perspective, perspectivism thus generally seeks to determine truth by comparing and evaluating perspectives among themselves. Perspectivism may be regarded as an early form of epistemological pluralism, though in some accounts it also includes treatment of value theory, moral psychology, and realist metaphysics.
Early forms of perspectivism have been identified in the philosophies of Protagoras, Michel de Montaigne, and Gottfried Leibniz. However, its first major statement is considered to be Friedrich Nietzsche's development of the concept in the 19th century, influenced by Gustav Teichmüller's use of the term some years prior. For Nietzsche, perspectivism takes the form of a realist antimetaphysics while rejecting both the correspondence theory of truth and the notion that the truth-value of a belief always constitutes its ultimate worth-value. The perspectival conception of objectivity used by Nietzsche sees the deficiencies of each perspective as remediable by an asymptotic study of the differences between them. This stands in contrast to Platonic notions in which objective truth is seen to reside in a wholly non-perspectival domain. Despite this, perspectivism is often misinterpreted as a form of relativism or as a rejection of objectivity entirely. Though often mistakenly taken to imply that no way of seeing the world can be regarded as definitively true, perspectivism can instead be interpreted as holding certain interpretations (such as that of perspectivism itself) to be definitively true.
During the 21st century, perspectivism has led to a number of developments within analytic philosophy and philosophy of science, particularly under the early influence of Ronald Giere, Jay Rosenberg, Ernest Sosa, and others. This contemporary form of perspectivism, also known as scientific perspectivism, is more narrowly focused than prior forms—centering on the perspectival limitations of scientific models, theories, observations, and focused interest, while remaining more compatible with, for example, Kantian philosophy and correspondence theories of truth. Furthermore, scientific perspectivism has come to address a number of scientific fields such as physics, biology, cognitive neuroscience, and medicine, as well as interdisciplinarity and philosophy of time. Studies of perspectivism have also been introduced into contemporary anthropology, initially through the influence of Eduardo Viveiros de Castro and his research into indigenous cultures of South America.
The basic principle that things are perceived differently from different perspectives (or that perspective determines one's limited and unprivileged access to knowledge) has sometimes been accounted a rudimentary, uncontentious form of perspectivism. The basic practice of comparing contradictory perspectives to one another may also be considered one such form of perspectivism, as may the entire philosophical problem of how true knowledge is to penetrate one's perspectival limitations.
Precursors and early developments
In Western languages, scholars have found perspectivism in the philosophies of Heraclitus, Protagoras, Michel de Montaigne (1533 – 1592 CE), and Gottfried Leibniz (1646 – 1716 CE). The origins of perspectivism have also been found to lie within Renaissance developments in the philosophy of art and its artistic notion of perspective. In Asian languages, scholars have found perspectivism in Buddhist, Jain, and Daoist texts. Anthropologists have found a kind of perspectivism in the thinking of some indigenous peoples. Some theologians believe John Calvin interpreted various scriptures in a perspectivist manner.
Ancient Greek philosophy
The Western origins of perspectivism can be found in the pre-Socratic philosophies of Heraclitus and Protagoras. In fact, a major cornerstone of Plato's philosophy is his rejection and opposition to perspectivism—this forming a principal element of his aesthetics, ethics, epistemology, and theology. The antiperspectivism of Plato made him a central target of critique for later perspectival philosophers such as Nietzsche.
Montaigne
Montaigne's philosophy presents perspectivism less as a doctrinaire position than as a core philosophical approach put into practice. Inasmuch as no one can occupy a God's-eye view, Montaigne holds that no one has access to a view which is totally unbiased, a view which does not interpret the world according to its own perspective. It is instead only underlying psychological biases which make one's own perspective appear unbiased. In a passage from his "Of Cannibals", he writes:
Nietzsche
In his works, Nietzsche makes a number of statements on perspective which at times contrast with each other throughout the development of his philosophy. Nietzsche's perspectivism begins by challenging the underlying notions of 'viewing from nowhere', 'viewing from everywhere', and 'viewing without interpreting' as being absurdities. Instead, all viewing is attached to some perspective, and all viewers are limited in some sense to the perspectives at their command. In The Genealogy of Morals he writes:
In this, Nietzsche takes a contextualist approach which rejects any God's-eye view of the world. This has been further linked to his notion of the death of God and the dangers of a resulting relativism. However, Nietzsche's perspectivism itself stands in sharp contrast to any such relativism. In outlining his perspectivism, Nietzsche rejects those who claim everything to be subjective, by disassembling the notion of the subject as itself a mere invention and interpretation. He further states that, since the two are mutually dependent on each other, the collapse of the God's-eye view causes also the notion of the thing-in-itself to fall apart with it. Nietzsche views this collapse to reveal, through his genealogical project, that all that has been considered non-perspectival knowledge, the entire tradition of Western metaphysics, has itself been only a perspective. His perspectivism and genealogical project are further integrated into each other in addressing the psychological drives that underlie various philosophical programs and perspectives, as a form of critique. Here, contemporary scholar Ken Gemes views Nietzsche's perspectivism to above all be a principle of moral psychology, rejecting interpretations of it as an epistemological thesis outrightly. It is through this method of critique that the deficiencies of various perspectives can be alleviated—through a critical mediation of the differences between them rather than any appeals to the non-perspectival. In a posthumously published aphorism from The Will to Power, Nietzsche writes:
While Nietzsche does not plainly reject truth and objectivity, he does reject notions of truth, facts, and objectivity conceived as wholly non-perspectival.
Truth theory and the value of truth
Despite receiving much attention within contemporary philosophy, there is no academic consensus on Nietzsche's conception of truth. While his perspectivism presents a number of challenges regarding the nature of truth, its more controversial element lies in its questioning of the value of truth. Contemporary scholars Steven D. Hales and Robert C. Welshon write that:
Later developments
In the 20th century, perspectivism was discussed separately by José Ortega y Gasset and Karl Jaspers.
Ortega
Ortega's perspectivism replaced his previous position that "man is completely social". His reversal is prominent in his work Verdad y perspectiva ("Truth and perspective"), where he explained that "each man has a mission of truth" and that what he sees of reality no other eye sees. He explained:

From different positions two people see the same surroundings. However, they do not see the same thing. Their different positions mean that the surroundings are organized in a different way: what is in the foreground for one may be in the background for another. Furthermore, as things are hidden one behind another, each person will see something that the other may not.

Ortega also maintained that perspective is perfected by the multiplication of its viewpoints. He noted that war transpires due to the lack of perspective and failure to see the larger contexts of the actions among nations. Ortega also cited the importance of phenomenology in perspectivism as he argued against speculation and for the importance of concrete evidence in understanding truth and reality. In this discourse, he highlighted the role of "circumstance" in finding out the truth since it allows us to understand realities beyond ourselves.
Types of perspectivism
Contemporary types of perspectivism include:
Individualist perspectivism
Collectivist perspectivism
Transcendental perspectivism
Theological perspectivism
See also
Anekantavada, a fundamental doctrine of Jainism setting forth a pluralistic metaphysics, traceable to Mahavira (599–527 BCE)
Blind men and an elephant
Conceptual framework
Consilience, the unity of knowledge
Constructivist epistemology
Eclecticism
Fallibilism
Fusion of horizons
Integral theory (disambiguation)
Intersubjectivity
Metaphilosophy
Model-dependent realism
Moral nihilism
Moral skepticism
Multiperspectivalism, a current in Calvinist epistemology
Philosophy of Friedrich Nietzsche
Point of view (philosophy)
Rhizome (philosophy)
Standpoint theory
Value pluralism
References
Socratic method

The Socratic method (also known as method of Elenchus or Socratic debate) is a form of argumentative dialogue between individuals, based on asking and answering questions.
In Plato's dialogue "Theaetetus", Socrates describes his method as a form of "midwifery" because it is employed to help his interlocutors develop their understanding in a way analogous to a child developing in the womb. The Socratic method begins with commonly held beliefs and scrutinizes them by way of questioning to determine their internal consistency and their coherence with other beliefs and so to bring everyone closer to the truth.
In modified forms, it is employed today in a variety of pedagogical contexts.
Development
In the second half of the 5th century BC, sophists were teachers who specialized in using the tools of philosophy and rhetoric to entertain, impress, or persuade an audience to accept the speaker's point of view. Socrates promoted an alternative method of teaching, which came to be called the Socratic method.
Socrates began to engage in such discussions with his fellow Athenians after his friend from youth, Chaerephon, visited the Oracle of Delphi, which asserted that no man in Greece was wiser than Socrates. Socrates saw this as a paradox, and began using the Socratic method to answer his conundrum. Diogenes Laërtius, however, wrote that Protagoras invented the "Socratic" method.
Plato famously formalized the Socratic elenctic style in prose—presenting Socrates as the curious questioner of some prominent Athenian interlocutor—in some of his early dialogues, such as Euthyphro and Ion, and the method is most commonly found within the so-called "Socratic dialogues", which generally portray Socrates engaging in the method and questioning his fellow citizens about moral and epistemological issues. But in his later dialogues, such as Theaetetus or Sophist, Plato employed a different method in philosophical discussions, namely dialectic.
Method
Elenchus is the central technique of the Socratic method. The Latin form elenchus (plural elenchi) is used in English as the technical philosophical term. The most common adjectival form in English is elenctic, though variant spellings are also current. The elenchus was also very important in Plato's early dialogues.
Socrates (as depicted by Plato) generally applied his method of examination to concepts such as the virtues of piety, wisdom, temperance, courage, and justice. Such an examination challenged the implicit moral beliefs of the interlocutors, bringing out inadequacies and inconsistencies in their beliefs, and usually resulting in aporia. In view of such inadequacies, Socrates himself professed ignorance. Socrates said that his awareness of his ignorance made him wiser than those who, though ignorant, still claimed knowledge. This claim was based on a reported Delphic oracular pronouncement that no man was wiser than Socrates. While this belief seems paradoxical at first glance, in fact it allowed Socrates to discover his own errors.
Socrates used this claim of wisdom as the basis of moral exhortation. He claimed that the chief goodness consists in the caring of the soul concerned with moral truth and moral understanding, that "wealth does not bring goodness, but goodness brings wealth and every other blessing, both to the individual and to the state", and that "life without examination [dialogue] is not worth living".
Socrates rarely used the method to actually develop consistent theories, and he even made frequent use of creative myths and allegories. The Parmenides dialogue shows Parmenides using the Socratic method to point out the flaws in the Platonic theory of forms, as presented by Socrates; it is not the only dialogue in which theories normally expounded by Plato's Socrates are broken down through dialectic. Instead of arriving at answers, the method breaks down the theories we hold, to go "beyond" the axioms and postulates we take for granted. Therefore, myth and the Socratic method are not meant by Plato to be incompatible; they have different purposes, and are often described as the "left hand" and "right hand" paths to good and wisdom.
Scholarly debate
In Plato's early dialogues, the elenchus is the technique Socrates uses to investigate, for example, the nature or definition of ethical concepts such as justice or virtue. According to Gregory Vlastos, it has the following steps:
Socrates' interlocutor asserts a thesis, for example "Courage is endurance of the soul".
Socrates decides whether the thesis is false and targets it for refutation.
Socrates secures his interlocutor's agreement to further premises, for example "Courage is a fine thing" and "Ignorant endurance is not a fine thing".
Socrates then argues, and the interlocutor agrees, these further premises imply the contrary of the original thesis; in this case, it leads to: "courage is not endurance of the soul".
Socrates then claims he has shown his interlocutor's thesis is false and its negation is true.
One elenctic examination can lead to a new, more refined, examination of the concept being considered; in this case it invites an examination of the claim: "Courage is wise endurance of the soul". Most Socratic inquiries consist of a series of elenchi and typically end in the puzzlement known as aporia.
Michael Frede points out Vlastos' conclusion in step No. 5 above makes nonsense of the aporetic nature of the early dialogues. Having shown a proposed thesis is false is insufficient to conclude some other competing thesis must be true. Rather, the interlocutors have reached aporia, an improved state of still not knowing what to say about the subject under discussion.
The exact nature of the elenchus is subject to a great deal of debate, in particular concerning whether it is a positive method, leading to knowledge, or a negative method used solely to refute false claims to knowledge. Some qualitative research shows that the use of the Socratic method within a traditional Yeshiva education setting helps students succeed in law school, although it remains an open question as to whether that relationship is causal or merely correlative.
Yet, W. K. C. Guthrie in The Greek Philosophers sees it as an error to regard the Socratic method as a means by which one seeks the answer to a problem, or knowledge. Guthrie claims that the Socratic method actually aims to demonstrate one's ignorance. Socrates, unlike the Sophists, did believe that knowledge was possible, but believed that the first step to knowledge was recognition of one's ignorance. Guthrie writes, "[Socrates] was accustomed to say that he did not himself know anything, and that the only way in which he was wiser than other men was that he was conscious of his own ignorance, while they were not. The essence of the Socratic method is to convince the interlocutor that whereas he thought he knew something, in fact he does not."
Modern applications
Socratic seminar
A Socratic seminar (also known as a Socratic circle) is a pedagogical approach based on the Socratic method and uses a dialogic approach to understand information in a text. Its systematic procedure is used to examine a text through questions and answers founded on the beliefs that all new knowledge is connected to prior knowledge, that all thinking comes from asking questions, and that asking one question should lead to asking further questions. A Socratic seminar is not a debate. The goal of this activity is to have participants work together to construct meaning and arrive at an answer, not for one student or one group to "win the argument".
This approach is based on the belief that participants seek and gain deeper understanding of concepts in the text through thoughtful dialogue rather than memorizing information that has been provided for them. While Socratic seminars can differ in structure, and even in name, they typically involve a passage of text that students must read beforehand, which serves as the basis for facilitated dialogue. Sometimes, a facilitator will structure two concentric circles of students: an outer circle and an inner circle. The inner circle focuses on exploring and analysing the text through the act of questioning and answering. During this phase, the outer circle remains silent. Students in the outer circle are much like scientific observers watching and listening to the conversation of the inner circle. When the text has been fully discussed and the inner circle is finished talking, the outer circle provides feedback on the dialogue that took place. This process alternates with the inner circle students going to the outer circle for the next meeting and vice versa. The length of this process varies depending on the text used for the discussion. The teacher may decide to alternate groups within one meeting, or they may alternate at each separate meeting.
The most significant difference between this activity and most typical classroom activities involves the role of the teacher. In Socratic seminar, the students lead the discussion and questioning. The teacher's role is to ensure the discussion advances regardless of the particular direction the discussion takes.
Various approaches to Socratic seminar
Teachers use Socratic seminar in different ways. The structure it takes may look different in each classroom. While this is not an exhaustive list, teachers may use one of the following structures to administer Socratic seminar:
Inner/outer circle or fishbowl: Students need to be arranged in inner and outer circles. The inner circle engages in discussion about the text. The outer circle observes the inner circle, while taking notes. The outer circle shares their observations and questions the inner circle with guidance from the teacher/facilitator. Students use constructive criticism as opposed to making judgements. The students on the outside keep track of topics they would like to discuss as part of the debrief. Participants in the outer circle can use an observation checklist or notes form to monitor the participants in the inner circle. These tools will provide structure for listening and give the outside members specific details to discuss later in the seminar. The teacher may also sit in the circle but at the same height as the students.
Triad: Students are arranged so that each participant (called a "pilot") in the inner circle has two "co-pilots" sitting behind them on either side. Pilots are the speakers because they are in the inner circle; co-pilots are in the outer circle and only speak during consultation. The seminar proceeds as any other seminar. At a point in the seminar, the facilitator pauses the discussion and instructs the triads to talk to each other. Conversation will be about topics that need more in-depth discussion or a question posed by the leader. Sometimes triads will be asked by the facilitator to come up with a new question. At any time during a triad conversation, group members can switch seats and one of the co-pilots can sit in the pilot's seat; only during that time is the switching of seats allowed. This structure gives students who may not yet have the confidence to speak in the large group a chance to contribute. This type of seminar involves all students, rather than just the students in the inner circle.
Simultaneous seminars: Students are arranged in multiple small groups and placed as far as possible from each other. Following the guidelines of the Socratic seminar, students engage in small group discussions. Simultaneous seminars are typically done with experienced students who need little guidance and can engage in a discussion without assistance from a teacher/facilitator. According to the literature, this type of seminar is beneficial for teachers who want students to explore a variety of texts around a main issue or topic. Each small group may have a different text to read/view and discuss. A larger Socratic seminar can then occur as a discussion about how each text corresponds with one another. Simultaneous seminars can also be used for a particularly difficult text. Students can work through different issues and key passages from the text.
No matter what structure the teacher employs, the basic premise of the seminar/circles is to turn partial control and direction of the classroom over to the students. The seminars encourage students to work together, creating meaning from the text and to stay away from trying to find a correct interpretation. The emphasis is on critical and creative thinking.
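The alternation between inner and outer circles lends itself to a simple rotation schedule. As an illustrative sketch only (the function name and student roster below are invented for this example, not taken from the source), one might model which students discuss and which observe at each meeting:

```python
# Hypothetical model of the fishbowl rotation: the class is split in half,
# and the halves swap between the inner (discussion) and outer (observer)
# circles at each successive meeting, so every student alternates roles.

def split_circles(students, meeting):
    """Return (inner, outer) circles for a given meeting number.

    On even-numbered meetings the first half discusses; on odd-numbered
    meetings the halves swap.
    """
    half = len(students) // 2
    first, second = students[:half], students[half:]
    return (first, second) if meeting % 2 == 0 else (second, first)

students = ["Ana", "Ben", "Cho", "Dev"]
inner, outer = split_circles(students, 0)   # Ana and Ben discuss first
inner, outer = split_circles(students, 1)   # roles swap at the next meeting
```

A teacher alternating groups within a single meeting could simply call the same schedule with an incremented meeting counter at the midpoint of the session.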
Text selection
Socratic seminar texts
A Socratic seminar text is a tangible document that serves as the basis for a thought-provoking discussion.
The text ought to be appropriate for the participants' current level of intellectual and social development. It provides the anchor for dialogue whereby the facilitator can bring the participants back to the text if they begin to digress. Furthermore, the seminar text enables the participants to create a level playing field – ensuring that the dialogical tone within the classroom remains consistent and pure to the subject or topic at hand. Some practitioners argue that "texts" do not have to be confined to printed texts, but can include artifacts such as objects, physical spaces, and the like.
Pertinent elements of an effective Socratic text
Socratic seminar texts are able to challenge participants' thinking skills by having these characteristics:
Ideas and values: The text must introduce ideas and values that are complex and difficult to summarize. Powerful discussions arise from personal connections to abstract ideas and from implications to personal values.
Complexity and challenge: The text must be rich in ideas and complexity and open to interpretation. Ideally it should require multiple readings, but should be neither far above the participants' intellectual level nor very long.
Relevance to participants' curriculum: An effective text has identifiable themes that are recognizable and pertinent to the lives of the participants. Themes in the text should relate to the curriculum.
Ambiguity: The text must be approachable from a variety of different perspectives, including perspectives that seem mutually exclusive, thus provoking critical thinking and raising important questions. The absence of right and wrong answers promotes a variety of discussion and encourages individual contributions.
Two different ways to select a text
Socratic texts can be divided into two main categories:
Print texts (e.g., short stories, poems, and essays) and non-print texts (e.g. photographs, sculptures, and maps); and
Subject area, which can draw from print or non-print artifacts. As examples, language arts can be approached through poems, history through written or oral historical speeches, science through policies on environmental issues, math through mathematical proofs, health through nutrition labels, and physical education through fitness guidelines.
Questioning methods
Socratic seminars are based upon the interaction of peers. The focus is to explore multiple perspectives on a given issue or topic. Socratic questioning is used to help students apply the activity to their learning. The pedagogy of Socratic questions is open-ended, focusing on broad, general ideas rather than specific, factual information. The questioning technique emphasizes a level of questioning and thinking where there is no single right answer.
Socratic seminars generally start with an open-ended question proposed either by the leader or by another participant. There is no designated first speaker; as individuals participate in Socratic dialogue, they gain experience that enables them to be effective in this role of initial questioner.
The leader keeps the topic focused by asking a variety of questions about the text itself, as well as questions to help clarify positions when arguments become confused. The leader also seeks to coax reluctant participants into the discussion, and to limit contributions from those who tend to dominate. She or he prompts participants to elaborate on their responses and to build on what others have said. The leader guides participants to deepen, clarify, and paraphrase, and to synthesize a variety of different views.
The participants share the responsibility with the leader to maintain the quality of the Socratic circle. They listen actively to respond effectively to what others have contributed. This teaches the participants to think and speak persuasively using the discussion to support their position. Participants must demonstrate respect for different ideas, thoughts and values, and must not interrupt each other.
Questions can be created individually or in small groups. All participants are given the opportunity to take part in the discussion. Socratic circles specify three types of questions to prepare:
Opening questions generate discussion at the beginning of the seminar in order to elicit dominant themes.
Guiding questions help deepen and elaborate the discussion, keeping contributions on topic and encouraging a positive atmosphere and consideration for others.
Closing questions lead participants to summarize their thoughts and learning and personalize what they've discussed.
Challenges and disadvantages
Scholars such as Peter Boghossian suggest that although the method improves creative and critical thinking, there is a flip side to the method. He states that the teachers who use this method wait for the students to make mistakes, thus creating negative feelings in the class, exposing the student to possible ridicule and humiliation.
Some have countered this criticism by stating that the humiliation and ridicule are caused not by the method but by the student's lack of knowledge. Boghossian notes that even though the questions may be perplexing, they are not originally meant to be; rather, such questions are intended to provoke the students, and can be met by employing counterexamples.
Psychotherapy
The Socratic method, in the form of Socratic questioning, has been adapted for psychotherapy, most prominently in classical Adlerian psychotherapy, logotherapy, rational emotive behavior therapy, cognitive therapy and reality therapy. It can be used to clarify meaning, feeling, and consequences, as well as to gradually unfold insight, or explore alternative actions.
The Socratic method has also recently inspired a new form of applied philosophy: Socratic dialogue, also called philosophical counseling. In Europe Gerd B. Achenbach is probably the best known practitioner, and Michel Weber has also proposed another variant of the practice.
See also
Devil's advocate
Harkness table – a teaching method based on the Socratic method
Marva Collins
Pedagogy
The Paper Chase – 1973 film based on a 1971 novel of the same name, dramatizing the use of the Socratic method in law school classes
Socrates Cafe
Socratic questioning
Socratic irony
External links
Robinson, Richard, Plato's Earlier Dialectic, 2nd edition (Clarendon Press, Oxford, 1953).
Ch. 2: Elenchus;
Ch. 3: Elenchus: Direct and Indirect
Philosopher.org – 'Tips on Starting your own Socrates Cafe', Christopher Phillips, Cecilia Phillips
Socraticmethod.net Socratic Method Research Portal
How to Use the Socratic Method
UChicago.edu – 'The Socratic Method' by Elizabeth Garrett (1998)
Teaching by Asking Instead of by Telling, an example from Rick Garlikov
Project Gutenberg: Works by Plato
Project Gutenberg: Works by Xenophon (includes some Socratic works)
Project Gutenberg: Works by Cicero (includes some works in the "Socratic dialogue" format)
The Socratic Club
Socratic and Scientific Method
Method
Debate types
Education in ancient Greece
Educational psychology
History of education
Dialectic
Inquiry
Philosophical methodology
Group problem solving methods
Rhetoric
Episteme

In philosophy, episteme (Ancient Greek: ἐπιστήμη, epistēmē) is knowledge or understanding. The term epistemology (the branch of philosophy concerning knowledge) is derived from episteme.
History
Plato
Plato, following Xenophanes, contrasts episteme with doxa: common belief or opinion. The term is also distinguished from techne: a craft or applied practice. In the Protagoras, Plato's Socrates notes that nous and episteme are prerequisites for prudence.
Aristotle
Aristotle distinguished between five virtues of thought: techne, episteme, phronesis, sophia, and nous, with techne translating as "craft" or "art" and episteme as "knowledge". A full account of episteme is given in the Posterior Analytics, where Aristotle argues that knowledge of necessary, rather than contingent, truths regarding causation is foundational for episteme. To emphasize the necessity, he uses geometry. Notably, Aristotle uses the notion of cause in a broader sense than contemporary thought. For example, understanding how geometrical axioms lead to a theorem about properties of triangles counts as understanding the cause of the proven property of the right triangle. As a result, episteme is a virtue of thought that deals with what cannot be otherwise, while techne and phronesis deal with what is contingent.
Contemporary interpretations
Michel Foucault
For Foucault, an episteme is the guiding unconsciousness of subjectivity within a given epoch – subjective parameters which form an historical a priori. He uses the term in his The Order of Things in a specialized sense to mean the historical, non-temporal, a priori knowledge that grounds truth and discourses, thus representing the condition of their possibility within a particular epoch. In the book, Foucault describes the episteme:
In any given culture and at any given moment, there is always only one episteme that defines the conditions of possibility of all knowledge, whether expressed in a theory or silently invested in a practice.
In subsequent writings, he makes it clear that several epistemai may co-exist and interact at the same time, being parts of various power-knowledge systems. Foucault attempts to demonstrate the constitutive limits of discourse, and in particular the rules enabling its productivity; however, he maintains that, though ideology may infiltrate and form science, it need not do so: it must be demonstrated how ideology actually forms the science in question; contradictions and lack of objectivity are not an indicator of ideology. Jean Piaget has compared Foucault's use of episteme with Thomas Kuhn's notion of a paradigm.
Concepts in ancient Greek epistemology
Discourse analysis
Knowledge
Michel Foucault
Philosophy of science
Theories in ancient Greek philosophy
Social constructivism

Social constructivism is a sociological theory of knowledge according to which human development is socially situated, and knowledge is constructed through interaction with others. Like social constructionism, social constructivism states that people work together to actively construct artifacts. But while social constructivism focuses on cognition, social constructionism focuses on the making of social reality.
A very simple example is an object like a cup. The object can be used for many things, but its shape does suggest some 'knowledge' about carrying liquids (see also Affordance). A more complex example is an online course—not only do the 'shapes' of the software tools indicate certain things about the way online courses should work, but the activities and texts produced within the group as a whole will help shape how each person behaves within that group. A person's cognitive development will also be influenced by the culture that they are involved in, such as the language, history, and social context. For a philosophical account of one possible social-constructionist ontology, see the 'Criticism' section of Representative realism.
Philosophy
Strong social constructivism as a philosophical approach tends to suggest that "the natural world has a small or non-existent role in the construction of scientific knowledge". According to Maarten Boudry and Filip Buekens, Freudian psychoanalysis is a good example of this approach in action. However, Boudry and Buekens do not claim that 'bona fide' science is completely immune from all socialisation and paradigm shifts, merely that the strong social constructivist claim that all scientific knowledge is constructed ignores the reality of scientific success.
One characteristic of social constructivism is that it rejects the role of superhuman necessity in either the invention/discovery of knowledge or its justification. In the field of invention it looks to contingency as playing an important part in the origin of knowledge, with historical interests and resourcing swaying the direction of mathematical and scientific knowledge growth. In the area of justification while acknowledging the role of logic and reason in testing, it also accepts that the criteria for acceptance vary and change over time. Thus mathematical proofs follow different standards in the present and throughout different periods in the past, as Paul Ernest argues.
Education
Social constructivism has been studied by many educational psychologists, who are concerned with its implications for teaching and learning. Social constructivism extends constructivism by incorporating the role of other actors and culture in development. In this sense it can also be contrasted with social learning theory by stressing interaction over observation. For more on the psychological dimensions of social constructivism, see the work of A. Sullivan Palincsar. Psychological tools are one of the key concepts in Lev Vygotsky's sociocultural perspective.
Studies on increasing the use of student discussion in the classroom both support and are grounded in theories of social constructivism. There is a full range of advantages that results from the implementation of discussion in the classroom. Participating in group discussion allows students to generalize and transfer their knowledge of classroom learning and builds a strong foundation for communicating ideas orally. Many studies argue that discussion plays a vital role in increasing student ability to test their ideas, synthesize the ideas of others, and build deeper understanding of what they are learning. Large and small group discussion also affords students opportunities to exercise self-regulation, self-determination, and a desire to persevere with tasks. Additionally, discussion increases student motivation, collaborative skills, and the ability to problem solve. Increasing students’ opportunity to talk with one another and discuss their ideas increases their ability to support their thinking, develop reasoning skills, and to argue their opinions persuasively and respectfully. Furthermore, the feeling of community and collaboration in classrooms increases through offering more chances for students to talk together.
Studies have found that students are not regularly accustomed to participating in academic discourse. Martin Nystrand argues that teachers rarely choose classroom discussion as an instructional format. The results of Nystrand’s (1996) three-year study focusing on 2400 students in 60 different classrooms indicate that the typical classroom teacher spends under three minutes an hour allowing students to talk about ideas with one another and the teacher. Even within those three minutes of discussion, most talk is not true discussion because it depends upon teacher-directed questions with predetermined answers. Multiple observations indicate that students in low socioeconomic schools and lower track classrooms are allowed even fewer opportunities for discussion. Discussion and interactive discourse promote learning because they afford students the opportunity to use language as a demonstration of their independent thoughts. Discussion elicits sustained responses from students that encourage meaning-making through negotiating with the ideas of others. This type of learning “promotes retention and in-depth processing associated with the cognitive manipulation of information”.
One recent branch of work exploring social constructivist perspectives on learning focuses on the role of social technologies and social media in facilitating the generation of socially constructed knowledge and understanding in online environments.
Academic writing
In a constructivist approach, the focus is on the sociocultural conventions of academic discourse such as citing evidence, hedging and boosting claims, interpreting the literature to back one's own claims, and addressing counter claims. These conventions are inherent to a constructivist approach as they place value on the communicative, interpersonal nature of academic writing with a strong focus on how the reader receives the message. The act of citing others’ work is more than accurate attribution; it is an important exercise in critical thinking in the construction of an authorial self.
See also
Constructivist epistemology
Educational psychology
Experiential learning
Learning theory
Virtual community
Further reading
Books
Dyson, A. H. (2004). Writing and the sea of voices: Oral language in, around, and about writing. In R.B. Ruddell, & N.J. Unrau (Eds.), Theoretical Models and Processes of Reading (pp. 146–162). Newark, DE: International Reading Association.
Paul Ernest (1998), Social Constructivism as a Philosophy of Mathematics, Albany NY: SUNY Press
Fry, H & Kettering, S & Marshall, S (Eds.) (2008). A Handbook for Teaching and Learning in Higher Education. Routledge
Glasersfeld, Ernst von (1995). Radical Constructivism: A Way of Knowing and Learning. London: RoutledgeFalmer.
Grant, Colin B. (2000). Functions and Fictions of Communication. Oxford and Bern: Peter Lang.
Grant, Colin B. (2007). Uncertainty and Communication: New Theoretical Investigations. Basingstoke: Palgrave Macmillan.
Hale, M.S. & City, E.A. (2002). “But how do you do that?”: Decision making for the seminar facilitator. In J. Holden & J.S. Schmit. Inquiry and the literary text: Constructing discussions in the English classroom / Classroom practices in teaching English, volume 32. Urbana, IL: National Council of Teachers of English.
André Kukla (2000), Social Constructivism and the Philosophy of Science, London: Routledge
Nystrand, M. (1996). Opening dialogue: Understanding the dynamics of language and learning in the English classroom. New York: Teachers College Press.
Poerksen, Bernhard (2004), The Certainty of Uncertainty: Dialogues Introducing Constructivism. Exeter: Imprint-Academic.
Schmidt, Siegfried J. (2007). Histories & Discourses: Rewriting Constructivism. Exeter: Imprint-Academic.
Vygotsky, L. (1978). Mind in Society. London: Harvard University Press.
Chapter 6, Social Constructivism in Introduction to International Relations: Theories and Approaches, Robert Jackson and Georg Sørensen, Third Edition, OUP 2006
Papers
Barab, S., Dodge, T. Thomas, M.K., Jackson, C. & Tuzun, H. (2007). Our designs and the social agendas they carry. Journal of the Learning Sciences, 16(2), 263-305.
Boudry, M & Buekens, F (2011) The Epistemic Predicament of a Pseudoscience: Social Constructivism Confronts Freudian Psychoanalysis. Theoria, 77, 159–179
Collins, H. M. (1981) Stages in the Empirical Program of Relativism - Introduction. Social Studies of Science. 11(1) 3-10
Corden, R.E. (2001). Group discussion and the importance of a shared perspective: Learning from collaborative research. Qualitative Research, 1(3), 347-367.
Paul Ernest, Social constructivism as a philosophy of mathematics: Radical constructivism rehabilitated? 1990
Mark McMahon, Social Constructivism and the World Wide Web - A Paradigm for Learning, ASCILITE 1997
Carlson, J. D., Social Constructivism, Moral Reasoning and the Liberal Peace: From Kant to Kohlberg, Paper presented at the annual meeting of The Midwest Political Science Association, Palmer House Hilton, Chicago, Illinois 2005
Glasersfeld, Ernst von, 1981. ‘An attentional model for the conceptual construction of units and number’, Journal for Research in Mathematics Education, 12:2, 83-94.
Glasersfeld, Ernst von, 1989. Cognition, construction of knowledge, and teaching, Synthese, 80, 121-40.
Matsumura, L.C., Slater, S.C., & Crosson, A. (2008). Classroom climate, rigorous instruction and curriculum, and students’ interactions in urban middle schools. The Elementary School Journal, 108(4), 294-312.
McKinley, J. (2015). Critical argument and writer identity: social constructivism as a theoretical framework for EFL academic writing. Critical Inquiry in Language Studies, 12(3), 184-207.
Reznitskaya, A., Anderson, R.C., & Kuo, L. (2007). Teaching and learning argumentation, The Elementary School Journal, 107(5), 449-472.
Ronald Elly Wanda. "The Contributions of Social Constructivism in Political Studies".
Weber, K., Maher, C., Powell, A., & Lee, H.S. (2008). Learning opportunities from group discussions: Warrants become the objects of debate. Educational Studies in Mathematics, 68 (3), 247-261.
Constructivism
Enactive cognition
Social epistemology
Epistemology of science
Human science

Human science (or human sciences in the plural) studies the philosophical, biological, social, justice, and cultural aspects of human life. Human science aims to expand the understanding of the human world through a broad interdisciplinary approach. It encompasses a wide range of fields - including history, philosophy, sociology, psychology, justice studies, evolutionary biology, biochemistry, neurosciences, folkloristics, and anthropology. It is the study and interpretation of the experiences, activities, constructs, and artifacts associated with human beings. The study of human sciences attempts to expand and enlighten the human being's knowledge of its existence, its interrelationship with other species and systems, and the development of artifacts to perpetuate the human expression and thought. It is the study of human phenomena. The study of the human experience is historical and current in nature. It requires the evaluation and interpretation of the historic human experience and the analysis of current human activity to gain an understanding of human phenomena and to project the outlines of human evolution. Human science is an objective, informed critique of human existence and how it relates to reality.

Underlying human science is the relationship between various humanistic modes of inquiry within fields such as history, sociology, folkloristics, anthropology, and economics, and advances in such things as genetics, evolutionary biology, and the social sciences, for the purpose of understanding our lives in a rapidly changing world. Its use of an empirical methodology that encompasses psychological experience contrasts with the purely positivistic approach typical of the natural sciences, which excludes all methods not based solely on sensory observation. Modern approaches in the human sciences integrate an understanding of human structure, function, and adaptation with a broader exploration of what it means to be human.
The term is also used to distinguish not only the content of a field of study from that of the natural science, but also its methodology.
Meaning of 'science'
Ambiguity and confusion regarding the usage of the terms 'science', 'empirical science', and 'scientific method' have complicated the usage of the term 'human science' with respect to human activities. The term 'science' is derived from the Latin scientia, meaning 'knowledge'. 'Science' may be appropriately used to refer to any branch of knowledge or study dealing with a body of facts or truths systematically arranged to show the operation of general laws.
However, according to positivists, the only authentic knowledge is scientific knowledge, which comes from the positive affirmation of theories through strict scientific method, or from mathematics. As a result of the positivist influence, the term science is frequently employed as a synonym for empirical science. Empirical science is knowledge based on the scientific method, a systematic approach to verification of knowledge first developed for dealing with natural physical phenomena and emphasizing the importance of experience based on sensory observation. However, even with regard to the natural sciences, significant differences exist among scientists and philosophers of science with regard to what constitutes valid scientific method—for example, evolutionary biology, geology and astronomy, studying events that cannot be repeated, can use the method of historical narratives. More recently, usage of the term has been extended to the study of human social phenomena. Thus, natural and social sciences are commonly classified as science, whereas the study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts are referred to as the humanities. Ambiguity with respect to the meaning of the term science is aggravated by the widespread use of the term formal science with reference to any one of several sciences that is predominantly concerned with abstract form that cannot be validated by physical experience through the senses, such as logic, mathematics, and the theoretical branches of computer science, information theory, and statistics.
History
The phrase 'human science' in English was used during the 17th-century scientific revolution, for example by Theophilus Gale, to draw a distinction between supernatural knowledge (divine science) and study by humans (human science). John Locke also uses 'human science' to mean knowledge produced by people, but without the distinction. By the 20th century, this latter meaning was used at the same time as 'sciences that make human beings the topic of research'.
Early development
The term "moral science" was used by David Hume (1711–1776) in his Enquiry concerning the Principles of Morals to refer to the systematic study of human nature and relationships. Hume wished to establish a "science of human nature" based upon empirical phenomena, and excluding all that does not arise from observation. Rejecting teleological, theological and metaphysical explanations, Hume sought to develop an essentially descriptive methodology; phenomena were to be precisely characterized. He emphasized the necessity of carefully explicating the cognitive content of ideas and vocabulary, relating these to their empirical roots and real-world significance.
A variety of early thinkers in the humanistic sciences took up Hume's direction. Adam Smith, for example, conceived of economics as a moral science in the Humean sense.
Later development
Partly in reaction to the establishment of positivist philosophy and the latter's Comtean intrusions into traditionally humanistic areas such as sociology, non-positivistic researchers in the humanistic sciences began to carefully but emphatically distinguish the methodological approach appropriate to these areas of study, for which the unique and distinguishing characteristics of phenomena are in the forefront (e.g., for the biographer), from that appropriate to the natural sciences, for which the ability to link phenomena into generalized groups is foremost. In this sense, Johann Gustav Droysen contrasted the humanistic sciences' need to comprehend the phenomena under consideration with natural science's need to explain phenomena, while Windelband coined the terms idiographic for a descriptive study of the individual nature of phenomena, and nomothetic for sciences that aim to define generalizing laws.
Wilhelm Dilthey brought nineteenth-century attempts to formulate a methodology appropriate to the humanistic sciences together with Hume's term "moral science", which he translated as Geisteswissenschaft - a term with no exact English equivalent. Dilthey attempted to articulate the entire range of the moral sciences in a comprehensive and systematic way. His conception of Geisteswissenschaften also encompasses the abovementioned study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts. He characterized the scientific nature of a study as depending upon:
The conviction that perception gives access to reality
The self-evident nature of logical reasoning
The principle of sufficient reason
But the specific nature of the Geisteswissenschaften is based on the "inner" experience (Erleben), the "comprehension" (Verstehen) of the meaning of expressions and "understanding" in terms of the relations of the part and the whole – in contrast to the Naturwissenschaften, the "explanation" of phenomena by hypothetical laws in the "natural sciences".
Edmund Husserl, a student of Franz Brentano, articulated his phenomenological philosophy in a way that could be thought of as a basis for Dilthey's attempt. Dilthey appreciated Husserl's Logische Untersuchungen (1900/1901, the first draft of Husserl's Phenomenology) as an "epoch-making" epistemological foundation for his conception of Geisteswissenschaften.
In recent years, 'human science' has been used to refer to "a philosophy and approach to science that seeks to understand human experience in deeply subjective, personal, historical, contextual, cross-cultural, political, and spiritual terms. Human science is the science of qualities rather than of quantities and closes the subject-object split in science. In particular, it addresses the ways in which self-reflection, art, music, poetry, drama, language and imagery reveal the human condition. By being interpretive, reflective, and appreciative, human science re-opens the conversation among science, art, and philosophy."
Objective vs. subjective experiences
Since Auguste Comte, the positivistic social sciences have sought to imitate the approach of the natural sciences by emphasizing the importance of objective external observations and searching for universal laws whose operation is predicated on external initial conditions that do not take into account differences in subjective human perception and attitude. Critics argue that subjective human experience and intention play such a central role in determining human social behavior that an objective approach to the social sciences is too confining. Rejecting the positivist influence, they argue that the scientific method can rightly be applied to subjective, as well as objective, experience. The term subjective is used in this context to refer to inner psychological experience rather than outer sensory experience. It is not used in the sense of being prejudiced by personal motives or beliefs.
Human science in universities
Since 1878, the University of Cambridge has been home to the Moral Sciences Club, with strong ties to analytic philosophy.
The Human Science degree is relatively young. It has been a degree subject at Oxford since 1969. At University College London, it was proposed in 1973 by Professor J. Z. Young and implemented two years later. His aim was to train general science graduates who would be scientifically literate, numerate and easily able to communicate across a wide range of disciplines, replacing the traditional classical training for higher-level government and management careers. Central topics include the evolution of humans, their behavior, molecular and population genetics, population growth and aging, ethnic and cultural diversity, and human interaction with the environment, including conservation, disease, and nutrition. The study of both biological and social disciplines, integrated within a framework of human diversity and sustainability, should enable the human scientist to develop professional competencies suited to address such multidimensional human problems.
In the United Kingdom, Human Science is offered at degree level at several institutions, including:
University of Oxford
University College London (as Human Sciences and as Human Sciences and Evolution)
King's College London (as Anatomy, Developmental & Human Biology)
University of Exeter
Durham University (as Health and Human Sciences)
Cardiff University (as Human and Social Sciences)
In other countries:
Osaka University
Waseda University
Tokiwa University
Senshu University
Aoyama Gakuin University (as College of Community Studies)
Kobe University
Kanagawa University
Bunkyo University
Sophia University
Ghent University (in the narrow sense, as Moral sciences, "an integrated empirical and philosophical study of values, norms and world views")
See also
History of the Human Sciences (journal)
Social science
Humanism
Humanities
References
Bibliography
Flew, A. (1986). David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford
Hume, David, An Enquiry Concerning the Principles of Morals
External links
Institute for Comparative Research in Human and Social Sciences (ICR) – Japan
Human Science Lab – London
Human Science(s) across Global Academies
Empiricism
In philosophy, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricists argue that empiricism is a more reliable method of finding the truth than purely using logical reasoning, because humans have cognitive biases and limitations which lead to errors of judgement. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. Empiricists may argue that traditions (or customs) arise due to relations of previous sensory experiences.
Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through later experience.
Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.
Empiricism, often employed by natural scientists, holds that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification". Empirical research, including experiments and validated measurement tools, guides the scientific method.
Etymology
The English term empirical derives from the Ancient Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which the words experience and experiment are derived.
Background
A central concept in science and the scientific method is that conclusions must be empirically based on the evidence of the senses. Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results to engage in reasoned model building and theoretical inquiry.
Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience. In epistemology (theory of knowledge) empiricism is typically contrasted with rationalism, which holds that knowledge may be derived from reason independently of the senses, and in the philosophy of mind it is often contrasted with innatism, which holds that some knowledge and ideas are already present in the mind at birth. However, many Enlightenment rationalists and empiricists still made concessions to each other. For example, the empiricist John Locke admitted that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental method, held that we also have innate ideas. At the same time, the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method".
History
Early empiricism
Between 600 and 200 BCE, the Vaisheshika school of Hindu philosophy, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra. The Charvaka school held similar beliefs, asserting that perception is the only reliable source of knowledge while inference obtains knowledge with uncertainty.
The earliest Western proto-empiricists were the empiric school of ancient Greek medical practitioners, founded in 330 BCE. Its members rejected the doctrines of the dogmatic school, preferring to rely on the observation of phantasiai (i.e., phenomena, the appearances). The Empiric school was closely allied with the Pyrrhonist school of philosophy, which made the philosophical case for their proto-empiricism.
The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of the mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The notion dates back to Aristotle.
Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses").
This idea was later developed in ancient philosophy by the Stoic school, from about 330 BCE. Stoic epistemology generally emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon."
Islamic Golden Age and Pre-Renaissance (5th to 15th centuries CE)
During the Middle Ages (from the 5th to the 15th century CE) Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi, developing into an elaborate theory by Avicenna (c. 980 – 1037 CE) and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur.
In the 12th century CE, the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebu Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.
A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society.
During the 13th century Thomas Aquinas adopted into scholasticism the Aristotelian position that the senses are essential to the mind. Bonaventure (1221–1274), one of Aquinas' strongest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind.
Renaissance Italy
In the late renaissance various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519) said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings."
Significantly, an empirical metaphysical system was developed by the Italian philosopher Bernardino Telesio which had an enormous impact on the development of later Italian thinkers, including Telesio's students Antonio Persio and Sertorio Quattromani, his contemporaries Thomas Campanella and Giordano Bruno, and later British philosophers such as Francis Bacon, who regarded Telesio as "the first of the moderns". Telesio's influence can also be seen on the French philosophers René Descartes and Pierre Gassendi.
The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (c. 1520 – 1591), father of Galileo and the inventor of monody, used the method to successfully solve musical problems: first, problems of tuning, such as the relationship of pitch to string tension and mass in stringed instruments, and to volume of air in wind instruments; and second, problems of composition, through his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperimento. He was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed., Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. Through his tuning research, Vincenzo found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical intervals, not the actual numbers, as believed). Through this and other discoveries that demonstrated the fallibility of traditional authorities, he developed a radically empirical attitude, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry.
British empiricism
British empiricism, a retrospective characterization, emerged during the 17th century as an approach to early modern philosophy and modern science. Although both integral to this overarching transition, Francis Bacon, in England, first advocated for empiricism in 1620, whereas René Descartes, in France, laid the main groundwork upholding rationalism around 1640. (Bacon's natural philosophy was influenced by Italian philosopher Bernardino Telesio and by Swiss physician Paracelsus.) Contributing later in the 17th century, Thomas Hobbes and Baruch Spinoza are retrospectively identified likewise as an empiricist and a rationalist, respectively. In the Enlightenment of the late 17th century, John Locke in England, and in the 18th century, both George Berkeley in Ireland and David Hume in Scotland, all became leading exponents of empiricism, hence the dominance of empiricism in British philosophy. The distinction between rationalism and empiricism was not formally made until Immanuel Kant, in Germany, around 1780, who sought to merge the two views.
In response to the early-to-mid-17th-century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously attributed with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written.
There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple were structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations. According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes.
A generation later, the Irish Anglican bishop George Berkeley (1685–1753) determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called subjective idealism.
Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by 'proofs' meaning arguments from experience that leave no room for doubt or opposition."
Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations.
Hume maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method—that of the problem of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past.
Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt.
Phenomenalism
Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences.
Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist—hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation".
Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge, including mathematics, a position summarized by D. W. Hamlyn.
Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience. The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered.
In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction.
The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. The translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation.
There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man).
Logical empiricism
Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A. J. Ayer, Rudolf Carnap and Hans Reichenbach.
The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect, language that would be free of the ambiguities and deformations of natural language which, in their view, gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the "analytic" (a priori) and the "synthetic" (a posteriori). On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called "verification principle". Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems.
In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such". The central theses of logical positivism (verificationism, the analytic–synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W. V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists.
Pragmatism
In the late 19th and early 20th century, several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking.
Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view.
Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth".
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (from Latin cos, cotis, "whetstone"), saying that they "put the edge on the maxim of pragmatism". First among these, he listed the Peripatetic-Thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique—in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness"—what the Scholastics called its haecceity—that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception.
Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism—though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support", by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today.
John Dewey (1859–1952) modified James's pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation, in the physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.
References
Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD.
Aristotle, "On the Soul" (De Anima), W. S. Hett (trans.), pp. 1–203 in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, UK, 1936.
Aristotle, Posterior Analytics.
Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari
Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library.
Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory", Sorites, no. 9, pp. 16–31.
Chisholm, R. (1948), "The Problem of Empiricism", Journal of Philosophy 45, 512–17.
Dewey, John (1906), Studies in Logical Theory.
Encyclopædia Britannica, "Empiricism", vol. 4, p. 480.
Hume, D., A Treatise of Human Nature, L.A. Selby-Bigge (ed.), Oxford University Press, London, UK, 1975.
Hume, David. "An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902. Gutenberg press full-text
James, William (1911), The Meaning of Truth.
Keeton, Morris T. (1962), "Empiricism", pp. 89–90 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Leftow, Brian (ed., 2006), Aquinas: Summa Theologiae, Questions on God, pp. vii et seq.
Macmillan Encyclopedia of Philosophy (1969), "Development of Aristotle's Thought", vol. 1, pp. 153ff.
Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297.
Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503.
Macmillan Encyclopedia of Philosophy (1969), "Mathematics, Foundations of", vol. 5, pp. 188–89.
Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, pp. 192ff.
Macmillan Encyclopedia of Philosophy (1969), "Epistemological Discussion", subsections on "A Priori Knowledge" and "Axioms".
Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131.
Macmillan Encyclopedia of Philosophy (1969), "Thomas Aquinas", subsection on "Theory of Knowledge", vol. 8, pp. 106–07.
Marconi, Diego (2004), "Fenomenismo", in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy.
Markie, P. (2004), "Rationalism vs. Empiricism" in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
Maxwell, Nicholas (1998), The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford.
Mill, J.S., "An Examination of Sir William Rowan Hamilton's Philosophy", in A.J. Ayer and Raymond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968.
Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN.
Peirce, C.S., "Lectures on Pragmatism", Cambridge, Massachusetts, March 26 – May 17, 1903. Reprinted in part, Collected Papers, CP 5.14–212. Published in full with editor's introduction and commentary, Patricia Ann Turisi (ed.), Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard "Lectures on Pragmatism", State University of New York Press, Albany, NY, 1997. Reprinted, pp. 133–241, Peirce Edition Project (eds.), The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Indiana University Press, Bloomington, IN, 1998.
Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD.
Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, Massachusetts.
Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, Massachusetts.
Runes, D.D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia.
Solomon, Robert C., and Higgins, Kathleen M. (1996), A Short History of Philosophy, pp. 68–74.
Sorabji, Richard (1972), Aristotle on Memory.
Thornton, Stephen (1987), Berkeley's Theory of Reality, Eprint
Vanzo, Alberto (2014), "From Empirics to Empiricists", Intellectual History Review, 2014, Eprint available here and here.
Ward, Teddy (n.d.), "Empiricism", Eprint.
Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
Morality

Morality is the categorization of intentions, decisions and actions into those that are proper, or right, and those that are improper, or wrong. Morality can be a body of standards or principles derived from a code of conduct from a particular philosophy, religion or culture, or it can derive from a standard that is understood to be universal. Morality may also be specifically synonymous with "goodness", "appropriateness" or "rightness".
Moral philosophy includes meta-ethics, which studies abstract issues such as moral ontology and moral epistemology, and normative ethics, which studies more concrete systems of moral decision-making such as deontological ethics and consequentialism. An example of normative ethical philosophy is the Golden Rule, which states: "One should treat others as one would like others to treat oneself."
Immorality is the active opposition to morality (i.e. opposition to that which is good or right), while amorality is variously defined as an unawareness of, indifference toward, or disbelief in any particular set of moral standards and/or principles.
Ethics
Ethics (also known as moral philosophy) is the branch of philosophy which addresses questions of morality. The word "ethics" is "commonly used interchangeably with 'morality'... and sometimes it is used more narrowly to mean the moral principles of a particular tradition, group, or individual." Likewise, certain types of ethical theories, especially deontological ethics, sometimes distinguish between ethics and morality.
Philosopher Simon Blackburn writes that "Although the morality of people and their ethics amounts to the same thing, there is a usage that restricts morality to systems such as that of Immanuel Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations."
Descriptive and normative
In its descriptive sense, "morality" refers to personal or cultural values, codes of conduct, or social mores of the society in which an individual lives and which that individual accepts. It does not connote objective claims of right or wrong, but only refers to what is considered right or wrong. Descriptive ethics is the branch of philosophy which studies morality in this sense.
In its normative sense, "morality" refers to whatever (if anything) is actually right or wrong, which may be independent of the values or mores held by any particular peoples or cultures. Normative ethics is the branch of philosophy which studies morality in this sense.
Realism and anti-realism
Philosophical theories on the nature and origins of morality (that is, theories of meta-ethics) are broadly divided into two classes:
Moral realism is the class of theories which hold that there are true moral statements that report objective moral facts. For example, while moral realists might concede that forces of social conformity significantly shape individuals' "moral" decisions, they deny that those cultural norms and customs define morally right behavior. This may be the philosophical view propounded by ethical naturalists, but not all moral realists accept that position (e.g. ethical non-naturalists).
Moral anti-realism, on the other hand, holds that moral statements either fail or do not even attempt to report objective moral facts. Instead, they hold that moral sentences are either categorically false claims of objective moral facts (error theory); claims about subjective attitudes rather than objective facts (ethical subjectivism); or else do not attempt to describe the world at all but rather something else, like an expression of an emotion or the issuance of a command (non-cognitivism).
Some forms of non-cognitivism and ethical subjectivism, while considered anti-realist in the robust sense used here, are considered realist in the sense synonymous with moral universalism. For example, universal prescriptivism is a universalist form of non-cognitivism which claims that morality is derived from reasoning about implied imperatives, and divine command theory and ideal observer theory are universalist forms of ethical subjectivism which claim that morality is derived from the edicts of a god or the hypothetical decrees of a perfectly rational being, respectively.
Anthropology
Morality with practical reasoning
Practical reason is necessary, but not sufficient, for moral agency. Real-life issues that need solutions require both rationality and emotion to be sufficiently moral. One uses rationality as a pathway to the ultimate decision, but the environment, and one's emotions towards the environment at the moment, must also be factors for the result to be truly moral, since morality is subject to culture: something can only be morally acceptable if the culture as a whole has accepted it to be true. Both practical reason and relevant emotional factors are therefore acknowledged as significant in determining the morality of a decision.
Tribal and territorial
Celia Green made a distinction between tribal and territorial morality. She characterizes the latter as predominantly negative and proscriptive: it defines a person's territory, including his or her property and dependents, which is not to be damaged or interfered with. Apart from these proscriptions, territorial morality is permissive, allowing the individual whatever behaviour does not interfere with the territory of another. By contrast, tribal morality is prescriptive, imposing the norms of the collective on the individual. These norms will be arbitrary, culturally dependent and 'flexible', whereas territorial morality aims at rules which are universal and absolute, such as Kant's 'categorical imperative' and Geisler's graded absolutism. Green relates the development of territorial morality to the rise of the concept of private property, and the ascendancy of contract over status.
In-group and out-group
Some observers hold that individuals apply distinct sets of moral rules to people depending on their membership of an "in-group" (the individual and those they believe to be of the same group) or an "out-group" (people not entitled to be treated according to the same rules). Some biologists, anthropologists and evolutionary psychologists believe this in-group/out-group discrimination has evolved because it enhances group survival. This belief has been confirmed by simple computational models of evolution. In simulations this discrimination can result in both unexpected cooperation towards the in-group and irrational hostility towards the out-group. Gary R. Johnson and V.S. Falger have argued that nationalism and patriotism are forms of this in-group/out-group boundary. Jonathan Haidt has noted experimental observations indicating that an in-group criterion provides one moral foundation that is substantially used by conservatives, but far less so by liberals.
In-group preference is also helpful at the individual level for the passing on of one's genes. For example, a mother who favors her own children more highly than the children of other people will give greater resources to her children than she will to strangers', thus heightening her children's chances of survival and her own genes' chances of being perpetuated. Due to this, within a population, there is substantial selection pressure exerted toward this kind of self-interest, such that eventually, all parents wind up favoring their own children (the in-group) over other children (the out-group).
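The "simple computational models of evolution" mentioned above can be illustrated with a toy agent-based sketch. All names and parameters here are illustrative assumptions, not taken from any cited study: each agent carries an arbitrary group tag and a flag saying whether it donates only to its own tag, donation is costly to the donor and beneficial to the recipient, and payoff-proportional reproduction lets selection act on that discrimination.

```python
import random

def evolve(pop_size=60, generations=200, benefit=3.0, cost=1.0, seed=1):
    """Toy tag-based donation game (illustrative parameters only).

    Each agent is a (tag, in_group_only) pair. Every generation, each agent
    meets one random partner and donates unless the partner is out-group and
    the agent discriminates. Reproduction is payoff-proportional, with a
    rare mutation of the discrimination flag.
    """
    rng = random.Random(seed)
    pop = [(rng.randrange(2), rng.random() < 0.5) for _ in range(pop_size)]
    for _ in range(generations):
        payoff = [0.0] * pop_size
        for i in range(pop_size):
            j = rng.randrange(pop_size)          # random interaction partner
            if j == i:
                continue
            tag_i, in_group_only = pop[i]
            tag_j, _ = pop[j]
            if (not in_group_only) or tag_i == tag_j:
                payoff[i] -= cost                # donating is costly to the donor...
                payoff[j] += benefit             # ...and beneficial to the recipient
        floor = min(payoff)
        weights = [p - floor + 0.1 for p in payoff]   # keep selection weights positive
        parents = rng.choices(range(pop_size), weights=weights, k=pop_size)
        pop = [(pop[p][0],
                (not pop[p][1]) if rng.random() < 0.01 else pop[p][1])
               for p in parents]
    return sum(flag for _, flag in pop) / pop_size   # share of in-group-only donors

share = evolve()
```

Depending on the parameters chosen, such models can show in-group-directed cooperation spreading through the population even though each donation is individually costly, which is the kind of result the simulation literature above reports.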
Comparing cultures
Peterson and Seligman approach the anthropological view by looking across cultures, geo-cultural areas, and millennia. They conclude that certain virtues have prevailed in all cultures they examined. The major virtues they identified include wisdom / knowledge; courage; humanity; justice; temperance; and transcendence. Each of these includes several subdivisions. For instance, humanity includes love, kindness, and social intelligence.
Still, others theorize that morality is not always absolute, contending that moral issues often differ along cultural lines. A 2014 Pew Research Center study across several nations illuminates significant cultural differences on issues commonly related to morality, including divorce, extramarital affairs, homosexuality, gambling, abortion, alcohol use, contraceptive use, and premarital sex. In each of the 40 countries studied, respondents varied in the percentages holding these issues to be acceptable, unacceptable, or not moral issues at all, and those percentages differed greatly according to the culture in which each moral issue was presented.
Advocates of a theory known as moral relativism subscribe to the notion that moral virtues are right or wrong only within the context of a certain standpoint (e.g., cultural community). In other words, what is morally acceptable in one culture may be taboo in another. They further contend that no moral virtue can objectively be proven right or wrong. Critics of moral relativism point to historical atrocities such as infanticide, slavery, or genocide as counterarguments, noting the difficulty in accepting these actions simply through cultural lenses.
Fons Trompenaars, author of Did the Pedestrian Die?, tested members of different cultures with various moral dilemmas. One of these was whether the driver of a car would have his friend, a passenger riding in the car, lie in order to protect the driver from the consequences of driving too fast and hitting a pedestrian. Trompenaars found that different cultures had quite different expectations, from none to definite.
Anthropologists from Oxford's Institute of Cognitive & Evolutionary Anthropology (part of the School of Anthropology & Museum Ethnography) analysed ethnographic accounts of ethics from 60 societies, comprising over 600,000 words from over 600 sources and discovered what they believe to be seven universal moral rules: help your family, help your group, return favours, be brave, defer to superiors, divide resources fairly, and respect others' property.
Evolution
The development of modern morality is a process closely tied to sociocultural evolution. Some evolutionary biologists, particularly sociobiologists, believe that morality is a product of evolutionary forces acting at an individual level and also at the group level through group selection (although to what degree this actually occurs is a controversial topic in evolutionary theory). Some sociobiologists contend that the set of behaviors that constitute morality evolved largely because they provided possible survival or reproductive benefits (i.e. increased evolutionary success). Humans consequently evolved "pro-social" emotions, such as feelings of empathy or guilt, in response to these moral behaviors.
On this understanding, moralities are sets of self-perpetuating and biologically driven behaviors which encourage human cooperation. Biologists contend that all social animals, from ants to elephants, have modified their behaviors by restraining immediate selfishness in order to improve their evolutionary fitness. Human morality, although sophisticated and complex relative to the moralities of other animals, is essentially a natural phenomenon that evolved to restrict the excessive individualism that could undermine a group's cohesion and thereby reduce the individuals' fitness.
On this view, moral codes are ultimately founded on emotional instincts and intuitions that were selected for in the past because they aided survival and reproduction (inclusive fitness). Examples: the maternal bond is selected for because it improves the survival of offspring; the Westermarck effect, where close proximity during early years reduces mutual sexual attraction, underpins taboos against incest because it decreases the likelihood of genetically risky behaviour such as inbreeding.
The phenomenon of reciprocity in nature is seen by evolutionary biologists as one way to begin to understand human morality. Its function is typically to ensure a reliable supply of essential resources, especially for animals living in a habitat where food quantity or quality fluctuates unpredictably. For example, some vampire bats fail to feed on prey some nights while others manage to consume a surplus. Bats that did eat will then regurgitate part of their blood meal to save a conspecific from starvation. Since these animals live in close-knit groups over many years, an individual can count on other group members to return the favor on nights when it goes hungry (Wilkinson, 1984).
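A minimal sketch can show why this kind of sharing pays off when feeding success fluctuates unpredictably. The model below is an illustrative assumption, not Wilkinson's actual analysis: it ignores the reciprocity bookkeeping itself (who owes whom) and only demonstrates the buffering effect of meal-sharing on hungry nights; all parameter values are made up.

```python
import random

def unfed_bat_nights(sharing, nights=2000, n_bats=12, p_feed=0.9, seed=7):
    """Count bat-nights that end hungry over a simulated period.

    Each night, each bat independently feeds with probability p_feed.
    With `sharing`, every bat that fed can regurgitate part of its meal
    to at most one hungry groupmate, so hungry bats are rescued as long
    as donors are available. Parameters are illustrative only.
    """
    rng = random.Random(seed)
    unfed = 0
    for _ in range(nights):
        fed = [rng.random() < p_feed for _ in range(n_bats)]
        if sharing:
            hungry = [i for i, f in enumerate(fed) if not f]
            donors = [i for i, f in enumerate(fed) if f]
            for h, _d in zip(hungry, donors):   # one donor rescues one bat
                fed[h] = True
        unfed += fed.count(False)
    return unfed

with_sharing = unfed_bat_nights(sharing=True)
without_sharing = unfed_bat_nights(sharing=False)
```

Because hungry bats are usually far outnumbered by fed ones, sharing nearly eliminates hungry nights in this sketch, which is the insurance-like benefit that reciprocal food sharing is thought to provide.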
Marc Bekoff and Jessica Pierce (2009) have argued that morality is a suite of behavioral capacities likely shared by all mammals living in complex social groups (e.g., wolves, coyotes, elephants, dolphins, rats, chimpanzees). They define morality as "a suite of interrelated other-regarding behaviors that cultivate and regulate complex interactions within social groups." This suite of behaviors includes empathy, reciprocity, altruism, cooperation, and a sense of fairness. In related work, it has been convincingly demonstrated that chimpanzees show empathy for each other in a wide variety of contexts. They also possess the ability to engage in deception, and a level of social politics prototypical of our own tendencies for gossip and reputation management.
Christopher Boehm (1982) has hypothesized that the incremental development of moral complexity throughout hominid evolution was due to the increasing need to avoid disputes and injuries in moving to open savanna and developing stone weapons. Other theories are that increasing complexity was simply a correlate of increasing group size and brain size, and in particular the development of theory of mind abilities.
Psychology
In modern moral psychology, morality is sometimes considered to change through personal development. Several psychologists have produced theories on the development of morals, usually going through stages of different morals. Lawrence Kohlberg, Jean Piaget, and Elliot Turiel have cognitive-developmental approaches to moral development; to these theorists morality forms in a series of constructive stages or domains. In the Ethics of care approach established by Carol Gilligan, moral development occurs in the context of caring, mutually responsive relationships which are based on interdependence, particularly in parenting but also in social relationships generally. Social psychologists such as Martin Hoffman and Jonathan Haidt emphasize social and emotional development based on biology, such as empathy. Moral identity theorists, such as William Damon and Mordechai Nisan, see moral commitment as arising from the development of a self-identity that is defined by moral purposes: this moral self-identity leads to a sense of responsibility to pursue such purposes. Of historical interest in psychology are the theories of psychoanalysts such as Sigmund Freud, who believe that moral development is the product of aspects of the super-ego as guilt-shame avoidance. Theories of moral development therefore tend to regard development as positive: the higher stages are taken to be morally superior, though this naturally involves a circular argument, in which the higher stages are better because they are higher, and higher because they are better.
As an alternative to viewing morality as an individual trait, some sociologists as well as social- and discursive psychologists have taken upon themselves to study the in-vivo aspects of morality by examining how persons conduct themselves in social interaction.
A new study analyses the common perception of a decline in morality in societies worldwide and throughout history. Adam M. Mastroianni and Daniel T. Gilbert present a series of studies indicating that the perception of moral decline is an illusion that is easily produced, with implications for misallocation of resources, underuse of social support, and social influence. To begin with, the authors demonstrate that people in at least 60 nations hold the belief that morality is deteriorating continuously, and this conviction has been present for the last 70 years. Subsequently, they indicate that people ascribe this decay to the declining morality of individuals as they age and of succeeding generations. Thirdly, the authors demonstrate that people's evaluations of the morality of their peers have not decreased over time, indicating that the belief in moral decline is an illusion. Lastly, the authors explain a basic psychological mechanism that uses two well-established phenomena (distorted exposure to information and distorted memory of information) to cause the illusion of moral decline. The authors present studies that validate some of the predictions about the circumstances in which the perception of moral decline is attenuated, eliminated, or reversed (e.g., when participants are asked about the morality of people closest to them or people who lived before they were born).
Moral cognition
Moral cognition refers to cognitive processes implicated in moral judgment and decision making, and moral action. It consists of several domain-general cognitive processes, ranging from perception of a morally salient stimulus to reasoning when faced with a moral dilemma. While it is important to mention that there is not a single cognitive faculty dedicated exclusively to moral cognition, characterizing the contributions of domain-general processes to moral behavior is a critical scientific endeavor to understand how morality works and how it can be improved.
Cognitive psychologists and neuroscientists investigate the inputs to these cognitive processes and their interactions, as well as how these contribute to moral behavior by running controlled experiments. In these experiments putatively moral versus nonmoral stimuli are compared to each other, while controlling for other variables such as content or working memory load. Often, the differential neural response to specifically moral statements or scenes, are examined using functional neuroimaging experiments.
Critically, the specific cognitive processes that are involved depend on the prototypical situation that a person encounters. For instance, while situations that require an active decision on a moral dilemma may require active reasoning, an immediate reaction to a shocking moral violation may involve quick, affect-laden processes. Nonetheless, certain cognitive skills, such as the ability to attribute mental states (beliefs, intents, desires, emotions) to oneself and others, are a common feature of a broad range of prototypical situations. In line with this, a meta-analysis found overlapping activity between moral emotion and moral reasoning tasks, suggesting a shared neural network for both tasks. The results of this meta-analysis, however, also demonstrated that the processing of moral input is affected by task demands.
Regarding morality in video games, some scholars believe that, because players appear in video games as actors, they maintain an imaginative distance between their sense of self and the role they play in the game. Therefore, the decision-making and moral behavior of players in a game do not represent the player's own moral dogma.
It has recently been found that moral judgment consists in concurrent evaluations of three different components that align with precepts from three dominant moral theories (virtue ethics, deontology, and consequentialism): the character of a person (Agent-component, A); their actions (Deed-component, D); and the consequences brought about in the situation (Consequences-component, C). This implies that various inputs of the situation a person encounters affect moral cognition.
Jonathan Haidt distinguishes between two types of moral cognition: moral intuition and moral reasoning. Moral intuition involves the fast, automatic, and affective processes that result in an evaluative feeling of good-bad or like-dislike, without awareness of going through any steps. Conversely, moral reasoning does involve conscious mental activity to reach a moral judgment. Moral reasoning is controlled and less affective than moral intuition. When making moral judgments, humans perform moral reasoning to support their initial intuitive feeling. However, there are three ways humans can override their immediate intuitive response. The first way is conscious verbal reasoning (for example, examining costs and benefits). The second way is reframing a situation to see a new perspective or consequence, which triggers a different intuition. Finally, one can talk to other people which illuminates new arguments. In fact, interacting with other people is the cause of most moral change.
Neuroscience
The brain areas that are consistently involved when humans reason about moral issues have been investigated by multiple quantitative large-scale meta-analyses of the brain activity changes reported in the moral neuroscience literature. The neural network underlying moral decisions overlaps with the network pertaining to representing others' intentions (i.e., theory of mind) and the network pertaining to representing others' (vicariously experienced) emotional states (i.e., empathy). This supports the notion that moral reasoning is related to both seeing things from other persons' points of view and to grasping others' feelings. These results provide evidence that the neural network underlying moral decisions is probably domain-global (i.e., there might be no such things as a "moral module" in the human brain) and might be dissociable into cognitive and affective sub-systems.
Cognitive neuroscientist Jean Decety thinks that the ability to recognize and vicariously experience what another individual is undergoing was a key step forward in the evolution of social behavior, and ultimately, morality. The inability to feel empathy is one of the defining characteristics of psychopathy, and this would appear to lend support to Decety's view. Recently, drawing on empirical research in evolutionary theory, developmental psychology, social neuroscience, and psychopathy, Jean Decety argued that empathy and morality are neither systematically opposed to one another, nor inevitably complementary.
Brain areas
An essential, shared component of moral judgment involves the capacity to detect morally salient content within a given social context. Recent research implicated the salience network in this initial detection of moral content. The salience network responds to behaviorally salient events and may be critical to modulate downstream default and frontal control network interactions in the service of complex moral reasoning and decision-making processes.
The explicit making of moral right and wrong judgments coincides with activation in the ventromedial prefrontal cortex (VMPC), a region involved in valuation, while intuitive reactions to situations containing implicit moral issues activates the temporoparietal junction area, a region that plays a key role in understanding intentions and beliefs.
Stimulation of the VMPC by transcranial magnetic stimulation (TMS), or a neurological lesion, has been shown to inhibit the ability of human subjects to take intent into account when forming a moral judgment. According to such investigations, TMS did not disrupt participants' ability to make any moral judgment. On the contrary, moral judgments of intentional harms and non-harms were unaffected by TMS to either the right temporoparietal junction (RTPJ) or the control site; presumably, however, people typically make moral judgments of intentional harms by considering not only the action's harmful outcome but the agent's intentions and beliefs. So why were moral judgments of intentional harms not affected by TMS to the RTPJ? One possibility is that moral judgments typically reflect a weighted function of any morally relevant information that is available at the time. Based on this view, when information concerning the agent's belief is unavailable or degraded, the resulting moral judgment simply reflects a higher weighting of other morally relevant factors (e.g., outcome). Alternatively, following TMS to the RTPJ, moral judgments might be made via an abnormal processing route that does not take belief into account. On either account, when belief information is degraded or unavailable, moral judgments are shifted toward other morally relevant factors (e.g., outcome). For intentional harms and non-harms, however, the outcome suggests the same moral judgment as the intention. Thus, the researchers suggest that TMS to the RTPJ disrupted the processing of negative beliefs for both intentional harms and attempted harms, but the current design allowed the investigators to detect this effect only in the case of attempted harms, in which the neutral outcomes did not afford harsh moral judgments on their own.
Similarly, individuals with a lesion of the VMPC judge an action purely on its outcome and are unable to take into account the intent of that action.
Genetics
Moral intuitions may have genetic bases. A 2022 study conducted by scholars Michael Zakharin and Timothy C. Bates, and published by the European Journal of Personality, found that moral foundations have significant genetic bases. Another study, conducted by Smith and Hatemi, similarly found significant evidence in support of moral heritability by looking at and comparing the answers of moral dilemmas between twins.
Genetics play a role in influencing prosocial behaviors and moral decision-making, contributing to the development and expression of certain traits and behaviors related to morality. However, it is important to note that while genetics shape certain aspects of moral behavior, morality itself is a multifaceted concept that encompasses cultural, societal, and personal influences as well.
Politics
If morality is the answer to the question 'how ought we to live' at the individual level, politics can be seen as addressing the same question at the social level, though the political sphere raises additional problems and challenges. It is therefore unsurprising that evidence has been found of a relationship between attitudes in morality and politics. Moral foundations theory, authored by Jonathan Haidt and colleagues, has been used to study the differences between liberals and conservatives, in this regard. Haidt found that Americans who identified as liberals tended to value care and fairness higher than loyalty, respect and purity. Self-identified conservative Americans valued care and fairness less and the remaining three values more. Both groups gave care the highest over-all weighting, but conservatives valued fairness the lowest, whereas liberals valued purity the lowest. Haidt also hypothesizes that the origin of this division in the United States can be traced to geo-historical factors, with conservatism strongest in closely knit, ethnically homogeneous communities, in contrast to port-cities, where the cultural mix is greater, thus requiring more liberalism.
Group morality develops from shared concepts and beliefs and is often codified to regulate behavior within a culture or community. Various defined actions come to be called moral or immoral. Individuals who choose moral action are popularly held to possess "moral fiber", whereas those who indulge in immoral behavior may be labeled as socially degenerate. The continued existence of a group may depend on widespread conformity to codes of morality; an inability to adjust moral codes in response to new challenges is sometimes credited with the demise of a community (a positive example would be the function of Cistercian reform in reviving monasticism; a negative example would be the role of the Dowager Empress in the subjugation of China to European interests). Within nationalist movements, there has been some tendency to feel that a nation will not survive or prosper without acknowledging one common morality, regardless of its content.
Political morality is also relevant to the international behavior of national governments and to the support they receive from their host populations. The Sentience Institute, co-founded by Jacy Reese Anthis, analyzes the trajectory of moral progress in society via the framework of an expanding moral circle. Noam Chomsky states that
... if we adopt the principle of universality: if an action is right (or wrong) for others, it is right (or wrong) for us. Those who do not rise to the minimal moral level of applying to themselves the standards they apply to others—more stringent ones, in fact—plainly cannot be taken seriously when they speak of appropriateness of response; or of right and wrong, good and evil.
In fact, one of them, maybe the most, elementary of moral principles is that of universality, that is, If something's right for me, it's right for you; if it's wrong for you, it's wrong for me. Any moral code that is even worth looking at has that at its core somehow.
Religion
Religion and morality are not synonymous. Morality does not depend upon religion although for some this is "an almost automatic assumption". According to The Westminster Dictionary of Christian Ethics, religion and morality "are to be defined differently and have no definitional connections with each other. Conceptually and in principle, morality and a religious value system are two distinct kinds of value systems or action guides."
Positions
Within the wide range of moral traditions, religious value-systems co-exist with contemporary secular frameworks such as consequentialism, freethought, humanism, utilitarianism, and others. There are many types of religious value-systems. Modern monotheistic religions, such as Islam, Judaism, Christianity, and to a certain degree others such as Sikhism and Zoroastrianism, define right and wrong by the laws and rules set forth by their respective scriptures and as interpreted by religious leaders within each respective faith. Other religions, spanning the pantheistic to the nontheistic, tend to be less absolute. For example, within Buddhism, the intention of the individual and the circumstances are taken into account in the form of merit to determine whether an action is right or wrong. Barbara Stoler Miller points out a further disparity between the values of religious traditions, stating that in Hinduism, "practically, right and wrong are decided according to the categories of social rank, kinship, and stages of life. For modern Westerners, who have been raised on ideals of universality and egalitarianism, this relativity of values and obligations is the aspect of Hinduism most difficult to understand".
Religions provide different ways of dealing with moral dilemmas. For example, Hinduism lacks any absolute prohibition on killing, recognizing that it "may be inevitable and indeed necessary" in certain circumstances. Monotheistic traditions view certain acts—such as abortion or divorce—in more absolute terms. Religion is not always positively associated with morality. Philosopher David Hume stated that "the greatest crimes have been found, in many instances, to be compatible with a superstitious piety and devotion; Hence it is justly regarded as unsafe to draw any inference in favor of a man's morals, from the fervor or strictness of his religious exercises, even though he himself believe them sincere."
Religious value-systems can be used to justify acts that are contrary to general contemporary morality, such as massacres, misogyny and slavery. For example, Simon Blackburn states that "apologists for Hinduism defend or explain away its involvement with the caste system, and apologists for Islam defend or explain away its harsh penal code or its attitude to women and infidels". In regard to Christianity, he states that the "Bible can be read as giving us a carte blanche for harsh attitudes to children, the mentally handicapped, animals, the environment, the divorced, unbelievers, people with various sexual habits, and elderly women", and notes morally-suspect themes in the Bible's New Testament as well. Elizabeth Anderson likewise holds that "the Bible contains both good and evil teachings", and it is "morally inconsistent". Christian apologists respond to Blackburn's viewpoints by arguing that Jewish laws in the Hebrew Bible showed the evolution of moral standards towards protecting the vulnerable, imposing a death penalty on those pursuing slavery, and treating slaves as persons rather than property. Humanists like Paul Kurtz believe that we can identify moral values across cultures, even if we do not appeal to a supernatural or universalist understanding of principles – values including integrity, trustworthiness, benevolence, and fairness. These values can be resources for finding common ground between believers and nonbelievers.
Empirical analyses
Several studies have been conducted on the empirics of morality in various countries, and the overall relationship between faith and crime is unclear. A 2001 review of studies on this topic found "The existing evidence surrounding the effect of religion on crime is varied, contested, and inconclusive, and currently, no persuasive answer exists as to the empirical relationship between religion and crime." Phil Zuckerman's 2008 book, Society without God, based on studies conducted during 14 months in Scandinavia in 2005–2006, notes that Denmark and Sweden, "which are probably the least religious countries in the world, and possibly in the history of the world", enjoy "among the lowest violent crime rates in the world [and] the lowest levels of corruption in the world".
Dozens of studies have been conducted on this topic since the twentieth century. A 2005 study by Gregory S. Paul published in the Journal of Religion and Society stated that, "In general, higher rates of belief in and worship of a creator correlate with higher rates of homicide, juvenile and early adult mortality, STD infection rates, teen pregnancy, and abortion in the prosperous democracies," and "In all secular developing democracies a centuries long-term trend has seen homicide rates drop to historical lows" with the exceptions being the United States (with a high religiosity level) and "theistic" Portugal. In a response, Gary Jensen builds on and refines Paul's study. He concludes that a "complex relationship" exists between religiosity and homicide "with some dimensions of religiosity encouraging homicide and other dimensions discouraging it". In April 2012, the results of a study testing subjects' pro-social sentiments were published in Social Psychological and Personality Science; non-religious people had higher scores, showing that they were more motivated by their own compassion to perform pro-social behaviors. Religious people were found to be less motivated by compassion to be charitable than by an inner sense of moral obligation.
See also
Ethics
Integrity
Applied ethics
Appeal to tradition
Buddhist ethics
Christian ethics
De
Emotional intelligence
Ethical dilemma
Good and evil
Ideology
Index of ethics articles
Islamic ethics
Jewish ethics
Moral agency
Moral character
Moral conviction
Moral intelligence
Moral outsourcing
Moral panic
Moral skepticism
Outline of ethics
Value theory
Worldview
Notes
a. Studies on divorce in the United States done by the Barna Group suggested that atheists and agnostics have lower divorce rates than faith groups on average (though some faith groups had lower rates still). The study notes that fewer atheists and agnostics enter into marriage relative to faith-based individuals.
b. Some studies appear to show positive links in the relationship between religiosity and moral behavior. Modern research in criminology also suggests an inverse relationship between religion and crime, with some studies establishing this connection. A meta-analysis of 60 studies on religion and crime concluded, "religious behaviors and beliefs exert a moderate deterrent effect on individuals' criminal behavior".
c. Zuckerman identifies that Scandinavians have "relatively high rates of petty crime and burglary", but "their overall rates of violent crime—such as murder, aggravated assault, and rape—are among the lowest on earth" (Zuckerman 2008, pp. 5–6).
d. The authors also state that "A few hundred years ago rates of homicide were astronomical in Christian Europe and the American colonies," and "the least theistic secular developing democracies such as Japan, France, and Scandinavia have been most successful in these regards." They argue for a positive correlation between the degree of public religiosity in a society and certain measures of dysfunction; however, an analysis published later in the same journal argues that a number of methodological problems undermine any findings or conclusions in the research.
e. Blackburn provides examples such as the phrase in Exodus 22:18 that has "helped to burn alive tens or hundreds of thousands of women in Europe and America": "Thou shalt not suffer a witch to live," and notes that the Old Testament God apparently has "no problems with a slave-owning society", considers birth control a crime punishable by death, and "is keen on child abuse". Others interpret these passages differently, arguing for example that Jewish laws show the evolution of moral standards in society: that Jews actually threatened those who pursued forced slavery with the death penalty, held that slaves were persons instead of property, and protected them in several ways.
References
Further reading
Richard Dawkins, "The roots of morality: why are we good?", in The God Delusion, Black Swan, 2007.
Lunn, Arnold, and Garth Lean (1964). The New Morality. London: Blandford Press.
John Newton, Complete Conduct Principles for the 21st Century, 2000.
containing articles by Paterson Brown:
"Religious Morality", (from Mind, 1963).
"Religious Morality: a Reply to Flew and Campbell", (from Mind, 1964).
"God and the Good", (from Religious Studies, 1967).
Ashley Welch, "Virtuous behaviors sanction later sins: people are quick to treat themselves after a good deed or healthy act" March 4, 2012.
Roberto Andorno, "Do our moral judgements need to be guided by principles?" Cambridge Quarterly of Healthcare Ethics, 2012, 21(4), 457–65.
External links
The Definition of Morality, Stanford Encyclopedia of Philosophy
Boston College's Morality Lab
Morality and Judaism, chabad.org
"The Moral Instinct" by Steven Pinker, The New York Times, 13 January 2008
Concepts in ethics
Social concepts
Definitions of philosophy

Definitions of philosophy aim at determining what all forms of philosophy have in common and how to distinguish philosophy from other disciplines. Many different definitions have been proposed but there is very little agreement on which is the right one. Some general characteristics of philosophy are widely accepted, for example, that it is a form of rational inquiry that is systematic, critical, and tends to reflect on its own methods. But such characteristics are usually too vague to give a proper definition of philosophy. Many of the more concrete definitions are very controversial, often because they are revisionary in that they deny the label philosophy to various subdisciplines for which it is normally used. Such definitions are usually only accepted by philosophers belonging to a specific philosophical movement. One reason for these difficulties is that the meaning of the term "philosophy" has changed throughout history: it used to include the sciences as its subdisciplines, which are seen as distinct disciplines in the modern discourse. But even in its contemporary usage, it is still a wide term spanning many different subfields.
An important distinction among approaches to defining philosophy is between deflationism and essentialism. Deflationist approaches see it as an empty blanket term, while essentialistic approaches hold that there is a certain set of characteristic features shared by all parts of philosophy. Between these two extremes, it has been argued that these parts are related to each other by family resemblance even though they do not all share the same characteristic features. Some approaches try to define philosophy based on its method by emphasizing its use of pure reasoning instead of empirical evidence. Others focus on the wideness of its topic, either in the sense that it includes almost every field or based on the idea that it is concerned with the world as a whole or the big questions. These two approaches may also be combined to give a more precise definition based both on method and on topic.
Many definitions of philosophy concentrate on its close relation to science. Some see it as a proper science itself, focusing, for example, on the essences of things and not on empirical matters of fact, in contrast to most other sciences, or on its level of abstractness by talking about very wide-ranging empirical patterns instead of particular observations. But since philosophy seems to lack the progress found in regular sciences, various theorists have opted for a weaker definition by seeing philosophy as an immature science that has not yet found its sure footing. This position is able to explain both the lack of progress and the fact that various sciences used to belong to philosophy, while they were still in their provisional stages. It has the disadvantage of degrading philosophical practice in relation to the sciences.
Other approaches see philosophy more in contrast to the sciences as concerned mainly with meaning, understanding, or the clarification of language. This can take the form of analyzing language and how it relates to the world, of finding the necessary and sufficient conditions for the application of technical terms, of identifying what pre-ontological understanding of the world we already have and which a priori conditions of possibility govern all experience, or of a form of therapy that tries to dispel illusions due to the confusing structure of natural language (the therapeutic approach, e.g. quietism). An outlook on philosophy prevalent in the ancient discourse sees it as the love of wisdom expressed in the spiritual practice of developing one's reasoning ability in order to lead a better life. A closely related approach holds that the articulation of worldviews is the principal task of philosophy. Other conceptions emphasize the reflective nature of philosophy, for example, as thinking about thinking or as an openness to questioning any presupposition.
General characteristics and sources of disagreement
The problem of defining philosophy concerns the question of what all forms of philosophy have in common, i.e. how philosophy differs from non-philosophy or other disciplines, such as the empirical sciences or fine art. One difficulty is due to the fact that the meaning of the term "philosophy" has changed considerably throughout history: it was used in a much wider sense to refer to any form of rational inquiry before the modern age. In this sense, it included many of the individual sciences and mathematics, which are not seen as part of philosophy today. For example, Isaac Newton's Philosophiæ Naturalis Principia Mathematica formulating the laws of classical mechanics carries the term in its title. Modern definitions of philosophy, as discussed in this article, tend to focus on how the term is used today, i.e. on a more narrow sense. Some basic characterizations of philosophy are widely accepted, such as that it is a critical and mostly systematic study of a great range of areas. Other such characterizations include that it seeks to uncover fundamental truths in these areas using a reasoned approach while also reflecting on its own methods and standards of evidence. Such characterizations succeed at characterizing many or all parts of philosophy, which is a wide discipline spanning across many fields, as reflected in its sub-disciplines termed "philosophy of...", like the philosophy of science, of mind, of law, or of religion. One difficulty for this type of approach is that it may include non-philosophical disciplines in its definition instead of distinguishing philosophy from them.
To overcome these difficulties, various more specific definitions of philosophy have been proposed. Most of them are controversial. In many cases, they are only accepted by philosophers belonging to one philosophical movement but not by others. The more general conceptions are sometimes referred to as descriptive conceptions in contrast to the more specific prescriptive conceptions. Descriptive conceptions try to give an account of how the term "philosophy" is actually used or what philosophers in the widest sense do. Prescriptive conceptions, on the other hand, aim at clarifying what philosophy ideally is or what it ought to be, even if what philosophers actually do often falls short of this ideal. This issue is particularly controversial since different philosophical movements often diverge widely in what they consider to be good philosophy. Prescriptive conceptions are often revisionistic in the sense that, if they were correct, many presumed parts of philosophy, past and present, would not deserve the title "philosophy".
Some definitions of philosophy focus mainly on what the activity of doing philosophy is like, such as striving towards knowledge. Others concentrate more on the theories and systems arrived at this way. In this sense, the terms "philosophy" and "philosophical" can apply both to a thought process, to the results of this activity in the form of theories, or even to contemplative forms of life reflecting such theories. Another common approach is to define philosophy in relation to the task or goal it seeks to accomplish such as answering certain types of questions or arriving at a certain type of knowledge.
The difficulty in defining "philosophy" is also reflected in the fact that introductions to philosophy often do not start with a precise definition but introduce it instead by providing an overview of its many branches and subfields, such as epistemology, ethics, logic, and metaphysics. The discipline known as metaphilosophy has as one of its main goals to clarify the nature of philosophy. Outside the academic context, the term "philosophy" is sometimes used in an unspecific sense referring to general ideas or guidelines, such as the business philosophy of a company, the leadership philosophy of an entrepreneur, or the teaching philosophy of a schoolmaster.
Deflationism, essentialism, and family resemblance
An important distinction among definitions of philosophy is between deflationism and essentialism. The deflationist approach holds that philosophy is an empty blanket term. It is used for convenience by deans and librarians to group various forms of inquiry together. This approach is usually motivated by the enduring difficulties in giving a satisfying definition. According to this view, philosophy does not have a precise essence shared by all its manifestations. One difficulty with the deflationist approach is that it is not helpful for solving disagreements on whether a certain new theory or activity qualifies as philosophy since this would seem to be just a matter of convention. Another is that it implies that the term "philosophy" is rather empty or meaningless.
This approach is opposed by essentialists, who contend that a set of features constitutes the essence of philosophy and characterizes all and only its parts. Many of the definitions based on subject matter, method, or philosophy's relation to science or to meaning and understanding are essentialist conceptions of philosophy. They are controversial since they often exclude various theories and activities usually treated as part of philosophy.
These difficulties with the deflationist and the essentialist approach have moved some philosophers towards a middle ground, according to which the different parts of philosophy are characterized by family resemblances. This means that the various parts of philosophy resemble each other by sharing several features. But different parts share different features with each other, i.e. they do not all share the same features. This approach can explain both that the term "philosophy" has some substance to it, i.e. that it is not just based on an empty convention, and that some parts of philosophy may differ a lot from each other, for example, that some parts are very similar to mathematics while others almost belong to the natural sciences and psychology. This approach has the disadvantage that it leaves the definition of philosophy vague, thereby making it difficult for the non-paradigmatic cases to determine whether they belong to philosophy or not, i.e. that there is no clear-cut distinction.
Based on method and subject matter
Two important aspects for distinguishing philosophy from other disciplines have been its topic or domain of inquiry and its method. The problem with these approaches is usually that they are either too wide, i.e. they include various other disciplines, like empirical sciences or fine arts, in their definition, or too narrow by excluding various parts of philosophy. Some have argued that its method focuses on a priori knowledge, i.e. that philosophy does not depend on empirical observations and experiments. Instead, such an approach bases philosophical justification primarily on pure reasoning, similar to how mathematical theory-making is based on mathematical proofs and in contrast to the scientific method based on empirical evidence. This way of doing philosophy is often referred to as armchair philosophy or armchair theorizing since it can be done from the comfort of one's armchair without any field work. But this characterization by itself is not sufficient as a definition, since it applies equally well to other fields, such as mathematics. Giving a more precise account of the method, for example, as conceptual analysis or phenomenological inquiry, on the other hand, results in a too narrow definition that excludes various parts of philosophy.
Definitions focusing on the domain of inquiry or topic of philosophy often emphasize its wide scope in contrast to the individual sciences. According to Wilfrid Sellars, for example, philosophy aims "to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term". Similar definitions focus on how philosophy is concerned with the whole of the universe or at least with the big questions regarding life and the world. Such attempts usually result in a definition that is too broad and may include both some natural sciences and some forms of fine art and literature in it. On the other hand, they may also be too narrow, since some philosophical topics concern very specific questions that do not directly deal with the big questions or the world as a whole.
Because of these difficulties, philosophers have often tried to combine methodological and topical characterizations in their definitions. This can happen, for example, by emphasizing the wideness of its domain of inquiry, to distinguish it from the other individual sciences, together with its rational method, to distinguish it from fine art and literature. Such approaches are usually more successful at determining the right extension of the term but they also do not fully solve this problem.
Based on relation to science
Various definitions of philosophy emphasize its close relation to science, either by seeing philosophy itself as a science or by characterizing the role it plays for science. The plausibility of such definitions is affected by how widely the term "science" is understood. If it refers to the natural sciences, such definitions are usually quite controversial. But if science is understood in a very wide sense as a form of rational inquiry that includes both the formal sciences and the humanities, such characterizations are less controversial but also less specific. This wide sense is how the term "philosophy" was traditionally used to cover various fields of inquiry that are today considered distinct disciplines. But this does not reflect its contemporary usage. Many science-based definitions of philosophy face the difficulty of explaining why philosophy has historically not shown the same level of progress as the sciences. Some reject this claim by emphasizing that philosophy has significantly progressed, but in a different and less obvious way. Others allow that this type of progress is not found in philosophy and try to find other explanations why it should still be considered a science.
As a proper science
The strongest relation to science is posited by definitions that see philosophy itself as a science. One such conception of philosophy is found within the phenomenological movement, which sees philosophy as a rigorous science. On this view, philosophy studies the structures of consciousness, more specifically, the essences that show themselves in consciousness and their relations to each other, independent of whether they have instances in the external world. It contrasts with other sciences in that they do not reflect on the essences themselves but research whether and in which ways these essences are manifested in the world. This position was already anticipated by Arthur Schopenhauer, who holds that philosophy is only interested in the nature of what there is but not in the causal relations explaining why it is there or what will become of it. But this science-based definition of philosophy found in phenomenology has come under attack on various points. On the one hand, it does not seem to be as rigorously scientific as its proponents proclaim. This is reflected in the fact that even within the phenomenological movement, there are still various fundamental disagreements that the phenomenological method has not been able to resolve, suggesting that philosophy has not yet found a solid epistemological footing. On the other hand, different forms of philosophy study various other topics besides essences and the relations between them.
Another conception of philosophy as a science is due to Willard Van Orman Quine. His outlook is based on the idea that there are no analytic propositions, i.e. that any claim may be revised based on new experiences. On this view, both philosophy and mathematics are empirical sciences. They differ from other sciences in that they are more abstract by being concerned with wide-reaching empirical patterns instead of particular empirical observations. But this distance to individual observations does not mean that their claims are non-empirical, according to Quine. A similar outlook in the contemporary discourse is sometimes found in experimental philosophers, who reject the exclusive armchair approach and try to base their theories on experiments.
Seeing philosophy as a proper science is often paired with the claim that philosophy has just recently reached this status, for example, due to the discovery of a new philosophical methodology. Such a view can explain that philosophy is a science despite not having made much progress: because it has had much less time in comparison to the other sciences.
As an immature science
But a more common approach is to see philosophy not as a fully developed science on its own but as an immature or preliminary science. Georg Simmel, for example, sees it as a provisional science studying appearances. On this view, a field of inquiry belongs to philosophy until it has developed sufficiently to provide exact knowledge of the real elements underlying these appearances. Karl Jaspers gives a similar characterization by emphasizing the deep disagreements within philosophy in contrast to the sciences, which have achieved the status of generally accepted knowledge. This is often connected to the idea that philosophy does not have a clearly demarcated domain of inquiry, in contrast to the individual sciences: the demarcation only happens once a philosophical subdiscipline has reached its full maturity.
This approach has the advantage of explaining both the lack of progress in philosophy and the fact that many sciences used to be part of philosophy before they matured enough to constitute fully developed sciences. But the parts that still belong to philosophy have so far failed to reach a sufficient consensus on their fundamental theories and methods. A philosophical discipline ceases to be philosophy and becomes a science once definite knowledge of its topic is possible. In this sense, philosophy is the midwife of the sciences. Philosophy itself makes no progress because the newly created science takes all the credit. On such a view, it is even conceivable that philosophy ceases to exist at some point once all its sub-disciplines have been turned into sciences. An important disadvantage of this view is that it has difficulty in accounting for the seriousness and the importance of the achievements of philosophers, including the ones affecting the sciences. The reason for this is that labeling philosophy as an immature science implies that philosophers are unable to go about their research in the proper manner. Another disadvantage of this conception is that the closeness to science does not fit equally well for all parts of philosophy, especially in relation to moral and political philosophy. Some even hold that philosophy as a whole may never outgrow its immature status since humans lack the cognitive faculties to give answers based on solid evidence to the philosophical questions they are considering. If this view were true, it would have the serious consequence that doing philosophy would be downright pointless.
Based on meaning, understanding, and clarification
Many definitions of philosophy see as its main task the creation of meaning and understanding or the clarification of concepts. In this sense, philosophy is often contrasted with the sciences in the sense that it is not so much about what the actual world is like but about how we experience it or how we think and talk about it. This may be expressed by stating that philosophy is "the pursuit not of knowledge but of understanding". In some cases, this takes the form of making various practices and assumptions explicit that have been implicit before, similar to how a grammar makes the rules of a language explicit without inventing them. This is a form of reflective, second-order understanding that can be applied to various fields, not just the sciences.
A conception of philosophy based on clarification and meaning is defended by logical positivists, who saw the "clarification of problems and assertions" as the main task of philosophy. According to Moritz Schlick, for example, philosophy is unlike the sciences in that it does not aim at establishing a system of true propositions. Instead, it is the activity of finding meaning. But this activity is nonetheless quite relevant for the sciences since familiarity with the meaning of a proposition is important for assessing whether it is true. A closely related definition is given by Rudolf Carnap, who sees philosophy as the logic of science, meaning that it is concerned with analyzing scientific concepts and theories. From the perspective of logical atomism, this clarification takes the form of decomposing propositions into basic elements, which are then correlated to the entities found in the world. On this approach, philosophy has both a destructive and a constructive side. Its destructive side focuses on eliminating meaningless statements that are neither verifiable by experience nor true by definition. This position is often connected to the idea that some sentences, such as metaphysical, ethical, or aesthetical sentences, lack a meaning since they cannot be correlated to elements in the world that determine whether they are true or false. In this sense, philosophy can be understood as a critique of language that exposes senseless expressions. Its constructive side, on the other hand, concerns epistemology and philosophy of science, often with the goal of finding a unified science.
Other conceptions of philosophy agree that it has to do with finding meaning and clarifying concepts but focus on a wider domain beyond the sciences. For example, a conception commonly found in the analytic tradition equates philosophy with conceptual analysis. In this sense, philosophy has as its main task to clarify the meanings of the terms we use, often in the form of searching for the necessary and sufficient conditions under which a concept applies to something. Such an analysis is not interested in whether any actual entity falls under this concept. For example, a physicist may study what causes a certain event to happen while a philosopher may study what we mean when using the term "causation". This analysis may be applied to scientific terms but is not limited to them.
From the perspective of ordinary language philosophy, philosophy has as its main enterprise the analysis of natural language. According to Ludwig Wittgenstein, for example, philosophy is not a theory but a practice that takes the form of linguistic therapy. This therapy is important because ordinary language is structured in confusing ways that make us susceptible to all kinds of misunderstandings. It is the task of the philosopher to uncover the root causes of such illusions. This often takes the form of exposing how traditional philosophical "problems" are only pseudo-problems, thereby dissolving them rather than resolving them. So on a theoretical level, philosophy leaves everything as it is without trying to provide new insights, explanations, or deductions.
The focus on understanding is also reflected in the transcendental traditions and in some strands of phenomenology, where the task of philosophy is identified with making comprehensible and articulating the understanding we already have of the world, sometimes referred to as pre-understanding or pre-ontological understanding. The need for such an inquiry is expressed in Saint Augustine's remark concerning the nature of time: "I know well enough what it is, provided that nobody asks me; but if I am asked what it is and try to explain, I am baffled". This type of understanding is prior to experience in the sense that experience of a particular thing is not possible without some form of pre-understanding of this thing. In this sense, philosophy is a transcendental inquiry into the a priori conditions of possibility underlying both ordinary and scientific experience. But characterizing philosophy this way seems to exclude many of its sub-disciplines, like applied ethics.
Others
Various other definitions of philosophy have been proposed. Some focus on its role in helping the practitioner lead a good life: they see philosophy as the spiritual practice of developing one's reasoning ability through which some ideal of health is to be realized. Such an outlook on philosophy was already explicitly articulated in Stoicism and has also been adopted by some contemporary philosophers. A closely related conception sees philosophy as a way of life. This is based on a conception of what it means to lead a good life that is centered on increasing one's wisdom through various types of spiritual exercises or on the development and usage of reason. Such an outlook can already be discerned in ancient Greek philosophy, where philosophy is often seen as the love of wisdom. According to this characterization, philosophy differs from wisdom itself since it implies more the continued struggle to attain wisdom, i.e. being on the way towards wisdom.
A closely related approach sees the principal task of philosophy as the development and articulation of worldviews. Worldviews are comprehensive representations of the world and our place in it. They go beyond science by articulating not just theoretical facts concerning the world but also include practical and ethical components, both on a general and a specific level. This way, worldviews articulate what matters in life and can guide people in living their lives accordingly. On the worldview account of philosophy, it is the task of philosophers to articulate such global visions both of how things on the grand scale hang together and which practical stance we should take towards them.
Other conceptions of philosophy focus on its reflective and metacognitive aspects. One way to emphasize the reflective nature of philosophy is to define it as thinking about thinking. Another characterization of philosophy sometimes found in the literature is that, at least in principle, it does not take any facts for granted and allows any presupposition to be questioned, including its own methods. This is reflected in the fact that philosophy has no solid foundations to build on since whatever foundations one philosopher accepts may be questioned by another. Socrates identified philosophy with the awareness of one's ignorance. For Immanuel Kant, philosophical inquiry is characterized as "knowledge gained by reason from concepts". According to Georg Wilhelm Friedrich Hegel, philosophy is the science of reason.
References
Analytic philosophy
Definitions
Concepts in metaphilosophy
Phenomenology
Postmodern philosophy
Postmodern philosophy is a philosophical movement that arose in the second half of the 20th century as a critical response to assumptions allegedly present in modernist philosophical ideas regarding culture, identity, history, or language that were developed during the 18th-century Age of Enlightenment. Postmodernist thinkers developed concepts like différance, repetition, trace, and hyperreality to subvert "grand narratives", univocity of being, and epistemic certainty. Postmodern philosophy questions the importance of power relationships, personalization, and discourse in the "construction" of truth and world views. Many postmodernists appear to deny that an objective reality exists, and appear to deny that there are objective moral values.
Jean-François Lyotard defined philosophical postmodernism in The Postmodern Condition, writing "Simplifying to the extreme, I define postmodern as incredulity towards metanarratives...." where what he means by metanarrative is something like a unified, complete, universal, and epistemically certain story about everything that is. Postmodernists reject metanarratives because they reject the conceptualization of truth that metanarratives presuppose. Postmodernist philosophers in general argue that truth is always contingent on historical and social context rather than being absolute and universal and that truth is always partial and "at issue" rather than being complete and certain.
Postmodern philosophy is often particularly skeptical about simple binary oppositions characteristic of structuralism, emphasizing the problem of the philosopher cleanly distinguishing knowledge from ignorance, social progress from reversion, dominance from submission, good from bad, and presence from absence.
Subjects
On Literature
Postmodern philosophy has had strong relations with the substantial literature of critical theory, although some critical theorists such as Jürgen Habermas have opposed postmodern philosophy.
On the Enlightenment
Many postmodern claims are critical of certain 18th-century Enlightenment values. Some postmodernists tolerate multiple conceptions of morality, even if they disagree with them subjectively. Postmodern writings often focus on deconstructing the role that power and ideology play in shaping discourse and belief. Postmodern philosophy shares ontological similarities with classical skeptical and relativistic belief systems.
On Truth and Objectivity
The Routledge Encyclopedia of Philosophy states that "The assumption that there is no common denominator in 'nature' or 'truth' ... that guarantees the possibility of neutral or objective thought" is a key assumption of postmodernism. The Stanford Encyclopedia of Philosophy describes it as "a set of critical, strategic and rhetorical practices employing concepts such as difference, repetition, the trace, the simulacrum, and hyperreality to destabilize other concepts such as presence, identity, historical progress, epistemic certainty, and the univocity of meaning." The National Research Council has characterized the belief that "social science research can never generate objective or trustworthy knowledge" as an example of a postmodernist belief. Jean-François Lyotard's seminal 1979 The Postmodern Condition stated that its hypotheses "should not be accorded predictive value in relation to reality, but strategic value in relation to the questions raised". Lyotard's statement in 1984 that "I define postmodern as incredulity toward meta-narratives" extends to incredulity toward science. Jacques Derrida, who is generally identified as a postmodernist, stated that "every referent, all reality has the structure of a differential trace". There are strong similarities with post-modernism in the work of Paul Feyerabend; Feyerabend held that modern science is no more justified than witchcraft, and has denounced the "tyranny" of "abstract concepts such as 'truth', 'reality', or 'objectivity', which narrow people's vision and ways of being in the world". Defenders of postmodernism state that many descriptions of postmodernism exaggerate its antipathy to science; for example, Feyerabend denied that he was "anti-science", accepted that some scientific theories are superior to other theories (even if science itself is not superior to other modes of inquiry), and attempted conventional medical treatments during his fight against cancer.
Influences
Postmodern philosophy was greatly influenced by the writings of Søren Kierkegaard and Friedrich Nietzsche in the 19th century and other early-to-mid 20th-century philosophers, including the phenomenologist Martin Heidegger, the psychoanalyst Jacques Lacan, cultural critic Roland Barthes, theorist Georges Bataille, and the later work of Ludwig Wittgenstein.
Postmodern philosophy also drew from the world of the arts and architecture, particularly Marcel Duchamp, John Cage, and artists who practiced collage.
Postmodern Philosophers
Michel Foucault
Michel Foucault is often cited as an early postmodernist although he personally rejected that label. Following Nietzsche, Foucault argued that knowledge is produced through the operations of power, and changes fundamentally in different historical periods.
Jean Baudrillard
Baudrillard, known for his simulation theory, argued that the individual's experience and perception of reality derives entirely from media-propagated ideals and images. The real and the fantastic become indistinguishable, leading to the emergence of a widespread simulation of reality.
Jean-François Lyotard
The writings of Lyotard were largely concerned with the role of narrative in human culture, and particularly how that role has changed as we have left modernity and entered a "postindustrial" or postmodern condition. He argued that modern philosophies legitimized their truth-claims not (as they themselves claimed) on logical or empirical grounds, but rather on the grounds of accepted stories (or "metanarratives") about knowledge and the world—comparing these with Wittgenstein's concept of language-games. He further argued that in our postmodern condition, these metanarratives no longer work to legitimize truth-claims. He suggested that in the wake of the collapse of modern metanarratives, people are developing a new "language-game"—one that does not make claims to absolute truth but rather celebrates a world of ever-changing relationships (among people and between people and the world).
Jacques Derrida
Derrida, the father of deconstruction, practiced philosophy as a form of textual criticism. He criticized Western philosophy as privileging the concept of presence and logos, as opposed to absence and markings or writings.
Richard Rorty
In the United States, a well-known pragmatist and self-proclaimed postmodernist was Richard Rorty. An analytic philosopher, Rorty believed that combining Willard Van Orman Quine's criticism of the analytic-synthetic distinction with Wilfrid Sellars's critique of the "Myth of the Given" allowed for an abandonment of the view of thought or language as a mirror of a reality or an external world. Further, drawing upon Donald Davidson's criticism of the dualism between conceptual scheme and empirical content, he challenges the sense of questioning whether our particular concepts are related to the world in an appropriate way, whether we can justify our ways of describing the world as compared with other ways. He argued that truth was not about getting it right or representing reality, but was part of a social practice and language was what served our purposes in a particular time; ancient languages are sometimes untranslatable into modern ones because they possess a different vocabulary and are no longer useful today. Donald Davidson is not usually considered a postmodernist, although he and Rorty have both acknowledged that there are few differences between their philosophies.
Douglas Kellner
Douglas Kellner insists that the "assumptions and procedures of modern theory" must be forgotten. Kellner analyzes the terms of this theory through real-life experiences and examples, and makes science and technology studies a major part of his analysis, urging that the theory is incomplete without them. The scale is larger than postmodernism alone; it must be interpreted through cultural studies, where science and technology studies play a large role. Kellner takes the September 11 attacks on the United States as the catalyst for his explanation, examining their repercussions and questioning whether the attacks can be understood only through a limited form of postmodern theory, given their level of irony. His conclusion is simple: postmodernism, as most use it today, will decide what experiences and signs in one's reality will be one's reality as they know it.
Criticism
Some criticism responds to postmodernist skepticism towards objective reality and claims that truth and morality are relative, including the argument that this relativism is self-contradictory. In part in reference to postmodernism, conservative English philosopher Roger Scruton wrote, "A writer who says that there are no truths, or that all truth is 'merely relative,' is asking you not to believe him. So don't." In 2014, the philosophers Theodore Schick and Lewis Vaughn wrote: "the statement that 'No unrestricted universal generalizations are true' is itself an unrestricted universal generalization. So if relativism in any of its forms is true, it's false." Some responses to postmodernist relativism argue that, contrary to its proponents' usual intentions, it does not necessarily benefit the political left. For example, the historian Richard J. Evans argued that if relativism rejects truth, it can legitimize far-right pseudohistory such as Holocaust denial.
Further lines of criticism are that postmodernist discourse is characterized by obscurantism, that the term itself is vaguely defined, and that postmodernism lacks a clear epistemology. The linguist and philosopher Noam Chomsky accused postmodernist intellectuals of failing to meaningfully answer questions such as "what are the principles of their theories, on what evidence are they based, what do they explain that wasn't already obvious, etc.?"
The French psychotherapist and philosopher Félix Guattari rejected postmodernism's theoretical assumptions, arguing that the structuralist and postmodernist visions of the world were not flexible enough to seek explanations in psychological, social, and environmental domains at the same time. In an interview with Truls Lie, Jean Baudrillard noted: "[Transmodernism, etc.] are better terms than 'postmodernism'. It is not about modernity; it is about every system that has developed its mode of expression to the extent that it surpasses itself and its own logic. This is what I am trying to analyze. ... There is no longer any ontologically secret substance. I perceive this to be nihilism rather than postmodernism."
See also
Hyperreality
Natural philosophy
Ontological pluralism
Physical ontology
Postmaterialism
Postmodern art
Postmodernism
Postmodernity
Notes
Further reading
Charles Arthur Willard, Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy, University of Chicago Press, 1996.
John Deely, "Quid sit Postmodernismus?", in Roman Ciapalo (ed.), Postmodernism and Christian Philosophy, 68–96, Washington, D.C.: Catholic University of America Press, 1997.
External links
Modern Philosophical Discussions (archived 14 July 2011)
Philosophical schools and traditions | 0.793113 | 0.993891 | 0.788268 |
Pleonasm
Pleonasm is redundancy in linguistic expression, such as in "black darkness," "burning fire," "the man he said," or "vibrating with motion." It is a manifestation of tautology by traditional rhetorical criteria. Pleonasm may also be used for emphasis, or because the phrase has become established in a certain form. Tautology and pleonasm are not consistently differentiated in literature.
Usage
Most often, pleonasm is understood to mean a word or phrase which is useless, clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use of idiom. It can aid in achieving a specific linguistic effect, be it social, poetic or literary. Pleonasm sometimes serves the same function as rhetorical repetition—it can be used to reinforce an idea, contention or question, rendering writing clearer and easier to understand. Pleonasm can serve as a redundancy check; if a word is unknown, misunderstood, misheard, or if the medium of communication is poor—a static-filled radio transmission or sloppy handwriting—pleonastic phrases can help ensure that the meaning is communicated even if some of the words are lost.
Idiomatic expressions
Some pleonastic phrases are part of a language's idiom, like tuna fish, chain mail and safe haven in American English. They are so common that their use is unremarkable for native speakers, although in many cases the redundancy can be dropped with no loss of meaning.
When expressing possibility, English speakers often use potentially pleonastic expressions such as It might be possible or perhaps it's possible, where both terms (verb might or adverb perhaps along with the adjective possible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. Others, however, use this expression only to indicate a distinction between ontological possibility and epistemic possibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (in logical terms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibility per se is far less widespread among speakers of most other languages (except in Spanish; see examples); rather, almost all speakers of those languages use one term in a single expression:
French: or .
Portuguese: , lit. "What is it that", a more emphatic way of saying "what is"; usually suffices.
Romanian: or .
Typical Spanish pleonasms
Voy a subir arriba – I am going to go up upstairs, "arriba" not being necessary.
Entrar adentro – enter inside, "adentro" not being necessary.
Turkish has many pleonastic constructs because certain verbs necessitate objects:
Yemek yemek – to eat food.
Yazı yazmak – to write writing.
Dışarı çıkmak – to exit outside.
İçeri girmek – to enter inside.
Oyun oynamak – to play a game.
In a satellite-framed language like English, verb phrases containing particles that denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into").
Professional and scholarly use
Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void", "terms and conditions", "each and every" are legal doublets that are part of legally operative language that is often drafted into legal documents. A classic example of such usage was that by the Lord Chancellor at the time (1864), Lord Westbury, in the English case of Gorely, when he described a phrase in an Act as "redundant and pleonastic". This type of usage may be favored in certain contexts. However, it may also be disfavored when used gratuitously to portray false erudition, obfuscate, or otherwise introduce verbiage, especially in disciplines where imprecision may introduce ambiguities (such as the natural sciences).
Of the aforementioned phrases, "terms and conditions" may not be pleonastic in some legal systems, as they refer not to a set of provisions forming part of a contract, but rather to the specific terms conditioning the effect of the contract or a contractual provision to a future event. In these cases, terms and conditions imply respectively the certainty or uncertainty of said event (e.g., in Brazilian law, a testament has as its initial term for coming into force the death of the testator, while a health insurance policy has as its condition the insured suffering one of a set of specified injuries from one of a set of specified causes).
Stylistic preference
In addition, pleonasms can serve purposes external to meaning. For example, a speaker who is too terse is often interpreted as lacking ease or grace, because, in oral and sign language, sentences are spontaneously created without the benefit of editing. The restriction on the ability to plan often creates many redundancies. In written language, removing words that are not strictly necessary sometimes makes writing seem stilted or awkward, especially if the words are cut from an idiomatic expression.
On the other hand, as is the case with any literary or rhetorical effect, excessive use of pleonasm weakens writing and speech; words distract from the content. Writers who want to obfuscate a certain thought may obscure their meaning with excess verbiage. William Strunk Jr. advocated concision in The Elements of Style (1918).
Literary uses
Examples from Baroque, Mannerist, and Victorian literature provide a counterpoint to Strunk's advocacy of concise writing:
"This was the most unkindest cut of all." — William Shakespeare, Julius Caesar (Act 3, Scene 2, 183)
"I will be brief: your noble son is mad:/Mad call I it; for, to define true madness,/What is't but to be nothing else but mad?" — Hamlet (Act 2, Scene 2)
"Let me tell you this, when social workers offer you, free, gratis and for nothing, something to hinder you from swooning, which with them is an obsession, it is useless to recoil ..." — Samuel Beckett, Molloy
Types
There are various kinds of pleonasm, including bilingual tautological expressions, syntactic pleonasm, semantic pleonasm and morphological pleonasm:
Bilingual tautological expressions
A bilingual tautological expression is a phrase that combines words that mean the same thing in two different languages. An example of a bilingual tautological expression is the Yiddish expression mayim akhroynem vaser. It literally means "water last water" and refers to "water for washing the hands after meal, grace water". Its first element, mayim, derives from the Hebrew ['majim] "water". Its second element, vaser, derives from the Middle High German word "water".
According to Ghil'ad Zuckermann, Yiddish abounds with both bilingual tautological compounds and bilingual tautological first names.
The following are examples of bilingual tautological compounds in Yiddish:
fíntster khóyshekh "very dark", literally "dark darkness", traceable back to the Middle High German word "dark" and the Hebrew word חושך ħōshekh "darkness".
khameréyzļ "womanizer", literally "donkey-donkey", traceable back to the Hebrew word חמור [ħă'mōr] "donkey" and the Middle High German word "donkey".
The following are examples of bilingual tautological first names (anthroponyms) in Yiddish:
Dov-Ber, literally "bear-bear", traceable back to the Hebrew word dov "bear" and the Middle High German word "bear".
Tsvi-Hirsh, literally "deer-deer", traceable back to the Hebrew word tsvi "deer" and the Middle High German word "deer".
Ze'ev-Volf, literally "wolf-wolf", traceable back to the Hebrew word ze'ev "wolf" and the Middle High German word "wolf".
Arye-Leyb, literally "lion-lion", traceable back to the Hebrew word arye "lion" and the Middle High German word "lion".
Examples occurring in English-language contexts include:
River Avon, literally "River River", from Welsh.
the Sahara Desert, literally "the The Desert Desert", from Arabic.
the La Brea Tar Pits, literally "the The Tar Tar Pits", from Spanish.
the Los Angeles Angels, literally "the The Angels Angels", from Spanish.
the hoi polloi, literally "the the many", from Greek.
Carmarthen Castle may actually have "castle" in it three times: in its Welsh form, Castell Caerfyrddin, "Caer" means fort, while "fyrddin" is thought to be derived from the Latin Moridunum ("sea fort"), making Carmarthen Castle "fort sea-fort castle".
Mount Maunganui, Lake Rotoroa, and Motutapu Island in New Zealand are "Mount Mount Big", "Lake Lake Long", and "Island Sacred Island" respectively, from Māori.
Syntactic pleonasm
Syntactic pleonasm occurs when the grammar of a language makes certain function words optional. For example, consider the following English sentences:
"I know you're coming."
"I know that you're coming."
In this construction, the conjunction that is optional when joining a sentence to a verb phrase with know. Both sentences are grammatically correct, but the word that is pleonastic in this case. By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use of that makes clear that the present speaker is making an indirect rather than a direct quotation, such that he is not imputing particular words to the person he describes as having made an assertion; the demonstrative adjective that also does not fit such an example. Also, some writers may use "that" for technical clarity reasons. In some languages, such as French, the equivalent word (que) is not optional and should therefore not be considered pleonastic.
The same phenomenon occurs in Spanish with subject pronouns. Since Spanish is a null-subject language, which allows subject pronouns to be deleted when understood, the following sentences mean the same:
"Yo te amo."
"Te amo."
In this case, the pronoun yo ('I') is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone or intention—this depends on pragmatics rather than grammar). Such differing but syntactically equivalent constructions, in many languages, may also indicate a difference in register.
The process of deleting pronouns is called pro-dropping, and it also happens in many other languages, such as Korean, Japanese, Hungarian, Latin, Italian, Portuguese, Swahili, Slavic languages, and the Lao language.
In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic (or dummy) pronoun is used; only the first sentence in the following pair is acceptable English:
"It's raining."
"Is raining."
In this example the pleonastic "it" fills the subject function, but it contributes no meaning to the sentence. The second sentence, which omits the pleonastic it, is marked as ungrammatical although no meaning is lost by the omission. Elements such as "it" or "there", serving as empty subject markers, are also called (syntactic) expletives, or dummy pronouns. Compare:
"There is rain."
"Today is rain."
The pleonastic ne, expressing uncertainty in formal French, works as follows:
"Je crains qu'il ne pleuve." ('I fear it may rain.')
"Ces idées sont plus difficiles à comprendre que je ne pensais." ('These ideas are harder to understand than I thought.')
Two more striking examples of French pleonastic construction are aujourd'hui and qu'est-ce que c'est.
The word aujourd'hui / au jour d'hui is translated as 'today', but originally means "on the day of today" since the now obsolete hui means "today". The expression au jour d'aujourd'hui (translated as "on the day of today") is common in spoken language and demonstrates that the original construction of aujourd'hui is lost. It is considered a pleonasm.
The phrase qu'est-ce que c'est, meaning 'What's that?' or 'What is it?', literally means "What is it that it is?".
There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."
When Robert South said, "It is a pleonasm, a figure usual in Scripture, by a multiplicity of expressions to signify one notable thing", he was observing the Biblical Hebrew poetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and was written using oral patterning, which has many pleonasms. In particular, very many verses of the Psalms are split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language as distinct from spoken language were not as well-developed as they are today when the books making up the Old Testament were written. See also parallelism (rhetoric).
This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", from Peter Gabriel's "Mercy Street").
Types of syntactic pleonasm
Overinflection: Many languages with inflection, as a result of convention, tend to inflect more words in a given phrase than actually needed in order to express a single grammatical property. Take for example the German Die alten Frauen sprechen ("The old women speak"). Even though the use of the plural form of the noun Frau ("woman", plural Frauen) shows the grammatical number of the noun phrase, agreement in the German language still dictates that the definite article die, attributive adjective alten, and the verb sprechen must all also be in the plural. Not all languages are quite as redundant, however, and will permit inflection for number when there is an obvious numerical marker, as is the case with Hungarian, which does have a plural proper, but would express two flowers as two flower. (The same is the case in Celtic languages, where numerical markers precede singular nouns.) The main contrast between Hungarian and other tongues such as German or even English (to a lesser extent) is that in either of the latter, expressing plurality when already evident is not optional, but mandatory; making the neglect of these rules result in an ungrammatical sentence. As well as for number, our aforementioned German phrase also overinflects for grammatical case.
Multiple negation: In some languages, repeated negation may be used for emphasis, as in the English sentence, "There ain't nothing wrong with that". While a literal interpretation of this sentence would be "There is not nothing wrong with that", i.e. "There is something wrong with that", the intended meaning is, in fact, the opposite: "There is nothing wrong with that" or "There isn't anything wrong with that." The repeated negation is used pleonastically for emphasis. However, this is not always the case. In the sentence "I don't not like it", the repeated negative may be used to convey ambivalence ("I neither like nor dislike it") or even affirmation ("I do like it"). (Rhetorically, this becomes the device of litotes; it can be difficult to distinguish litotes from pleonastic double negation, a feature which may be used for ironic effect.) Although the use of "double negatives" for emphatic purposes is sometimes discouraged in standard English, it is mandatory in other languages like Spanish or French. For example, the Spanish phrase No es nada ('It is nothing') contains both a negated verb ("no es") and another negative, the word for nothing ("nada").
Multiple affirmations: In English, repeated affirmation can be used to add emphasis to an affirmative statement, just as repeated negation can add emphasis to a negative one. A sentence like I do love you, with a stronger intonation on the do, uses double affirmation. This is because English, by default, automatically expresses its sentences in the affirmative and must then alter the sentence in one way or another to express the opposite. Therefore, the sentence I love you is already affirmative, and adding the extra do only adds emphasis and does not change the meaning of the statement.
Double possession: The double genitive of English, as with a friend of mine, is seemingly pleonastic, and therefore has been stigmatized, but it has a long history of use by careful writers and has been analyzed as either a partitive genitive or an appositive genitive.
Multiple quality gradation: In English, different degrees of comparison (comparatives and superlatives) are created through a morphological change to an adjective (e.g., "prettier", "fastest") or a syntactic construction (e.g., "more complex", "most impressive"). It is thus possible to combine both forms for additional emphasis: "more bigger" or "bestest". This may be considered ungrammatical but is common in informal speech for some English speakers. "The most unkindest cut of all" is from Shakespeare's Julius Caesar. Musical notation has a repeated Italian superlative in fortississimo and pianississimo.
Not all uses of constructions such as "more bigger" are pleonastic, however. Some speakers who use such utterances do so in an attempt, albeit a grammatically unconventional one, to create a non-pleonastic construction: A person who says "X is more bigger than Y" may, in the context of a conversation featuring a previous comparison of some object Z with Y, mean "The degree by which X exceeds Y in size is greater than the degree by which Z exceeds Y in size". This usage amounts to the treatment of "bigger than Y" as a single grammatical unit, namely an adjective itself admitting of degrees, such that "X is more bigger than Y" is equivalent to "X is more bigger-than-Y than Z is."[alternatively, "X is bigger than Y more than Z is."] Another common way to express this is: "X is even bigger than Z."
Semantic pleonasm
Semantic pleonasm is a question more of style and usage than of grammar. Linguists usually call this redundancy to avoid confusion with syntactic pleonasm, a more important phenomenon for theoretical linguistics. It usually takes one of two forms: overlap or prolixity.
Overlap: One word's semantic component is subsumed by the other:
"Receive a free gift with every purchase."; a gift is usually already free.
"A tuna fish sandwich."
"The plumber fixed our hot water heater." (This pleonasm was famously attacked by American comedian George Carlin, but is not truly redundant; a device that increases the temperature of cold water to room temperature would also be a water heater.)
The Big Friendly Giant (title of a children's book by Roald Dahl); giants are inherently already "big".
Prolixity: A phrase may have words which add nothing, or nothing logical or relevant, to the meaning.
"I'm going down south."(South is not really "down", it is just drawn that way on maps by convention.)
"You can't seem to face up to the facts."
"He entered into the room."
"Every mother's child" (as in 'The Christmas Song' by Nat King Cole', also known as 'Chestnuts roasting...'). (Being a child, or a human at all, generally implies being the child of/to a mother. So the redundancy here is used to broaden the context of the child's curiosity regarding the sleigh of Santa Claus, including the concept of maternity. The full line goes: "And every mother's child is gonna spy, to see if reindeer really know how to fly". One can furthermore argue that the word "mother" is included for the purpose of lyrical flow, adding two syllables, which make the line sound complete, as "every child" would be too short to fit the lyrical/rhyme scheme.)
"What therefore God hath joined together, let no man put asunder."
"He raised up his hands in a gesture of surrender."
"Where are you at?"
"Located" or similar before a preposition: "the store is located on Main St." The preposition contains the idea of locatedness and does not need a servant.
"The house itself" for "the house", and similar: unnecessary re-specifiers.
"Actual fact": fact.
"On a daily basis": daily.
"This particular item": this item.
"Different" or "separate" after numbers: for example:
"Four different species" are merely "four species", as two non-different species are together one same species. (However, in "a discount if you buy ten different items", "different" has meaning, because if the ten items include two packets of frozen peas of the same weight and brand, those ten items are not all different.)
"Nine separate cars": cars are always separate.
"Despite the fact that": although.
An expression like "tuna fish", however, might elicit one of many possible responses, such as:
It will simply be accepted as synonymous with "tuna".
It will be perceived as redundant (and thus perhaps silly, illogical, ignorant, inefficient, dialectal, odd, and/or intentionally humorous).
It will imply a distinction. A reader of "tuna fish" could properly wonder: "Is there a kind of tuna which is not a fish? There is, after all, a dolphin mammal and a dolphin fish." This assumption turns out to be correct, as a "tuna" can also mean a prickly pear. Further, "tuna fish" is sometimes used to refer to the flesh of the animal as opposed to the animal itself (similar to the distinction between beef and cattle). Similarly, while all sound-making horns use air, an "air horn" has a special meaning: one that uses compressed air specifically; while most clocks tell time, a "time clock" specifically means one that keeps track of workers' presence at the workplace.
It will be perceived as a verbal clarification, since the word "tuna" is quite short, and may, for example, be misheard as "tune" followed by an aspiration, or (in dialects that drop the final -r sound) as "tuner".
Careful speakers, and writers, too, are aware of pleonasms, especially with cases such as "tuna fish", which is normally used only in some dialects of American English, and would sound strange in other variants of the language, and even odder in translation into other languages.
Similar situations are:
"Ink pen" instead of merely "pen" in the southern United States, where "pen" and "pin" are pronounced similarly.
"Extra accessories" which must be ordered separately for a new camera, as distinct from the accessories provided with the camera as sold.
Not all constructions that are typically pleonasms are so in all cases, nor are all constructions derived from pleonasms themselves pleonastic:
"Put that glass over there on the table." This could, depending on room layout, mean "Put that glass on the table across the room, not the table right in front of you"; if the room were laid out like that, most English speakers would intuitively understand that the distant, not immediate table was the one being referred to; however, if there were only one table in the room, the phrase would indeed be pleonastic. Also, it could mean, "Put that glass on the spot (on the table) which I am gesturing to"; thus, in this case, it is not pleonastic.
"I'm going way down South." This may imply "I'm going much farther south than you might think if I didn't stress the southerliness of my destination"; but such phrasing is also sometimes—and sometimes jokingly—used pleonastically when simply "south" would do; it depends upon the context, the intent of the speaker/writer, and ultimately even on the expectations of the listener/reader.
Morphemic pleonasm
Morphemes, not just words, can enter the realm of pleonasm: some word-parts are simply optional in various languages and dialects. A familiar example to American English speakers would be the allegedly optional "-al-", probably most commonly seen in "publically" vs. "publicly"—both spellings are considered correct/acceptable in American English, and both are pronounced the same in this dialect, rendering the "publically" spelling pleonastic in US English; in other dialects it is "required", while it is quite conceivable that in another generation or so of American English it will be "forbidden". This treatment of words ending in "-ic", "-ac", etc., is quite inconsistent in US English—compare "maniacally" or "forensically" with "stoicly" or "heroicly"; "forensicly" doesn't look "right" in any dialect, but "heroically" looks internally redundant to many Americans. (Likewise, there are thousands of mostly American Google search results for "eroticly", some in reputable publications, but it does not even appear in the 23-volume, 23,000-page, 500,000-definition Oxford English Dictionary (OED), the largest in the world; and even American dictionaries give the correct spelling as "erotically".) In a more modern pair of words, Institute of Electrical and Electronics Engineers dictionaries say that "electric" and "electrical" mean the same thing. However, the usual adverb form is "electrically" (for example, "The glass rod is electrically charged by rubbing it with silk").
Some (mostly US-based) prescriptive grammar pundits would say that the "-ly", not "-ally", form is "correct" in any case in which there is no "-ical" variant of the basic word, and vice versa; i.e. "maniacally", not "maniacly", is correct because "maniacal" is a word, while "publicly", not "publically", must be correct because "publical" is (arguably) not a real word (it does not appear in the OED). This logic is in doubt, since most if not all "-ical" constructions arguably are "real" words and most have certainly occurred more than once in "reputable" publications and are also immediately understood by any educated reader of English even if they "look funny" to some, or do not appear in popular dictionaries. Additionally, there are numerous examples of words that have very widely accepted extended forms that have skipped one or more intermediary forms, e.g., "disestablishmentarian" in the absence of "disestablishmentary" (which does not appear in the OED). At any rate, while some US editors might consider "-ally" vs. "-ly" to be pleonastic in some cases, the majority of other English speakers would not, and many "-ally" words are not pleonastic to anyone, even in American English.
The most common definitely pleonastic morphological usage in English is "irregardless", which is very widely criticized as being a non-word. The standard usage is "regardless", which is already negative; adding the additional negative ir- is interpreted by some as logically reversing the meaning to "with regard to/for", which is certainly not what the speaker intended to convey. (According to most dictionaries that include it, "irregardless" appears to derive from confusion between "regardless" and "irrespective", which have overlapping meanings.)
Morphemic pleonasm in Modern Standard Chinese
There are several instances in Chinese vocabulary where pleonasms and cognate objects are present. Their presence usually indicates the plural form of the noun or marks the noun as formal.
('book(s)' – in general)
('paper, tissue, pieces of paper' – formal)
In some instances, the pleonastic form of the verb is used to emphasize one meaning of the verb, isolating it from its idiomatic and figurative uses. But over time, the pseudo-object, which sometimes repeats the verb, has become almost inherently coupled with it.
For example, the word ('to sleep') is an intransitive verb, but may express different meaning when coupled with objects of prepositions as in "to sleep with". However, in Mandarin, is usually coupled with a pseudo-character , yet it is not entirely a cognate object, to express the act of resting.
('I want sleep'). Although such usage of is not found among native speakers of Mandarin and may sound awkward, this expression is grammatically correct and it is clear that means 'to sleep/to rest' in this context.
('I want to sleep') and ('I'm going to sleep'). In this context, ('to sleep') is a complete verb and native speakers often express themselves this way. Adding this particle removes any ambiguity about pairing the verb with a direct object, as in the next example:
('I want to have sex with her') and ('I want to sleep with her'). When the verb follows an animate direct object the meaning changes dramatically. The first instance is mainly seen in colloquial speech. Note that the object of preposition of "to have sex with" is the equivalent of the direct object of in Mandarin.
One can also work around this verb by using another one that is not used in idiomatic expressions and does not necessitate a pleonasm, because it has only one meaning:
('I want "to dorm)
Nevertheless, is a verb used in high-register diction, just like English verbs with Latin roots.
There is no systematic relationship between Chinese and English regarding verbs that can take pleonasms and cognate objects. Although the verb to sleep may take a cognate object as in "sleep a restful sleep", it is a pure coincidence, since verbs of this form are more common in Chinese than in English; and when the English verb is used without the cognate object, its diction is natural and its meaning is clear at every level of diction, as in "I want to sleep" and "I want to have a rest".
Subtler redundancies
In some cases, the redundancy in meaning occurs at the syntactic level above the word, such as at the phrase level:
"It's déjà vu all over again."
"I never make predictions, especially about the future."
The redundancy of these two well-known statements is deliberate, for humorous effect. (See Yogi Berra#"Yogi-isms".) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago"—the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. The latter humorous quote above about not making predictions—by Yogi Berra—is not really a pleonasm, but rather an ironic play on words.
Alternatively, it could be read as drawing an analogy between predicting and guessing.
However, "It's déjà vu all over again" could mean that there was earlier another déjà vu of the same event or idea, which has now arisen for a third time; or that the speaker had very recently experienced a déjà vu of a different idea.
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes), can also be inherited by one language from the influence of another and are not pleonasms in the more critical sense but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question. Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly:
"I'm after putting it on the table."('I [have] put it on the table.') This example further shows that the effect, whether pleonastic or only pseudo-pleonastic, can apply to words and word-parts, and multi-word phrases, given that the fullest rendition would be "I am after putting it on the table".
"Have a look at your man there."('Have a look at that man there.') An example of word substitution, rather than addition, that seems illogical outside the dialect. This common possessive-seeming construction often confuses the non-Irish enough that they do not at first understand what is meant. Even "Have a look at that man there" is arguably further doubly redundant, in that a shorter "Look at that man" version would convey essentially the same meaning.
"She's my wife so she is."('She's my wife.') Duplicate subject and verb, post-complement, used to emphasize a simple factual statement or assertion.
All of these constructions originate from the application of Irish Gaelic grammatical rules to the English dialect spoken, in varying particular forms, throughout the island.
Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humor, or other intentional purposes, such as:
"I abso-fuckin'-lutely agree!"(tmesis, for stress)
"Topless-shmopless—nudity doesn't distract me."(shm-reduplication, for humor)
The latter of these is a result of Yiddish influences on modern English, especially East Coast US English.
Sometimes editors and grammatical stylists will use "pleonasm" to describe simple wordiness. This phenomenon is also called prolixity or logorrhea. Compare:
"The sound of the loud music drowned out the sound of the burglary."
"The loud music drowned out the sound of the burglary."
or even:
"The music drowned out the burglary."
The reader or hearer does not have to be told that loud music has a sound, and in a newspaper headline or other abbreviated prose can even be counted upon to infer that "burglary" is a proxy for "the sound of the burglary" and that the music must have been loud to drown it out, unless the burglary was relatively quiet. (This is not a trivial issue, as it may affect the legal culpability of the person who played the music; the word "loud" may imply that the music should have been played quietly, if at all.) Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying.
Prolixity is also used to obfuscate, confuse, or euphemize and is not necessarily redundant or pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shell shock) and "pre-owned vehicle" (used car) are both tumid euphemisms but are not redundant. Redundant forms, however, are especially common in business, political, and academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible."
In contrast to redundancy, an oxymoron results when two seemingly contradictory words are adjoined.
Foreign words
Redundancies sometimes take the form of foreign words whose meaning is repeated in the context:
"We went to the El Restaurante restaurant."
"The La Brea tar pits are fascinating."
"Roast beef served with au jus sauce."
"Please R.S.V.P."
"The Schwarzwald Forest is deep and dark."
"The Drakensberg Mountains are in South Africa."
"We will vacation in Timor-Leste."
LibreOffice office suite.
The hoi polloi.
I'd like to have a chai tea.
"That delicious Queso cheese."
"Some salsa sauce on the side?."
These sentences use phrases which mean, respectively, "the restaurant restaurant", "the tar tar pits", "with juice sauce", and so on. However, many times these redundancies are necessary—especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant. (If they understand Italian and English it might, if spoken, be misinterpreted as a generic reference and not a proper noun, leading the hearer to ask "Which ristorante do you mean?"—such confusions are common in richly bilingual areas like Montreal or the American Southwest when mixing phrases from two languages.) But avoiding the redundancy of the Spanish phrase in the second example would only leave an awkward alternative: "La Brea pits are fascinating".
Most find it best not to drop articles when using proper nouns made from foreign languages:
"The movie is playing at the El Capitan theater."
However, there are some exceptions to this, for example:
"Jude Bellingham plays for Real Madrid in La Liga." ("La Liga" literally means "The League" in Spanish)
This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc. where the article can—some would say must—be present where it would otherwise be "forbidden":
"Stephen King's The Shining is scary."(Normally, the article would be left off following a possessive.)
"I'm having an An American Werewolf in London movie night at my place."(Seemingly doubled article, which would be taken for a stutter or typographical error in other contexts.)
Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., the Sahara Desert—"Sahara" is an English approximation of the word for "deserts" in Arabic). "The Los Angeles Angels" professional baseball team is literally "the The Angels Angels". A supposed extreme example is Torpenhow Hill in Cumbria, where some of the elements in the name likely mean "hill". See the List of tautological place names for many more examples.
The word tsetse means "fly" in the Tswana language, a Bantu language spoken in Botswana and South Africa. This word is the root of the English name for a biting fly found in Africa, the tsetse fly.
Acronyms and initialisms
Acronyms and initialisms can also form the basis for redundancies; this is known humorously as RAS syndrome (for Redundant Acronym Syndrome syndrome). In all the examples that follow, the word after the acronym repeats a word represented in the acronym. The full redundant phrase is stated in the parentheses that follow each example:
"I forgot my PIN number for the ATM machine." (Personal Identification Number number; Automated Teller Machine machine)
"I upgraded the RAM memory of my computer." (Random Access Memory memory)
"She is infected with the HIV virus." (Human Immunodeficiency Virus virus)
"I have installed a CMS system on my server." (Content Management System system)
"The SI system of units is the modern form of the metric system." (International System system)
(See RAS syndrome for many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number" for "Social Insurance Number number" is a similar common phrase in Canada.) But redundant acronyms are more common with technical (e.g., computer) terms where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM".
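The pattern behind RAS syndrome is mechanical enough to check in code. The sketch below is only an illustration: the function name and the tiny hard-coded expansion table are invented for this example, not drawn from any real lexicon.

```python
# Flag "RAS syndrome" phrases: an acronym followed by the last word of
# its own expansion (e.g., "PIN number", "ATM machine").
EXPANSIONS = {
    "PIN": "personal identification number",
    "ATM": "automated teller machine",
    "RAM": "random access memory",
    "HIV": "human immunodeficiency virus",
}

def redundant_acronym_phrases(text):
    """Return (acronym, word) pairs where the word after an acronym
    repeats the final word of the acronym's expansion."""
    words = text.split()
    hits = []
    for acro, nxt in zip(words, words[1:]):
        expansion = EXPANSIONS.get(acro.strip(".,!?"))
        if expansion and expansion.split()[-1] == nxt.strip(".,!?").lower():
            hits.append((acro.strip(".,!?"), nxt.strip(".,!?")))
    return hits

print(redundant_acronym_phrases("I forgot my PIN number for the ATM machine."))
# [('PIN', 'number'), ('ATM', 'machine')]
```

A real checker would need a much larger expansion table and some care with acronyms, like "scuba", that have been reanalyzed as plain words.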
Typographical
Some redundancies are simply typographical. For instance, when a short function word like "the" occurs at the end of a line, it is very common to accidentally repeat it at the beginning of the following line, and a large number of readers would not even notice it.
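This sort of accidental doubling across a line break is easy to catch mechanically; the short sketch below does so with a regular-expression backreference (the function name is invented for illustration).

```python
import re

def doubled_across_break(text):
    """Find a word repeated across a line break, e.g. 'the' at the
    end of one line and again at the start of the next."""
    return re.findall(r"\b(\w+)\n\1\b", text, flags=re.IGNORECASE)

sample = "She walked to the\nthe store."
print(doubled_across_break(sample))  # ['the']
```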
Apparent redundancies that actually are not redundant
Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb):
"She slept a deep sleep."
Or, a classic example from Latin:
mutatis mutandis = "with change made to what needs to be changed" (an ablative absolute construction)
The words need not be etymologically related, but simply conceptually, to be considered an example of cognate object:
"We wept tears of joy."
Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form is polyptoton, the stylistic repetition of the same word or words derived from the same root:
"...[T]he only thing we have to fear is fear itself." — Franklin D. Roosevelt, "First Inaugural Address", March 1933.
"With eager feeding[,] food doth choke the feeder." — William Shakespeare, Richard II, II, i, 37.
As with cognate objects, these constructions are not redundant because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror".)
Semantic pleonasm and context
In many cases of semantic pleonasm, the status of a word as pleonastic depends on context. The relevant context can be as local as a neighboring word, or as global as the extent of a speaker's knowledge. In fact, many examples of redundant expressions are not inherently redundant, but can be redundant if used one way, and are not redundant if used another way. The "up" in "climb up" is not always redundant, as in the example "He climbed up and then fell down the mountain." Many other examples of pleonasm are redundant only if the speaker's knowledge is taken into account. For example, most English speakers would agree that "tuna fish" is redundant because tuna is a kind of fish. However, given the knowledge that "tuna" can also refer to a kind of edible prickly pear, the "fish" in "tuna fish" can be seen as non-pleonastic, but rather a disambiguator between the fish and the prickly pear.
Conversely, to English speakers who do not know Spanish, there is nothing redundant about "the La Brea tar pits" because the name "La Brea" is opaque: the speaker does not know that it is Spanish for "the tar" and thus "the La Brea Tar Pits" translates to "the the tar tar pits". Similarly, even though scuba stands for "self-contained underwater breathing apparatus", a phrase like "the scuba gear" would probably not be considered pleonastic because "scuba" has been reanalyzed into English as a simple word, and not an acronym suggesting the pleonastic word sequence "apparatus gear". (Most do not even know that it is an acronym and do not spell it SCUBA or S.C.U.B.A. Similar examples are radar and laser.)
Ontology (information science)

In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.
Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management).
What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence).
Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Applied ontology has received considerable attention in artificial intelligence subfields such as natural language processing, machine translation, and knowledge representation, but ontology editors are now used in a range of fields, including biomedical informatics and industry. Such efforts often use ontology editing tools such as Protégé.
Ontology in Philosophy
Ontology is a branch of philosophy that intersects with areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of the five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities, and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history.
Etymology
The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation.
While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).
The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey.
Formal Ontology
Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, though this was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real-world or robotic grounding, publishing literature reviews in 1990 emphasizing grounded ontology in association with the call for papers for an AAAI Summer Symposium on Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to the earlier ideas of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as a set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.
Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions – that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms.
As a refinement of Gruber's definition, Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."
Formal Ontology Components
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
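The four components listed above can be sketched in plain Python. This is an illustrative toy only, not an ontology language; all class, individual, and relation names here are invented for the example.

```python
# Minimal sketch of the four common ontology components:
# classes (concepts), individuals (instances), attributes, relations.

# Classes, with an is-a (subclass) hierarchy; None marks a top-level class
classes = {
    "Animal": None,
    "Dog": "Animal",      # Dog is-a Animal
    "Person": None,
}

# Individuals, each assigned to a class
individuals = {
    "fido": "Dog",
    "alice": "Person",
}

# Attributes: literal-valued properties of an individual
attributes = {
    "fido": {"age": 4, "colour": "brown"},
}

# Relations: named links between individuals
relations = [
    ("alice", "owns", "fido"),
]

def instance_of(individual, cls):
    """True if the individual belongs to cls directly or via the is-a chain."""
    c = individuals[individual]
    while c is not None:
        if c == cls:
            return True
        c = classes[c]
    return False

print(instance_of("fido", "Animal"))  # True: fido is a Dog, and Dog is-a Animal
```

Even this toy shows why class hierarchies matter: membership queries can follow the is-a chain rather than requiring every fact to be stated explicitly.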
Types
Domain ontology
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).
At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and only recently has the issue been sidestepped by having multiple domain ontologies use the same upper ontology, as in the OBO Foundry.
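The benefit of a shared upper ontology for merging can be sketched as follows. In this hedged toy example, each domain ontology anchors its terms to upper-level categories, so terms anchored to the same category become merge candidates; the ontology content is invented for illustration.

```python
# Why a shared upper ontology eases merging: terms from different
# domain ontologies anchored to the same upper category can be
# aligned automatically as merge candidates.

upper = {"MaterialEntity", "Process", "Quality"}  # shared upper categories

biology = {                      # domain term -> upper category
    "Cell": "MaterialEntity",
    "Mitosis": "Process",
}
manufacturing = {
    "Widget": "MaterialEntity",
    "Assembly": "Process",
}

def merge_candidates(a, b):
    """Pair terms from two domain ontologies anchored to the same upper category."""
    pairs = []
    for term_a, cat_a in a.items():
        for term_b, cat_b in b.items():
            if cat_a == cat_b and cat_a in upper:
                pairs.append((term_a, term_b, cat_a))
    return pairs

for term_a, term_b, cat in merge_candidates(biology, manufacturing):
    print(f"{term_a} / {term_b} both anchored to {cat}")
```

Without the shared anchors, the same alignment would require manual inspection of every term pair, which is the expense the paragraph above describes.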
Upper ontology
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.
Hybrid ontology
The Gellish ontology is an example of a combination of an upper and a domain ontology.
Visualization
A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL).
Engineering
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.
Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ensuring the ontology is current with domain knowledge and term use
Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem
Ensuring the ontology can support its use cases
Editors
Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages.
Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc.
Learning
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.
Research
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.
Languages
An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms.
Common logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other.
The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions.
DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability.
The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language.
IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies.
KIF is a syntax for first-order logic that is based on S-expressions. SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology.
MOF and UML are standards of the OMG.
Olog is a category theoretic approach to ontologies, emphasizing translations between ontologies using functors.
OBO, a language used for biological and biomedical ontologies.
OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies.
OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs.
Rule Interchange Format (RIF) and F-Logic combine ontologies and rules.
Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse Plug-in.
SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies.
TOVE Project, TOronto Virtual Enterprise project
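The RDF-based statements that languages such as OWL encode can be approximated without any RDF library. The sketch below, using invented abbreviated names, writes OWL-style facts as (subject, predicate, object) triples and performs a simple RDFS-style subclass entailment; it is a hand-rolled illustration of the idea, not the OWL semantics.

```python
# OWL-style statements as plain (subject, predicate, object) triples,
# plus a naive query that follows rdfs:subClassOf links.

triples = [
    ("ex:Animal", "rdf:type", "owl:Class"),
    ("ex:Dog",    "rdf:type", "owl:Class"),
    ("ex:Dog",    "rdfs:subClassOf", "ex:Animal"),
    ("ex:fido",   "rdf:type", "ex:Dog"),
]

def types_of(individual):
    """All classes of an individual, including superclasses (RDFS-style entailment)."""
    direct = {o for s, p, o in triples if s == individual and p == "rdf:type"}
    result = set(direct)
    frontier = list(direct)
    while frontier:
        c = frontier.pop()
        for s, p, o in triples:
            if s == c and p == "rdfs:subClassOf" and o not in result:
                result.add(o)
                frontier.append(o)
    return result

print(types_of("ex:fido"))  # includes ex:Dog and, by subclass entailment, ex:Animal
```

Real OWL tooling adds much more (property restrictions, cardinality, consistency checking), but the triple structure is the common substrate shared by RDF, RDFS, and OWL.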
Published examples
Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic Wordnet but with ontologically-clean content.
AURUM – Information Security Ontology, an ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management.
BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages
Basic Formal Ontology, a formal upper ontology designed to support scientific research
BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data
BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature
SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO).
CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB)
CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management
CIDOC Conceptual Reference Model, an ontology for cultural heritage
COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary.
Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science
Cyc, a large Foundation Ontology for formal representation of the universe of discourse
Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes
DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering
Drammar, ontology of drama
Dublin Core, a simple ontology for documents and publishing
Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry
Foundational, Core and Linguistic Ontologies
Foundational Model of Anatomy, an ontology for human anatomy
Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects
Gene Ontology for genomics
Gellish English dictionary, an ontology that includes a dictionary and taxonomy, comprising an upper ontology and a lower ontology that focuses on industrial and business applications in engineering, technology and procurement.
Geopolitical ontology, an ontology describing geopolitical information created by the Food and Agriculture Organization (FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services of the geopolitical ontology and a module maker to download modules of the geopolitical ontology in different formats (RDF, XML, and EXCEL). See more information at FAO Country Profiles.
GAO (General Automotive Ontology) – an ontology for the automotive industry that includes 'car' extensions
GOLD, General Ontology for Linguistic Description
GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between client systems and natural language technology
IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts.
Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology.
LPL, Landmark Pattern Language
NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise
NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain.
OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies
OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine
OMNIBUS Ontology, an ontology of learning, instruction, and instructional design
Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations
ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta.
Plant Ontology for plant structures and growth/development stages, etc.
POPE, Purdue Ontology for Pharmaceutical Engineering
PRO, the Protein Ontology of the Protein Information Resource, Georgetown University
ProbOnto, knowledge base and ontology of probability distributions.
Program abstraction taxonomy
Protein Ontology for proteomics
RXNO Ontology, for name reactions in chemistry
SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SCD community, amongst other applications (see list on SCDO website).
Schema.org, for embedding structured data into web pages, primarily for the benefit of search engines
Sequence Ontology, for representing genomic feature types found on biological sequences
SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms)
Suggested Upper Merged Ontology, a formal upper ontology
Systems Biology Ontology (SBO), for computational models in biology
SWEET, Semantic Web for Earth and Environmental Terminology
SSN/SOSA, The Semantic Sensor Network Ontology (SSN) and Sensor, Observation, Sample, and Actuator Ontology (SOSA) are W3C Recommendations and OGC Standards for describing sensors and their observations.
ThoughtTreasure ontology
TIME-ITEM, Topics for Indexing Medical Education
Uberon, representing animal anatomical structures
UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc
WordNet, a lexical reference system
YAMATO, Yet Another More Advanced Top-level Ontology
YSO – General Finnish Ontology
The W3C Linking Open Data community project coordinates attempts to converge different ontologies into a worldwide Semantic Web.
Libraries
The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries.
The following are libraries of human-selected ontologies.
COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository.
DAML Ontology Library maintains a legacy of ontologies in DAML.
Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies.
Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies.
SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL.
The following are both directories and search engines.
OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine.
Bioportal (ontology repository of NCBO)
Linked Open Vocabularies
OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies.
Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB Project "on hold" since 2004).
Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies.
Open Ontology Repository initiative
ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO.
Examples of applications
In general, ontologies can be used beneficially in several fields.
Enterprise applications. A more concrete example is SAPPHIRE (Situational Awareness and Preparedness for Public Health Incidences using Reasoning Engines), a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health.
Geographic information systems bring together data from different sources and therefore benefit from ontological metadata, which helps to connect the semantics of the data.
Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebral Spinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts.
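The CSF ambiguity described above can be illustrated with a minimal disambiguation sketch: an ontology-backed tool picks the intended concept by comparing context words against keywords associated with each sense. The keyword sets here are invented for the example, not drawn from any real biomedical ontology.

```python
# Toy named-entity disambiguation for an ambiguous biomedical abbreviation.

senses = {
    "CSF": {
        "Colony Stimulating Factor": {"cytokine", "hematopoiesis", "growth"},
        "Cerebral Spinal Fluid":     {"lumbar", "puncture", "meningitis"},
    }
}

def disambiguate(abbrev, context_words):
    """Pick the sense whose keyword set overlaps the context most."""
    best, best_overlap = None, -1
    for sense, keywords in senses[abbrev].items():
        overlap = len(keywords & set(context_words))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(disambiguate("CSF", ["lumbar", "puncture", "sample"]))
```

Production systems use far richer context models, but the principle is the same: the ontology supplies the inventory of candidate concepts that makes disambiguation possible at all.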
See also
Commonsense knowledge bases
Concept map
Controlled vocabulary
Classification scheme (information science)
Folksonomy
Formal concept analysis
Formal ontology
General Concept Lattice
Knowledge graph
Lattice
Ontology
Ontology alignment
Ontology chart
Open Semantic Framework
Semantic technology
Soft ontology
Terminology extraction
Weak ontology
Web Ontology Language
Related philosophical concepts
Alphabet of human thought
Characteristica universalis
Interoperability
Level of measurement
Metalanguage
Natural semantic metalanguage
External links
Knowledge Representation at Open Directory Project
Library of ontologies (Archive, Unmaintained)
GoPubMed using Ontologies for searching
ONTOLOG (a.k.a. "Ontolog Forum") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology
Use of Ontologies in Natural Language Processing
Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit.
Standardization of Ontologies
Dialectic | Dialectic (, dialektikḗ; ), also known as the dialectical method, refers originally to dialogue between people holding different points of view about a subject but wishing to arrive at the truth through reasoned argumentation. Dialectic resembles debate, but the concept excludes subjective elements such as emotional appeal and rhetoric. It has its origins in ancient philosophy and continued to be developed in the Middle Ages.
Hegelianism refigured "dialectic" to no longer refer to a literal dialogue. Instead, the term takes on the specialized meaning of development by way of overcoming internal contradictions. Dialectical materialism, a theory advanced by Karl Marx and Friedrich Engels, adapted the Hegelian dialectic into a materialist theory of history. The legacy of Hegelian and Marxian dialectics has been criticized by philosophers such as Karl Popper and Mario Bunge, who considered it unscientific.
Dialectic implies a developmental process and so does not naturally fit within classical logic. Nevertheless, some twentieth-century logicians have attempted to formalize it.
History
There are a variety of meanings of dialectic or dialectics within Western philosophy.
Classical philosophy
In classical philosophy, dialectic is a form of reasoning based upon dialogue of arguments and counter-arguments, advocating propositions (theses) and counter-propositions (antitheses). The outcome of such a dialectic might be the refutation of a relevant proposition, or a synthesis, a combination of the opposing assertions, or a qualitative improvement of the dialogue.
The term "dialectic" owes much of its prestige to its role in the philosophies of Socrates and Plato, in the Greek Classical period (5th to 4th centuries BC). Aristotle said that it was the pre-Socratic philosopher Zeno of Elea who invented dialectic, of which the dialogues of Plato are examples of the Socratic dialectical method.
Socratic method
The Socratic dialogues are a particular form of dialectic known as the method of elenchus (literally, "refutation, scrutiny"), whereby a series of questions clarifies a more precise statement of a vague belief, logical consequences of that statement are explored, and a contradiction is discovered. The method is largely destructive, in that false belief is exposed, and only constructive in that this exposure may lead to further search for truth. The detection of error does not amount to a proof of the antithesis. For example, a contradiction in the consequences of a definition of piety does not provide a correct definition. The principal aim of Socratic activity may be to improve the soul of the interlocutors, by freeing them from unrecognized errors, or indeed, by teaching them the spirit of inquiry.
In common cases, Socrates uses enthymemes as the foundation of his argument.
For example, in the Euthyphro, Socrates asks Euthyphro to provide a definition of piety. Euthyphro replies that the pious is that which is loved by the gods. But, Socrates also has Euthyphro agreeing that the gods are quarrelsome and their quarrels, like human quarrels, concern objects of love or hatred. Therefore, Socrates reasons, at least one thing exists that certain gods love but other gods hate. Again, Euthyphro agrees. Socrates concludes that if Euthyphro's definition of piety is acceptable, then there must exist at least one thing that is both pious and impious (as it is both loved and hated by the gods)—which Euthyphro admits is absurd. Thus, Euthyphro is brought to a realization by this dialectical method that his definition of piety is not sufficiently meaningful.
In another example, in Plato's Gorgias, dialectic occurs between Socrates, the Sophist Gorgias, and two men, Polus and Callicles. Because Socrates' ultimate goal was to reach true knowledge, he was even willing to change his own views in order to arrive at the truth. The fundamental goal of dialectic, in this instance, was to establish a precise definition of the subject (in this case, rhetoric) and with the use of argumentation and questioning, make the subject even more precise. In the Gorgias, Socrates reaches the truth by asking a series of questions and in return, receiving short, clear answers.
Plato
In Platonism and Neoplatonism, dialectic assumed an ontological and metaphysical role in that it became the process whereby the intellect passes from sensibles to intelligibles, rising from idea to idea until it finally grasps the supreme idea, the first principle which is the origin of all. The philosopher is consequently a "dialectician". In this sense, dialectic is a process of inquiry that does away with hypotheses up to the first principle. It slowly embraces multiplicity in unity. The philosopher Simon Blackburn wrote that the dialectic in this sense is used to understand "the total process of enlightenment, whereby the philosopher is educated so as to achieve knowledge of the supreme good, the Form of the Good".
Medieval philosophy
Logic, which could be considered to include dialectic, was one of the three liberal arts taught in medieval universities as part of the trivium; the other elements were rhetoric and grammar.
Based mainly on Aristotle, the first medieval philosopher to work on dialectics was Boethius (480–524). After him, many scholastic philosophers also made use of dialectics in their works, such as Abelard, William of Sherwood, Garlandus Compotista, Walter Burley, Roger Swyneshed, William of Ockham, and Thomas Aquinas.
This dialectic (a quaestio disputata) was formed as follows:
The question to be determined ("It is asked whether...");
A provisory answer to the question ("And it seems that...");
The principal arguments in favor of the provisory answer;
An argument against the provisory answer, traditionally a single argument from authority ("On the contrary...");
The determination of the question after weighing the evidence ("I answer that...");
The replies to each of the initial objections. ("To the first, to the second etc., I answer that...")
Modern philosophy
The concept of dialectics was given new life at the start of the 19th century by Georg Wilhelm Friedrich Hegel, whose dialectical model of nature and of history made dialectics a fundamental aspect of reality, instead of regarding the contradictions into which dialectics leads as evidence of the limits of pure reason, as Immanuel Kant had argued. Hegel was influenced by Johann Gottlieb Fichte's conception of synthesis, although Hegel didn't adopt Fichte's "thesis–antithesis–synthesis" language except to describe Kant's philosophy: rather, Hegel argued that such language was "a lifeless schema" imposed on various contents, whereas he saw his own dialectic as flowing out of "the inner life and self-movement" of the content itself.
In the mid-19th century, Hegelian dialectic was appropriated by Karl Marx and Friedrich Engels and retooled in what they considered to be a nonidealistic manner. It would also become a crucial part of later representations of Marxism as a philosophy of dialectical materialism. These representations often contrasted dramatically and led to vigorous debate among different Marxist groups.
Hegelian dialectic
The Hegelian dialectic describes changes in the forms of thought through their own internal contradictions into concrete forms that overcome previous oppositions.
This dialectic is sometimes presented in a threefold manner, as first stated by Heinrich Moritz Chalybäus, as comprising three dialectical stages of development: a thesis, giving rise to its reaction; an antithesis, which contradicts or negates the thesis; and the tension between the two being resolved by means of a synthesis. Hegel himself, however, opposed this terminology.
By contrast, the terms abstract, negative, and concrete suggest a flaw or an incompleteness in any initial thesis. For Hegel, the concrete must always pass through the phase of the negative, that is, mediation. This is the essence of what is popularly called Hegelian dialectics.
To describe the activity of overcoming the negative, Hegel often used the term Aufhebung, variously translated into English as "sublation" or "overcoming", to conceive of the working of the dialectic. Roughly, the term indicates preserving the true portion of an idea, thing, society, and so forth, while moving beyond its limitations. What is sublated, on the one hand, is overcome, but, on the other hand, is preserved and maintained.
As in the Socratic dialectic, Hegel claimed to proceed by making implicit contradictions explicit: each stage of the process is the product of contradictions inherent or implicit in the preceding stage. On his view, the purpose of dialectics is "to study things in their own being and movement and thus to demonstrate the finitude of the partial categories of understanding".
For Hegel, even history can be reconstructed as a unified dialectic, the major stages of which chart a progression from self-alienation as servitude to self-unification and realization as the rational constitutional state of free and equal citizens.
Marxist dialectic
Marxist dialectic is a form of Hegelian dialectic which applies to the study of historical materialism. Marxist dialectic is thus a method by which one can examine social and economic behaviors. It is the foundation of the philosophy of dialectical materialism, which forms the basis of historical materialism.
In the Marxist tradition, "dialectic" refers to regular and mutual relationships, interactions, and processes in nature, society, and human thought.
A dialectical relationship is a relationship in which two phenomena or ideas mutually impact each other, leading to development and negation. Development refers to the change and motion of phenomena and ideas from less advanced to more advanced or from less complete to more complete. Dialectical negation refers to a stage of development in which a contradiction between two previous subjects gives rise to a new subject. In the Marxist view, dialectical negation is never an endpoint, but instead creates new conditions for further development and negation.
Karl Marx and Friedrich Engels, writing several decades after Hegel's death, proposed that Hegel's dialectic is too abstract. Against this, Marx presented his own dialectic method, which he claimed to be "direct opposite" of Hegel's method.
Marxist dialectics is exemplified in Das Kapital, where Marx set out his dialectical method.
Class struggle is the primary contradiction to be resolved by Marxist dialectics because of its central role in the social and political lives of a society. Nonetheless, Marx and Marxists developed the concept of class struggle to comprehend the dialectical contradictions between mental and manual labor and between town and country. Hence, philosophic contradiction is central to the development of dialectics: the progress from quantity to quality, the acceleration of gradual social change; the negation of the initial development of the status quo; the negation of that negation; and the high-level recurrence of features of the original status quo.
Friedrich Engels further proposed that nature itself is dialectical, and that this is "a very simple process, which is taking place everywhere and every day". His dialectical "law of the transformation of quantity into quality and vice versa" corresponds, according to Christian Fuchs, to the concept of phase transition and anticipated the concept of emergence "a hundred years ahead of his time".
For Vladimir Lenin, the primary feature of Marx's "dialectical materialism" (Lenin's term) is its application of materialist philosophy to history and social sciences. Lenin's main contribution to the philosophy of dialectical materialism is his theory of reflection, which presents human consciousness as a dynamic reflection of the objective material world that fully shapes its contents and structure.
Later, Stalin's works on the subject established a rigid and formalistic division of Marxist–Leninist theory into dialectical materialism and historical materialism. While the first was supposed to be the key method and theory of the philosophy of nature, the second was the Soviet version of the philosophy of history.
Soviet systems theory pioneer Alexander Bogdanov viewed Hegelian and materialist dialectic as progressive, albeit inexact and diffuse, attempts at achieving what he called tektology, or a universal science of organization.
Dialectical naturalism
Dialectical naturalism is a term coined by American philosopher Murray Bookchin to describe the philosophical underpinnings of the political program of social ecology. Dialectical naturalism explores the complex interrelationship between social problems, and the direct consequences they have on the ecological impact of human society. Bookchin offered dialectical naturalism as a contrast to what he saw as the "empyrean, basically antinaturalistic dialectical idealism" of Hegel, and "the wooden, often scientistic dialectical materialism of orthodox Marxists".
Theological dialectics
Neo-orthodoxy, in Europe also known as theology of crisis and dialectical theology, is an approach to theology in Protestantism that was developed in the aftermath of the First World War (1914–1918). It is characterized as a reaction against doctrines of 19th-century liberal theology and a more positive reevaluation of the teachings of the Reformation, much of which had been in decline (especially in western Europe) since the late 18th century. It is primarily associated with two Swiss professors and pastors, Karl Barth (1886–1968) and Emil Brunner (1899–1966), even though Barth himself expressed his unease with the use of the term.
In dialectical theology, the difference and opposition between God and human beings is stressed in such a way that all human attempts at overcoming this opposition through moral, religious or philosophical idealism must be characterized as 'sin'. In the death of Christ humanity is negated and overcome, but this judgment also points forwards to the resurrection in which humanity is reestablished in Christ. For Barth this meant that only through God's 'no' to everything human can his 'yes' be perceived. Applied to traditional themes of Protestant theology, such as double predestination, this means that election and reprobation cannot be viewed as a quantitative limitation of God's action. Rather, it must be seen as its "qualitative definition". As Christ bore the rejection as well as the election of God for all humanity, every person is subject to both aspects of God's double predestination.
Dialectic prominently figured in Bernard Lonergan's philosophy, in his books Insight and Method in Theology. Michael Shute wrote about Lonergan's use of dialectic in The Origins of Lonergan's Notion of the Dialectic of History. For Lonergan, dialectic is both individual and operative in community. Simply described, it is a dynamic process that results in something new.
Dialectic is one of the eight functional specialties Lonergan envisaged for theology to bring this discipline into the modern world. Lonergan believed that the lack of an agreed method among scholars had inhibited substantive agreement from being reached and progress from being made compared to the natural sciences. Karl Rahner, S.J., however, criticized Lonergan's theological method in a short article entitled "Some Critical Thoughts on 'Functional Specialties in Theology'" where he stated: "Lonergan's theological methodology seems to me to be so generic that it really fits every science, and hence is not the methodology of theology as such, but only a very general methodology of science."
Criticisms
Friedrich Nietzsche viewed dialectic as a method that imposes artificial boundaries and suppresses the richness and diversity of reality. He rejected the notion that truth can be fully grasped through dialectical reasoning and offered a critique of dialectic, challenging its traditional framework and emphasizing the limitations of its approach to understanding reality. He expressed skepticism towards its methodology and implications in his work Twilight of the Idols: "I mistrust all systematizers and I avoid them. The will to a system is a lack of integrity". In the same book, Nietzsche criticized Socrates' dialectics because he believed it prioritized reason over instinct, resulting in the suppression of individual passions and the imposition of an artificial morality.
Karl Popper attacked the dialectic repeatedly. In 1937, he wrote and delivered a paper entitled "What Is Dialectic?" in which he criticized the dialectics of Hegel, Marx, and Engels for their willingness "to put up with contradictions". He argued that accepting contradiction as a valid form of logic would lead to the principle of explosion and thus trivialism. Popper concluded the essay with these words: "The whole development of dialectic should be a warning against the dangers inherent in philosophical system-building. It should remind us that philosophy should not be made a basis for any sort of scientific system and that philosophers should be much more modest in their claims. One task which they can fulfill quite usefully is the study of the critical methods of science". Seventy years later, Nicholas Rescher responded that "Popper's critique touches only a hyperbolic version of dialectic", and he quipped: "Ironically, there is something decidedly dialectical about Popper's critique of dialectics." Around the same time as Popper's critique was published, philosopher Sidney Hook discussed the "sense and nonsense in dialectic" and rejected two conceptions of dialectic as unscientific but accepted one conception as a "convenient organizing category".
The philosopher of science and physicist Mario Bunge repeatedly criticized Hegelian and Marxian dialectics, calling them "fuzzy and remote from science" and a "disastrous legacy". He concluded: "The so-called laws of dialectics, such as formulated by Engels (1940, 1954) and Lenin (1947, 1981), are false insofar as they are intelligible." Poe Yu-ze Wan, reviewing Bunge's criticisms of dialectics, found Bunge's arguments to be important and sensible, but he thought that dialectics could still serve some heuristic purposes for scientists. Wan pointed out that scientists such as the American Marxist biologists Richard Levins and Richard Lewontin (authors of The Dialectical Biologist) and the German-American evolutionary biologist Ernst Mayr, not a Marxist himself, have found agreement between dialectical principles and their own scientific outlooks, although Wan opined that Engels's "laws" of dialectics "in fact 'explain' nothing".
Even some Marxists are critical of the term "dialectics". For instance, Michael Heinrich wrote, "More often than not, the grandiose rhetoric about dialectics is reducible to the simple fact that everything is dependent upon everything else and is in a state of interaction and that it's all rather complicated—which is true in most cases, but doesn't really say anything."
Formalization
Defeasibility
Dialog games
Mathematics
Mathematician William Lawvere interpreted dialectics in the setting of categorical logic in terms of adjunctions between idempotent monads. This perspective may be useful in the context of theoretical computer science, where the duality between syntax and semantics can be interpreted as a dialectic in this sense. For example, the Curry–Howard equivalence is such an adjunction, or more generally the duality between closed monoidal categories and their internal logic.
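The syntax–semantics duality invoked here rests on the Curry–Howard reading, under which a proposition is a type and a proof of it is a program of that type. A minimal sketch in Lean (the name `compose` and the particular theorems are illustrative choices, not drawn from the source):

```lean
-- Under the Curry–Howard correspondence, a proposition is a type
-- and a proof of it is a program of that type.

-- The theorem (A → B) → (B → C) → (A → C) is witnessed by
-- function composition:
def compose {A B C : Prop} (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)

-- Conjunction corresponds to a product type: a proof of A ∧ B
-- is a pair, and commutativity is witnessed by swapping it:
example {A B : Prop} (h : A ∧ B) : B ∧ A :=
  ⟨h.right, h.left⟩
```

Here the logical structure (implication, conjunction) and the computational structure (function types, pairs) are literally the same objects, which is the sense in which syntax and semantics sit in duality.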
See also
Conversation
Dialogue
Dialectical behavior therapy
Dialectical research
Dialogic
Discourse
Doublethink
False dilemma
Reflective equilibrium
Relational dialectics
Tarka sastra
Unity of opposites
Universal dialectic
References
External links
v:Dialectic algorithm – An algorithm based on the principles of classical dialectics
Studies in the Hegelian Dialectic by J. M. E. McTaggart (1896) at marxists.org
Rhetoric
Philosophical methodology
Concepts in ancient Greek metaphysics
Ancient Greek logic
Knowledge
Knowledge is an awareness of facts, a familiarity with individuals and situations, or a practical skill. Knowledge of facts, also called propositional knowledge, is often characterized as true belief that is distinct from opinion or guesswork by virtue of justification. While there is wide agreement among philosophers that propositional knowledge is a form of true belief, many controversies focus on justification. This includes questions like how to understand justification, whether it is needed at all, and whether something else besides it is needed. These controversies intensified in the latter half of the 20th century due to a series of thought experiments called Gettier cases that provoked alternative definitions.
Knowledge can be produced in many ways. The main source of empirical knowledge is perception, which involves the usage of the senses to learn about the external world. Introspection allows people to learn about their internal mental states and processes. Other sources of knowledge include memory, rational intuition, inference, and testimony. According to foundationalism, some of these sources are basic in that they can justify beliefs without depending on other mental states. Coherentists reject this claim and contend that a sufficient degree of coherence among all the mental states of the believer is necessary for knowledge. According to infinitism, an infinite chain of beliefs is needed.
The main discipline investigating knowledge is epistemology, which studies what people know, how they come to know it, and what it means to know something. It discusses the value of knowledge and the thesis of philosophical skepticism, which questions the possibility of knowledge. Knowledge is relevant to many fields like the sciences, which aim to acquire knowledge using the scientific method based on repeatable experimentation, observation, and measurement. Various religions hold that humans should seek knowledge and that God or the divine is the source of knowledge. The anthropology of knowledge studies how knowledge is acquired, stored, retrieved, and communicated in different cultures. The sociology of knowledge examines under what sociohistorical circumstances knowledge arises, and what sociological consequences it has. The history of knowledge investigates how knowledge in different fields has developed and evolved in the course of history.
Definitions
Knowledge is a form of familiarity, awareness, understanding, or acquaintance. It often involves the possession of information learned through experience and can be understood as a cognitive success or an epistemic contact with reality, like making a discovery. Many academic definitions focus on propositional knowledge in the form of believing certain facts, as in "I know that Dave is at home". Other types of knowledge include knowledge-how in the form of practical competence, as in "she knows how to swim", and knowledge by acquaintance as a familiarity with the known object based on previous direct experience, like knowing someone personally.
Knowledge is often understood as a state of an individual person, but it can also refer to a characteristic of a group of people as group knowledge, social knowledge, or collective knowledge. Some social sciences understand knowledge as a broad social phenomenon that is similar to culture. The term may further denote knowledge stored in documents like the "knowledge housed in the library" or the knowledge base of an expert system. Knowledge is closely related to intelligence, but intelligence is more about the ability to acquire, process, and apply information, while knowledge concerns information and skills that a person already possesses.
The word knowledge has its roots in a 12th-century Old English word, which in turn derives from Old High German. The English word includes various meanings that some other languages distinguish using several words. In ancient Greek, for example, four important terms for knowledge were used: epistēmē (unchanging theoretical knowledge), technē (expert technical knowledge), mētis (strategic knowledge), and gnōsis (personal intellectual knowledge). The main discipline studying knowledge is called epistemology or the theory of knowledge. It examines the nature of knowledge and justification, how knowledge arises, and what value it has. Further topics include the different types of knowledge and the limits of what can be known.
Despite agreements about the general characteristics of knowledge, its exact definition is disputed. Some definitions only focus on the most salient features of knowledge to give a practically useful characterization. Another approach, termed analysis of knowledge, tries to provide a theoretically precise definition by listing the conditions that are individually necessary and jointly sufficient, similar to how chemists analyze a sample by seeking a list of all the chemical elements composing it. According to a different view, knowledge is a unique state that cannot be analyzed in terms of other phenomena. Some scholars base their definition on abstract intuitions while others focus on concrete cases or rely on how the term is used in ordinary language. There is also disagreement about whether knowledge is a rare phenomenon that requires high standards or a common phenomenon found in many everyday situations.
Analysis of knowledge
An often-discussed definition characterizes knowledge as justified true belief. This definition identifies three essential features: it is (1) a belief that is (2) true and (3) justified. Truth is a widely accepted feature of knowledge. It implies that, while it may be possible to believe something false, one cannot know something false. That knowledge is a form of belief implies that one cannot know something if one does not believe it. Some everyday expressions seem to violate this principle, like the claim that "I do not believe it, I know it!" But the point of such expressions is usually to emphasize one's confidence rather than denying that a belief is involved.
The main controversy surrounding this definition concerns its third feature: justification. This component is often included because of the impression that some true beliefs are not forms of knowledge, such as beliefs based on superstition, lucky guesses, or erroneous reasoning. For example, a person who guesses that a coin flip will land heads usually does not know that even if their belief turns out to be true. This indicates that there is more to knowledge than just being right about something. These cases are excluded by requiring that beliefs have justification for them to count as knowledge. Some philosophers hold that a belief is justified if it is based on evidence, which can take the form of mental states like experience, memory, and other beliefs. Others state that beliefs are justified if they are produced by reliable processes, like sensory perception or logical reasoning.
The definition of knowledge as justified true belief came under severe criticism in the 20th century, when epistemologist Edmund Gettier formulated a series of counterexamples. They purport to present concrete cases of justified true beliefs that fail to constitute knowledge. The reason for their failure is usually a form of epistemic luck: the beliefs are justified but their justification is not relevant to the truth. In a well-known example, someone drives along a country road with many barn facades and only one real barn. The person is not aware of this, stops in front of the real barn by a lucky coincidence, and forms the justified true belief that they are in front of a barn. This example aims to establish that the person does not know that they are in front of a real barn, since they would not have been able to tell the difference. This means that it is a lucky coincidence that this justified belief is also true.
According to some philosophers, these counterexamples show that justification is not required for knowledge and that knowledge should instead be characterized in terms of reliability or the manifestation of cognitive virtues. Another approach defines knowledge in regard to the function it plays in cognitive processes as that which provides reasons for thinking or doing something. A different response accepts justification as an aspect of knowledge and includes additional criteria. Many candidates have been suggested, like the requirements that the justified true belief does not depend on any false beliefs, that no defeaters are present, or that the person would not have the belief if it were false. Another view states that beliefs have to be infallible to amount to knowledge. A further approach, associated with pragmatism, focuses on the aspect of inquiry and characterizes knowledge in terms of what works as a practice that aims to produce habits of action. There is still very little consensus in the academic discourse as to which of the proposed modifications or reconceptualizations is correct, and there are various alternative definitions of knowledge.
Types
A common distinction among types of knowledge is between propositional knowledge, or knowledge-that, and non-propositional knowledge in the form of practical skills or acquaintance. Other distinctions focus on how the knowledge is acquired and on the content of the known information.
Propositional
Propositional knowledge, also referred to as declarative and descriptive knowledge, is a form of theoretical knowledge about facts, like knowing that "2 + 2 = 4". It is the paradigmatic type of knowledge in analytic philosophy. Propositional knowledge is propositional in the sense that it involves a relation to a proposition. Since propositions are often expressed through that-clauses, it is also referred to as knowledge-that, as in "Akari knows that kangaroos hop". In this case, Akari stands in the relation of knowing to the proposition "kangaroos hop". Closely related types of knowledge are know-wh, for example, knowing who is coming to dinner and knowing why they are coming. These expressions are normally understood as types of propositional knowledge since they can be paraphrased using a that-clause.
Propositional knowledge takes the form of mental representations involving concepts, ideas, theories, and general rules. These representations connect the knower to certain parts of reality by showing what they are like. They are often context-independent, meaning that they are not restricted to a specific use or purpose. Propositional knowledge encompasses both knowledge of specific facts, like that the atomic mass of gold is 196.97 u, and generalities, like that the color of leaves of some trees changes in autumn. Because of the dependence on mental representations, it is often held that the capacity for propositional knowledge is exclusive to relatively sophisticated creatures, such as humans. This is based on the claim that advanced intellectual capacities are needed to believe a proposition that expresses what the world is like.
Non-propositional
Non-propositional knowledge is knowledge in which no essential relation to a proposition is involved. The two most well-known forms are knowledge-how (know-how or procedural knowledge) and knowledge by acquaintance. To possess knowledge-how means to have some form of practical ability, skill, or competence, like knowing how to ride a bicycle or knowing how to swim. Some of the abilities responsible for knowledge-how involve forms of knowledge-that, as in knowing how to prove a mathematical theorem, but this is not generally the case. Some types of knowledge-how do not require a highly developed mind, in contrast to propositional knowledge, and are more common in the animal kingdom. For example, an ant knows how to walk even though it presumably lacks a mind sufficiently developed to represent the corresponding proposition.
Knowledge by acquaintance is familiarity with something that results from direct experiential contact. The object of knowledge can be a person, a thing, or a place. For example, by eating chocolate, one becomes acquainted with the taste of chocolate, and visiting Lake Taupō leads to the formation of knowledge by acquaintance of Lake Taupō. In these cases, the person forms non-inferential knowledge based on first-hand experience without necessarily acquiring factual information about the object. By contrast, it is also possible to indirectly learn a lot of propositional knowledge about chocolate or Lake Taupō by reading books without having the direct experiential contact required for knowledge by acquaintance. The concept of knowledge by acquaintance was first introduced by Bertrand Russell. He holds that knowledge by acquaintance is more basic than propositional knowledge since to understand a proposition, one has to be acquainted with its constituents.
A priori and a posteriori
The distinction between a priori and a posteriori knowledge depends on the role of experience in the processes of formation and justification. To know something a posteriori means to know it based on experience. For example, by seeing that it rains outside or hearing that the baby is crying, one acquires a posteriori knowledge of these facts. A priori knowledge is possible without any experience to justify or support the known proposition. Mathematical knowledge, such as that 2 + 2 = 4, is traditionally taken to be a priori knowledge since no empirical investigation is necessary to confirm this fact. In this regard, a posteriori knowledge is empirical knowledge while a priori knowledge is non-empirical knowledge.
The relevant experience in question is primarily identified with sensory experience. Some non-sensory experiences, like memory and introspection, are often included as well. Some conscious phenomena are excluded from the relevant experience, like rational insight. For example, conscious thought processes may be required to arrive at a priori knowledge regarding the solution of mathematical problems, like when performing mental arithmetic to multiply two numbers. The same is the case for the experience needed to learn the words through which the claim is expressed. For example, knowing that "all bachelors are unmarried" is a priori knowledge because no sensory experience is necessary to confirm this fact even though experience was needed to learn the meanings of the words "bachelor" and "unmarried".
It is difficult to explain how a priori knowledge is possible and some empiricists deny it exists. It is usually seen as unproblematic that one can come to know things through experience, but it is not clear how knowledge is possible without experience. One of the earliest solutions to this problem comes from Plato, who argues that the soul already possesses the knowledge and just needs to recollect, or remember, it to access it again. A similar explanation is given by Descartes, who holds that a priori knowledge exists as innate knowledge present in the mind of each human. A further approach posits a special mental faculty responsible for this type of knowledge, often referred to as rational intuition or rational insight.
Others
Various other types of knowledge are discussed in the academic literature. In philosophy, "self-knowledge" refers to a person's knowledge of their own sensations, thoughts, beliefs, and other mental states. A common view is that self-knowledge is more direct than knowledge of the external world, which relies on the interpretation of sense data. Because of this, it is traditionally claimed that self-knowledge is indubitable, like the claim that a person cannot be wrong about whether they are in pain. However, this position is not universally accepted in the contemporary discourse and an alternative view states that self-knowledge also depends on interpretations that could be false. In a slightly different sense, self-knowledge can also refer to knowledge of the self as a persisting entity with certain personality traits, preferences, physical attributes, relationships, goals, and social identities.
Metaknowledge is knowledge about knowledge. It can arise in the form of self-knowledge but includes other types as well, such as knowing what someone else knows or what information is contained in a scientific article. Other aspects of metaknowledge include knowing how knowledge can be acquired, stored, distributed, and used.
Common knowledge is knowledge that is publicly known and shared by most individuals within a community. It establishes a common ground for communication, understanding, social cohesion, and cooperation. General knowledge encompasses common knowledge but also includes knowledge that many people have been exposed to but may not be able to immediately recall. Common knowledge contrasts with domain knowledge or specialized knowledge, which belongs to a specific domain and is only possessed by experts.
Situated knowledge is knowledge specific to a particular situation. It is closely related to practical or tacit knowledge, which is learned and applied in specific circumstances. This especially concerns certain forms of acquiring knowledge, such as trial and error or learning from experience. In this regard, situated knowledge usually lacks a more explicit structure and is not articulated in terms of universal ideas. The term is often used in feminism and postmodernism to argue that many forms of knowledge are not absolute but depend on the concrete historical, cultural, and linguistic context.
Explicit knowledge is knowledge that can be fully articulated, shared, and explained, like the knowledge of historical dates and mathematical formulas. It can be acquired through traditional learning methods, such as reading books and attending lectures. It contrasts with tacit knowledge, which is not easily articulated or explained to others, like the ability to recognize someone's face and the practical expertise of a master craftsman. Tacit knowledge is often learned through first-hand experience or direct practice.
Cognitive load theory distinguishes between biologically primary and secondary knowledge. Biologically primary knowledge is knowledge that humans have as part of their evolutionary heritage, such as knowing how to recognize faces and speech and many general problem-solving capacities. Biologically secondary knowledge is knowledge acquired because of specific social and cultural circumstances, such as knowing how to read and write.
Knowledge can be occurrent or dispositional. Occurrent knowledge is knowledge that is actively involved in cognitive processes. Dispositional knowledge, by contrast, lies dormant in the back of a person's mind and is given by the mere ability to access the relevant information. For example, if a person knows that cats have whiskers then this knowledge is dispositional most of the time and becomes occurrent while they are thinking about it.
Many forms of Eastern spirituality and religion distinguish between higher and lower knowledge. They are also referred to as para vidya and apara vidya in Hinduism or the two truths doctrine in Buddhism. Lower knowledge is based on the senses and the intellect. It encompasses both mundane or conventional truths as well as discoveries of the empirical sciences. Higher knowledge is understood as knowledge of God, the absolute, the true self, or the ultimate reality. It belongs neither to the external world of physical objects nor to the internal world of the experience of emotions and concepts. Many spiritual teachings stress the importance of higher knowledge to progress on the spiritual path and to see reality as it truly is beyond the veil of appearances.
Sources
Sources of knowledge are ways in which people come to know things. They can be understood as cognitive capacities that are exercised when a person acquires new knowledge. Various sources of knowledge are discussed in the academic literature, often in terms of the mental faculties responsible. They include perception, introspection, memory, inference, and testimony. However, not everyone agrees that all of them actually lead to knowledge.
Usually, perception or observation, i.e. using one of the senses, is identified as the most important source of empirical knowledge. Knowing that a baby is sleeping is observational knowledge if it was caused by a perception of the snoring baby. However, this would not be the case if one learned about this fact through a telephone conversation with one's spouse. Perception comes in different modalities, including vision, sound, touch, smell, and taste, which correspond to different physical stimuli. It is an active process in which sensory signals are selected, organized, and interpreted to form a representation of the environment. This leads in some cases to illusions that misrepresent certain aspects of reality, like the Müller-Lyer illusion and the Ponzo illusion.
Introspection is often seen in analogy to perception as a source of knowledge, not of external physical objects, but of internal mental states. A traditionally common view is that introspection has a special epistemic status by being infallible. According to this position, it is not possible to be mistaken about introspective facts, like whether one is in pain, because there is no difference between appearance and reality. However, this claim has been contested in the contemporary discourse and critics argue that it may be possible, for example, to mistake an unpleasant itch for a pain or to confuse the experience of a slight ellipse for the experience of a circle. Perceptual and introspective knowledge often act as a form of fundamental or basic knowledge. According to some empiricists, they are the only sources of basic knowledge and provide the foundation for all other knowledge.
Memory differs from perception and introspection in that it is not as independent or basic as they are, since it depends on other previous experiences. The faculty of memory retains knowledge acquired in the past and makes it accessible in the present, as when remembering a past event or a friend's phone number. It is generally seen as a reliable source of knowledge. Nonetheless, it can be deceptive at times, either because the original experience was unreliable or because the memory has degraded and no longer accurately represents the original experience.
Knowledge based on perception, introspection, and memory may give rise to inferential knowledge, which comes about when reasoning is applied to draw inferences from other known facts. For example, the perceptual knowledge of a Czech stamp on a postcard may give rise to the inferential knowledge that one's friend is visiting the Czech Republic. This type of knowledge depends on other sources of knowledge responsible for the premises. Some rationalists argue for rational intuition as a further source of knowledge that does not rely on observation and introspection. They hold for example that some beliefs, like the mathematical belief that 2 + 2 = 4, are justified through pure reason alone.
Testimony is often included as an additional source of knowledge that, unlike the other sources, is not tied to one specific cognitive faculty. Instead, it is based on the idea that one person can come to know a fact because another person talks about this fact. Testimony can happen in numerous ways, like regular speech, a letter, a newspaper, or a blog. The problem of testimony consists in clarifying why and under what circumstances testimony can lead to knowledge. A common response is that it depends on the reliability of the person pronouncing the testimony: only testimony from reliable sources can lead to knowledge.
Limits
The problem of the limits of knowledge concerns the question of which facts are unknowable. These limits constitute a form of inevitable ignorance that can affect both what is knowable about the external world as well as what one can know about oneself and about what is good. Some limits of knowledge only apply to particular people in specific situations while others pertain to humanity at large. A fact is unknowable to a person if this person lacks access to the relevant information, like facts in the past that did not leave any significant traces. For example, it may be unknowable to people today what Caesar's breakfast was the day he was assassinated but it was knowable to him and some contemporaries. Another factor restricting knowledge is given by the limitations of the human cognitive faculties. Some people may lack the cognitive ability to understand highly abstract mathematical truths and some facts cannot be known by any human because they are too complex for the human mind to conceive. A further limit of knowledge arises due to certain logical paradoxes. For instance, there are some ideas that will never occur to anyone. It is not possible to know them because if a person knew about such an idea then this idea would have occurred at least to them.
There are many disputes about what can or cannot be known in certain fields. Religious skepticism is the view that beliefs about God or other religious doctrines do not amount to knowledge. Moral skepticism encompasses a variety of views, including the claim that moral knowledge is impossible, meaning that one cannot know what is morally good or whether a certain behavior is morally right. An influential theory about the limits of metaphysical knowledge was proposed by Immanuel Kant. For him, knowledge is restricted to the field of appearances and does not reach the things in themselves, which exist independently of humans and lie beyond the realm of appearances. Based on the observation that metaphysics aims to characterize the things in themselves, he concludes that no metaphysical knowledge is possible, like knowing whether the world has a beginning or is infinite.
There are also limits to knowledge in the empirical sciences, such as the uncertainty principle, which states that it is impossible to know the exact magnitudes of certain pairs of physical properties, like the position and momentum of a particle, at the same time. Other examples are physical systems studied by chaos theory, for which it is not practically possible to predict how they will behave since they are so sensitive to initial conditions that even the slightest variation may produce completely different behavior. This phenomenon is known as the butterfly effect.
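The sensitivity to initial conditions behind the butterfly effect can be illustrated with the logistic map, a standard toy model from chaos theory. The following sketch is illustrative only and is not drawn from the sources of this article:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x)
# with r = 4, a standard chaotic regime. Two trajectories that start
# almost identically soon diverge completely.

def trajectory(x, r=4.0, steps=50):
    """Iterate the logistic map and return the visited values."""
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # perturb the starting point by one billionth

# The gap between the trajectories grows by many orders of magnitude,
# so long-term prediction from imprecise measurements is hopeless.
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap)
```

Because the gap roughly doubles at each step, even a measurement error of one part in a billion destroys predictability within a few dozen iterations.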
The strongest position about the limits of knowledge is radical or global skepticism, which holds that humans lack any form of knowledge or that knowledge is impossible. For example, the dream argument states that perceptual experience is not a source of knowledge since dreaming provides unreliable information and a person could be dreaming without knowing it. Because of this inability to discriminate between dream and perception, it is argued that there is no perceptual knowledge of the external world. This thought experiment is based on the problem of underdetermination, which arises when the available evidence is not sufficient to make a rational decision between competing theories. In such cases, a person is not justified in believing one theory rather than the other. If this is always the case then global skepticism follows. Another skeptical argument assumes that knowledge requires absolute certainty and aims to show that all human cognition is fallible since it fails to meet this standard.
An influential argument against radical skepticism states that radical skepticism is self-contradictory since denying the existence of knowledge is itself a knowledge-claim. Other arguments rely on common sense or deny that infallibility is required for knowledge. Very few philosophers have explicitly defended radical skepticism but this position has been influential nonetheless, usually in a negative sense: many see it as a serious challenge to any epistemological theory and often try to show how their preferred theory overcomes it. Another form of philosophical skepticism advocates the suspension of judgment as a form of attaining tranquility while remaining humble and open-minded.
A less radical limit of knowledge is identified by fallibilists, who argue that the possibility of error can never be fully excluded. This means that even the best-researched scientific theories and the most fundamental commonsense views could still be subject to error. Further research may reduce the possibility of being wrong, but it can never fully exclude it. Some fallibilists draw from this observation the skeptical conclusion that there is no knowledge, but the more common view is that knowledge exists but is fallible. Pragmatists argue that one consequence of fallibilism is that inquiry should not aim for truth or absolute certainty but for well-supported and justified beliefs while remaining open to the possibility that one's beliefs may need to be revised later.
Structure
The structure of knowledge is the way in which the mental states of a person need to be related to each other for knowledge to arise. A common view is that a person has to have good reasons for holding a belief if this belief is to amount to knowledge. When the belief is challenged, the person may justify it by referring to their reason for holding it. In many cases, this reason itself depends on another belief that may likewise be challenged. An example is a person who believes that Ford cars are cheaper than BMWs. When their belief is challenged, they may justify it by claiming that they heard it from a reliable source. This justification depends on the assumption that their source is reliable, which may itself be challenged. The same may apply to any subsequent reason they cite. This threatens to lead to an infinite regress since the epistemic status at each step depends on the epistemic status of the previous step. Theories of the structure of knowledge offer responses for how to solve this problem.
Three traditional theories are foundationalism, coherentism, and infinitism. Foundationalists and coherentists deny the existence of an infinite regress, in contrast to infinitists. According to foundationalists, some basic reasons have their epistemic status independent of other reasons and thereby constitute the endpoint of the regress. Some foundationalists hold that certain sources of knowledge, like perception, provide basic reasons. Another view is that this role is played by certain self-evident truths, like the knowledge of one's own existence and the content of one's ideas. The view that basic reasons exist is not universally accepted. One criticism states that there should be a reason why some reasons are basic while others are not. According to this view, the putative basic reasons are not actually basic since their status would depend on other reasons. Another criticism is based on hermeneutics and argues that all understanding is circular and requires interpretation, which implies that knowledge does not need a secure foundation.
Coherentists and infinitists avoid these problems by denying the contrast between basic and non-basic reasons. Coherentists argue that there is only a finite number of reasons, which mutually support and justify one another. This is based on the intuition that beliefs do not exist in isolation but form a complex web of interconnected ideas that is justified by its coherence rather than by a few privileged foundational beliefs. One difficulty for this view is how to demonstrate that it does not involve the fallacy of circular reasoning. If two beliefs mutually support each other then a person has a reason for accepting one belief if they already have the other. However, mutual support alone is not a good reason for newly accepting both beliefs at once. A closely related issue is that there can be distinct sets of coherent beliefs. Coherentists face the problem of explaining why someone should accept one coherent set rather than another. For infinitists, in contrast to foundationalists and coherentists, there is an infinite number of reasons. This view embraces the idea that there is a regress since each reason depends on another reason. One difficulty for this view is that the human mind is limited and may not be able to possess an infinite number of reasons. This raises the question of whether, according to infinitism, human knowledge is possible at all.
Value
Knowledge may be valuable either because it is useful or because it is good in itself. Knowledge can be useful by helping a person achieve their goals. For example, knowing the answers to the questions in an exam enables one to pass it, and knowing which horse is the fastest enables one to win money from bets. In these cases, knowledge has instrumental value. Not all forms of knowledge are useful and many beliefs about trivial matters have no instrumental value. This concerns, for example, knowing how many grains of sand are on a specific beach or memorizing phone numbers one never intends to call. In a few cases, knowledge may even have a negative value. For example, if a person's life depends on gathering the courage to jump over a ravine, then having a true belief about the involved dangers may hinder them from doing so.
Besides having instrumental value, knowledge may also have intrinsic value. This means that some forms of knowledge are good in themselves even if they do not provide any practical benefits. According to philosopher Duncan Pritchard, this applies to forms of knowledge linked to wisdom. It is controversial whether all knowledge has intrinsic value, including knowledge about trivial facts like knowing whether the biggest apple tree had an even number of leaves yesterday morning. One view in favor of the intrinsic value of knowledge states that having no belief about a matter is a neutral state and knowledge is always better than this neutral state, even if the value difference is only minimal.
A more specific issue in epistemology concerns the question of whether or why knowledge is more valuable than mere true belief. There is wide agreement that knowledge is usually good in some sense but the thesis that knowledge is better than true belief is controversial. An early discussion of this problem is found in Plato's Meno in relation to the claim that both knowledge and true belief can successfully guide action and therefore apparently have the same value. For example, it seems that mere true belief is as effective as knowledge when trying to find the way to Larissa. According to Plato, knowledge is better because it is more stable. Another suggestion is that knowledge gets its additional value from justification. One difficulty for this view is that while justification makes it more probable that a belief is true, it is not clear what additional value it provides in comparison to an unjustified belief that is already true.
The problem of the value of knowledge is often discussed in relation to reliabilism and virtue epistemology. Reliabilism can be defined as the thesis that knowledge is reliably formed true belief. This view has difficulties in explaining why knowledge is valuable or how a reliable belief-forming process adds additional value. According to an analogy by philosopher Linda Zagzebski, a cup of coffee made by a reliable coffee machine has the same value as an equally good cup of coffee made by an unreliable coffee machine. This difficulty in solving the value problem is sometimes used as an argument against reliabilism. Virtue epistemology, by contrast, offers a unique solution to the value problem. Virtue epistemologists see knowledge as the manifestation of cognitive virtues. They hold that knowledge has additional value due to its association with virtue. This is based on the idea that cognitive success in the form of the manifestation of virtues is inherently valuable independent of whether the resulting states are instrumentally useful.
Acquiring and transmitting knowledge often comes with certain costs, such as the material resources required to obtain new information and the time and energy needed to understand it. For this reason, an awareness of the value of knowledge is crucial to many fields that have to make decisions about whether to seek knowledge about a specific matter. On a political level, this concerns the problem of identifying the most promising research programs to allocate funds. Similar concerns affect businesses, where stakeholders have to decide whether the cost of acquiring knowledge is justified by the economic benefits that this knowledge may provide, and the military, which relies on intelligence to identify and prevent threats. In the field of education, the value of knowledge can be used to choose which knowledge should be passed on to the students.
Science
The scientific approach is usually regarded as an exemplary process of how to gain knowledge about empirical facts. Scientific knowledge includes mundane knowledge about easily observable facts, for example, chemical knowledge that certain reactants become hot when mixed together. It also encompasses knowledge of less tangible issues, like claims about the behavior of genes, neutrinos, and black holes.
A key aspect of most forms of science is that they seek natural laws that explain empirical observations. Scientific knowledge is discovered and tested using the scientific method. This method aims to arrive at reliable knowledge by formulating the problem in a clear way and by ensuring that the evidence used to support or refute a specific theory is public, reliable, and replicable. This way, other researchers can repeat the experiments and observations in the initial study to confirm or disconfirm it. The scientific method is often analyzed as a series of steps that begins with regular observation and data collection. Based on these insights, scientists then try to find a hypothesis that explains the observations. The hypothesis is then tested using a controlled experiment to compare whether predictions based on the hypothesis match the observed results. As a last step, the results are interpreted and a conclusion is reached whether and to what degree the findings confirm or disconfirm the hypothesis.
The empirical sciences are usually divided into natural and social sciences. The natural sciences, like physics, biology, and chemistry, focus on quantitative research methods to arrive at knowledge about natural phenomena. Quantitative research proceeds by making precise numerical measurements, and the natural sciences often rely on advanced technological instruments to perform these measurements and to set up experiments. Another common feature of their approach is to use mathematical tools to analyze the measured data and formulate exact and general laws to describe the observed phenomena.
The social sciences, like sociology, anthropology, and communication studies, examine social phenomena on the level of human behavior, relationships, and society at large. While they also make use of quantitative research, they usually give more emphasis to qualitative methods. Qualitative research gathers non-numerical data, often with the goal of arriving at a deeper understanding of the meaning and interpretation of social phenomena from the perspective of those involved. This approach can take various forms, such as interviews, focus groups, and case studies. Mixed-method research combines quantitative and qualitative methods to explore the same phenomena from a variety of perspectives to get a more comprehensive understanding.
The progress of scientific knowledge is traditionally seen as a gradual and continuous process in which the existing body of knowledge is increased at each step. This view has been challenged by some philosophers of science, such as Thomas Kuhn, who holds that between phases of incremental progress, there are so-called scientific revolutions in which a paradigm shift occurs. According to this view, some basic assumptions are changed due to the paradigm shift, resulting in a radically new perspective on the body of scientific knowledge that is incommensurable with the previous outlook.
Scientism refers to a group of views that privilege the sciences and the scientific method over other forms of inquiry and knowledge acquisition. In its strongest formulation, it is the claim that there is no other knowledge besides scientific knowledge. A common critique of scientism, made by philosophers such as Hans-Georg Gadamer and Paul Feyerabend, is that the fixed requirement of following the scientific method is too rigid and results in a misleading picture of reality by excluding various relevant phenomena from the scope of knowledge.
History
The history of knowledge is the field of inquiry that studies how knowledge in different fields has developed and evolved in the course of history. It is closely related to the history of science, but covers a wider area that includes knowledge from fields like philosophy, mathematics, education, literature, art, and religion. It further covers practical knowledge of specific crafts, medicine, and everyday practices. It investigates not only how knowledge is created and employed, but also how it is disseminated and preserved.
Before the ancient period, knowledge about social conduct and survival skills was passed down orally and in the form of customs from one generation to the next. The ancient period saw the rise of major civilizations starting about 3000 BCE in Mesopotamia, Egypt, India, and China. The invention of writing in this period significantly increased the amount of stable knowledge within society since it could be stored and shared without being limited by imperfect human memory. During this time, the first developments in scientific fields like mathematics, astronomy, and medicine were made. They were later formalized and greatly expanded by the ancient Greeks starting in the 6th century BCE. Other ancient advancements concerned knowledge in the fields of agriculture, law, and politics.
In the medieval period, religious knowledge was a central concern, and religious institutions, like the Catholic Church in Europe, influenced intellectual activity. Jewish communities set up yeshivas as centers for studying religious texts and Jewish law. In the Muslim world, madrasa schools were established and focused on Islamic law and Islamic philosophy. Many intellectual achievements of the ancient period were preserved, refined, and expanded during the Islamic Golden Age from the 8th to 13th centuries. Centers of higher learning were established in this period in various regions, like Al-Qarawiyyin University in Morocco, the Al-Azhar University in Egypt, the House of Wisdom in Iraq, and the first universities in Europe. This period also saw the formation of guilds, which preserved and advanced technical and craft knowledge.
In the Renaissance period, starting in the 14th century, there was a renewed interest in the humanities and sciences. The printing press was invented in the 15th century and significantly increased the availability of written media and general literacy of the population. These developments served as the foundation of the Scientific Revolution in the Age of Enlightenment starting in the 16th and 17th centuries. It led to an explosion of knowledge in fields such as physics, chemistry, biology, and the social sciences. The technological advancements that accompanied this development made possible the Industrial Revolution in the 18th and 19th centuries. In the 20th century, the development of computers and the Internet led to a vast expansion of knowledge by revolutionizing how knowledge is stored, shared, and created.
In various disciplines
Religion
Knowledge plays a central role in many religions. Knowledge claims about the existence of God or religious doctrines about how each one should live their lives are found in almost every culture. However, such knowledge claims are often controversial and are commonly rejected by religious skeptics and atheists. The epistemology of religion is the field of inquiry studying whether belief in God and in other religious doctrines is rational and amounts to knowledge. One important view in this field is evidentialism, which states that belief in religious doctrines is justified if it is supported by sufficient evidence. Suggested examples of evidence for religious doctrines include religious experiences such as direct contact with the divine or inner testimony when hearing God's voice. Evidentialists often reject that belief in religious doctrines amounts to knowledge based on the claim that there is not sufficient evidence. A famous saying in this regard is due to Bertrand Russell. When asked how he would justify his lack of belief in God when facing his judgment after death, he replied "Not enough evidence, God! Not enough evidence."
However, religious teachings about the existence and nature of God are not always seen as knowledge claims by their defenders. Some explicitly state that the proper attitude towards such doctrines is not knowledge but faith. This is often combined with the assumption that these doctrines are true but cannot be fully understood by reason or verified through rational inquiry. For this reason, it is claimed that one should accept them even though they do not amount to knowledge. Such a view is reflected in a famous saying by Immanuel Kant where he claims that he "had to deny knowledge in order to make room for faith."
Distinct religions often differ from each other concerning the doctrines they proclaim as well as their understanding of the role of knowledge in religious practice. In both the Jewish and the Christian traditions, knowledge plays a role in the fall of man, in which Adam and Eve were expelled from the Garden of Eden. The fall came about because they disobeyed God's command and ate from the tree of knowledge, which gave them the knowledge of good and evil. This is seen as a rebellion against God since this knowledge belongs to God and it is not for humans to decide what is right or wrong. In the Christian literature, knowledge is seen as one of the seven gifts of the Holy Spirit. In Islam, "the Knowing" (al-ʿAlīm) is one of the 99 names reflecting distinct attributes of God. The Qur'an asserts that knowledge comes from Allah and the acquisition of knowledge is encouraged in the teachings of Muhammad.
In Buddhism, knowledge that leads to liberation is called vijjā. It contrasts with avijjā or ignorance, which is understood as the root of all suffering. This is often explained in relation to the claim that humans suffer because they crave things that are impermanent. The ignorance of the impermanent nature of things is seen as the factor responsible for this craving. The central goal of Buddhist practice is to stop suffering. This aim is to be achieved by understanding and practicing the teaching known as the Four Noble Truths and thereby overcoming ignorance. Knowledge plays a key role in the classical path of Hinduism known as jñāna yoga or "path of knowledge". It aims to achieve oneness with the divine by fostering an understanding of the self and its relation to Brahman or ultimate reality.
Anthropology
The anthropology of knowledge is a multi-disciplinary field of inquiry. It studies how knowledge is acquired, stored, retrieved, and communicated. Special interest is given to how knowledge is reproduced and changes in relation to social and cultural circumstances. In this context, the term knowledge is used in a very broad sense, roughly equivalent to terms like understanding and culture. This means that the forms and reproduction of understanding are studied irrespective of their truth value. In epistemology, by contrast, knowledge is usually restricted to forms of true belief. The main focus in anthropology is on empirical observations of how people ascribe truth values to meaning contents, like when affirming an assertion, even if these contents are false. This also includes practical components: knowledge is what is employed when interpreting and acting on the world and involves diverse phenomena, such as feelings, embodied skills, information, and concepts. It is used to understand and anticipate events to prepare and react accordingly.
The reproduction of knowledge and its changes often happen through some form of communication used to transfer knowledge. This includes face-to-face discussions and online communications as well as seminars and rituals. An important role in this context falls to institutions, like university departments or scientific journals in the academic context. Anthropologists of knowledge understand traditions as knowledge that has been reproduced within a society or geographic region over several generations. They are interested in how this reproduction is affected by external influences. For example, societies tend to interpret knowledge claims found in other societies and incorporate them in a modified form.
Within a society, people belonging to the same social group usually understand things and organize knowledge in similar ways to one another. In this regard, social identities play a significant role: people who associate themselves with similar identities, like age-influenced, professional, religious, and ethnic identities, tend to embody similar forms of knowledge. Such identities concern both how a person sees themselves, for example, in terms of the ideals they pursue, as well as how other people see them, such as the expectations they have toward the person.
Sociology
The sociology of knowledge is the subfield of sociology that studies how thought and society are related to each other. Like the anthropology of knowledge, it understands "knowledge" in a wide sense that encompasses philosophical and political ideas, religious and ideological doctrines, folklore, law, and technology. The sociology of knowledge studies in what sociohistorical circumstances knowledge arises, what consequences it has, and on what existential conditions it depends. The examined conditions include physical, demographic, economic, and sociocultural factors. For instance, philosopher Karl Marx claimed that the dominant ideology in a society is a product of and changes with the underlying socioeconomic conditions. Another example is found in forms of decolonial scholarship that claim that colonial powers are responsible for the hegemony of Western knowledge systems. They seek a decolonization of knowledge to undermine this hegemony. A related issue concerns the link between knowledge and power, in particular, the extent to which knowledge is power. The philosopher Michel Foucault explored this issue and examined how knowledge and the institutions responsible for it control people through what he termed biopower by shaping societal norms, values, and regulatory mechanisms in fields like psychiatry, medicine, and the penal system.
A central subfield is the sociology of scientific knowledge, which investigates the social factors involved in the production and validation of scientific knowledge. This encompasses examining the impact of the distribution of resources and rewards on the scientific process, which leads some areas of research to flourish while others languish. Further topics focus on selection processes, such as how academic journals decide whether to publish an article and how academic institutions recruit researchers, and the general values and norms characteristic of the scientific profession.
Others
Formal epistemology studies knowledge using formal tools found in mathematics and logic. An important issue in this field concerns the epistemic principles of knowledge. These are rules governing how knowledge and related states behave and in what relations they stand to each other. The transparency principle, also referred to as the luminosity of knowledge, states that it is impossible for someone to know something without knowing that they know it. According to the conjunction principle, if a person has justified beliefs in two separate propositions, then they are also justified in believing the conjunction of these two propositions. For example, if Bob has a justified belief that dogs are animals and another justified belief that cats are animals, then he is justified in believing the conjunction that both dogs and cats are animals. Other commonly discussed principles are the closure principle and the evidence transfer principle.
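The conjunction principle can be sketched as a toy model (an illustration invented here, not a standard formalism): represent a person's justified beliefs as a set of propositions and close it under pairwise conjunction.

```python
# Toy model of the conjunction principle: if a person is justified
# in believing p and justified in believing q, they are also
# justified in believing the conjunction of p and q.
from itertools import combinations

def close_under_conjunction(justified):
    """Extend a set of justified propositions with all pairwise
    conjunctions, represented here as frozensets of conjuncts."""
    extended = set(justified)
    for p, q in combinations(justified, 2):
        extended.add(frozenset({p, q}))
    return extended

# Bob's two separately justified beliefs from the example above.
bob = {"dogs are animals", "cats are animals"}
closed = close_under_conjunction(bob)

# Bob is now also justified in believing the conjunction.
print(frozenset({"dogs are animals", "cats are animals"}) in closed)  # True
```

Representing a conjunction as a frozenset of its conjuncts is just a convenience for this sketch; a real formal treatment would use an epistemic or doxastic logic with an explicit conjunction operator.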
Knowledge management is the process of creating, gathering, storing, and sharing knowledge. It involves the management of information assets that can take the form of documents, databases, policies, and procedures. It is of particular interest in the field of business and organizational development, as it directly impacts decision-making and strategic planning. Knowledge management efforts are often employed to increase operational efficiency in attempts to gain a competitive advantage. Key processes in the field of knowledge management are knowledge creation, knowledge storage, knowledge sharing, and knowledge application. Knowledge creation is the first step and involves the production of new information. Knowledge storage can happen through media like books, audio recordings, film, and digital databases. Secure storage facilitates knowledge sharing, which involves the transmission of information from one person to another. For the knowledge to be beneficial, it has to be put into practice, meaning that its insights should be used to either improve existing practices or implement new ones.
Knowledge representation is the process of storing organized information, which may happen using various forms of media and also includes information stored in the mind. It plays a key role in artificial intelligence, where the term is used for the field of inquiry that studies how computer systems can efficiently represent information. This field investigates how different data structures and interpretative procedures can be combined to achieve this goal and which formal languages can be used to express knowledge items. Some efforts in this field are directed at developing general languages and systems that can be employed in a great variety of domains while others focus on an optimized representation method within one specific domain. Knowledge representation is closely linked to automatic reasoning because the purpose of knowledge representation formalisms is usually to construct a knowledge base from which inferences are drawn. Influential knowledge base formalisms include logic-based systems, rule-based systems, semantic networks, and frames. Logic-based systems rely on formal languages employed in logic to represent knowledge. They use linguistic devices like individual terms, predicates, and quantifiers. For rule-based systems, each unit of information is expressed using a conditional production rule of the form "if A then B". Semantic nets model knowledge as a graph consisting of vertices to represent facts or concepts and edges to represent the relations between them. Frames provide complex taxonomies to group items into classes, subclasses, and instances.
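A rule-based system of the kind described above, with production rules of the form "if A then B", can be sketched as a simple forward-chaining loop. The rule and fact names below are invented for the example, and real production-rule engines are far more elaborate:

```python
# Minimal forward chaining over production rules of the form
# (premises, conclusion): keep firing rules until no new fact
# can be derived from the knowledge base.

def forward_chain(facts, rules):
    """Derive all facts reachable from the initial facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

rules = [
    ({"is a dog"}, "is a mammal"),      # "if A then B"
    ({"is a mammal"}, "is an animal"),
]
derived = forward_chain({"is a dog"}, rules)
print(derived)  # the dog is also derived to be a mammal and an animal
```

The loop corresponds to the automatic-reasoning side of knowledge representation: the rules are the stored knowledge, and inference consists in repeatedly applying them until a fixed point is reached.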
Pedagogy is the study of teaching methods or the art of teaching. It explores how learning takes place and which techniques teachers may employ to transmit knowledge to students and improve their learning experience while keeping them motivated. There is a great variety of teaching methods and the most effective approach often depends on factors like the subject matter and the age and proficiency level of the learner. In teacher-centered education, the teacher acts as the authority figure imparting information and directing the learning process. Student-centered approaches give a more active role to students with the teacher acting as a coach to facilitate the process. Further methodological considerations encompass the difference between group work and individual learning and the use of instructional media and other forms of educational technology.
Postpositivism

Postpositivism or postempiricism is a metatheoretical stance that critiques and amends positivism and has impacted theories and practices across philosophy, social sciences, and various models of scientific inquiry. While positivists emphasize independence between the researcher and the researched person (or object), postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
Philosophy
Epistemology
Postpositivists believe that human knowledge is based not on a priori assessments from an objective individual, but rather upon human conjectures. As human knowledge is thus unavoidably conjectural, the assertion of these conjectures is warranted or, more specifically, justified by a set of warrants, which can be modified or withdrawn in the light of further investigation. However, postpositivism is not a form of relativism, and generally retains the idea of objective truth.
Ontology
Postpositivists believe that a reality exists, but, unlike positivists, they believe reality can be known only imperfectly. Postpositivists also draw from social constructionism in forming their understanding and definition of reality.
Axiology
While positivists believe that research is or can be value-free or value-neutral, postpositivists take the position that bias is undesired but inevitable, and therefore the investigator must work to detect and try to correct it. Postpositivists work to understand how their axiology (i.e. values and beliefs) may have influenced their research, including through their choice of measures, populations, questions, and definitions, as well as through their interpretation and analysis of their work.
History
Historians identify two types of positivism: classical positivism, an empirical tradition first described by Henri de Saint-Simon and Auguste Comte in the first half of the 19th century, and logical positivism, which is most strongly associated with the Vienna Circle, which met in Vienna, Austria, in the 1920s and 1930s. Postpositivism is the name D.C. Phillips gave to a group of critiques and amendments which apply to both forms of positivism.
One of the first thinkers to criticize logical positivism was Karl Popper. He advanced falsification in lieu of the logical positivist idea of verificationism. Falsificationism argues that it is impossible to verify that beliefs about universals or unobservables are true, though it is possible to reject false beliefs if they are phrased in a way amenable to falsification.
In 1965, Karl Popper and Thomas Kuhn debated the issue, since Kuhn's account of science did not incorporate this idea of falsification. The debate has influenced contemporary research methodologies.
Thomas Kuhn is credited with having popularized and at least in part originated the post-empiricist philosophy of science. Kuhn's idea of paradigm shifts offers a broader critique of logical positivism, arguing that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. The work of philosophers Nancy Cartwright and Ian Hacking is representative of these ideas. Postpositivism of this type is described in social science guides to research methods.
Structure of a postpositivist theory
Robert Dubin describes the basic components of a postpositivist theory as being composed of basic "units" or ideas and topics of interest, "laws of interactions" among the units, and a description of the "boundaries" for the theory. A postpositivist theory also includes "empirical indicators" to connect the theory to observable phenomena, and hypotheses that are testable using the scientific method.
According to Thomas Kuhn, a postpositivist theory can be assessed on the basis of whether it is "accurate", "consistent", "broad in scope", "parsimonious", and "fruitful".
Main publications
Karl Popper (1934) Logik der Forschung, rewritten in English as The Logic of Scientific Discovery (1959)
Thomas Kuhn (1962) The Structure of Scientific Revolutions
Karl Popper (1963) Conjectures and Refutations
Ian Hacking (1983) Representing and Intervening
Andrew Pickering (1984) Constructing Quarks
Peter Galison (1987) How Experiments End
Nancy Cartwright (1989) Nature's Capacities and Their Measurement
See also
Antipositivism
Philosophy of science
Scientism
Sociology of scientific knowledge
References
Alexander, J.C. (1995), Fin De Siecle Social Theory: Relativism, Reductionism and The Problem of Reason, London: Verso.
Phillips, D.C. & Nicholas C. Burbules (2000): Postpositivism and Educational Research. Lanham & Boulder: Rowman & Littlefield Publishers.
Zammito, John H. (2004): A Nice Derangement of Epistemes. Post-positivism in the study of Science from Quine to Latour. Chicago & London: The University of Chicago Press.
Popper, K. (1963), Conjectures and Refutations: The Growth of Scientific Knowledge, London: Routledge.
Moore, R. (2009), Towards the Sociology of Truth, London: Continuum.
Essentialism

Essentialism is the view that objects have a set of attributes that are necessary to their identity. In early Western thought, Platonic idealism held that all things have such an "essence"—an "idea" or "form". In Categories, Aristotle similarly proposed that all objects have a substance that, as George Lakoff put it, "make the thing what it is, and without which it would be not that kind of thing". The contrary view—non-essentialism—denies the need to posit such an "essence". Essentialism has been controversial from its beginning. In the Parmenides dialogue, Plato depicts Socrates questioning the notion, suggesting that if we accept the idea that every beautiful thing or just action partakes of an essence to be beautiful or just, we must also accept the "existence of separate essences for hair, mud, and dirt".
Older social theories were often conceptually essentialist. In biology and other natural sciences, essentialism provided the rationale for taxonomy at least until the time of Charles Darwin. The role and importance of essentialism in modern biology is still a matter of debate. Beliefs which posit that social identities such as race, ethnicity, nationality, or gender are essential characteristics have been central to many discriminatory or extremist ideologies. For instance, psychological essentialism is correlated with racial prejudice. Essentialist views about race have also been shown to diminish empathy when dealing with members of another racial group. In medical sciences, essentialism can lead to a reified view of identities, leading to fallacious conclusions and potentially unequal treatment.
In philosophy
An essence characterizes a substance or a form, in the sense of the forms and ideas in Platonic idealism. It is permanent, unalterable, and eternal, and is present in every possible world. Classical humanism has an essentialist conception of the human, in its endorsement of the notion of an eternal and unchangeable human nature. This has been criticized by Kierkegaard, Marx, Heidegger, Sartre, Badiou and many other existential, materialist and anti-humanist thinkers. Essentialism, in its broadest sense, is any philosophy that acknowledges the primacy of essence. Unlike existentialism, which posits "being" as the fundamental reality, the essentialist ontology must be approached from a metaphysical perspective. Empirical knowledge is developed from experience of a relational universe whose components and attributes are defined and measured in terms of intellectually constructed laws. Thus, for the scientist, reality is explored as an evolutionary system of diverse entities, the order of which is determined by the principle of causality.
In Plato's philosophy, in particular the Timaeus and the Philebus, things were said to come into being by the action of a demiurge who works to form chaos into ordered entities. Many definitions of essence hark back to the ancient Greek hylomorphic understanding of the formation of things. According to that account, the structure and real existence of any thing can be understood by analogy to an artefact produced by a craftsperson. The craftsperson requires hyle (timber or wood) and a model, plan or idea in their own mind, according to which the wood is worked to give it the indicated contour or form (morphe). Aristotle was the first to use the terms hyle and morphe. According to his explanation, all entities have two aspects: "matter" and "form". It is the particular form imposed that gives some matter its identity—its quiddity or "whatness" (i.e., "what it is").

Plato was one of the first essentialists, postulating the concept of ideal forms—an abstract entity of which individual objects are mere facsimiles. To give an example: the ideal form of a circle is a perfect circle, something that is physically impossible to make manifest; yet the circles we draw and observe clearly have some idea in common—the ideal form. Plato proposed that these ideas are eternal and vastly superior to their manifestations, and that we understand these manifestations in the material world by comparing and relating them to their respective ideal form. Plato's forms are regarded as patriarchs to essentialist dogma simply because they are a case of what is intrinsic and a-contextual of objects—the abstract properties that make them what they are. One example is Plato's parable of the cave. Plato believed that the universe was perfect and that its observed imperfections came from man's limited perception of it. For Plato, there were two realities: the "essential" or ideal and the "perceived".
Aristotle (384–322 BC) applied the term essence to that which things in a category have in common and without which they cannot be members of that category (for example, rationality is the essence of man; without rationality a creature cannot be a man). In his critique of Aristotle's philosophy, Bertrand Russell said that his concept of essence transferred to metaphysics what was only a verbal convenience and that it confused the properties of language with the properties of the world. In fact, a thing's "essence" consisted in those defining properties without which we could not use the name for it. Although the concept of essence was "hopelessly muddled" it became part of every philosophy until modern times. The Egyptian-born philosopher Plotinus (204–270 AD) brought idealism to the Roman Empire as Neoplatonism, and with it the concept that not only do all existents emanate from a "primary essence" but that the mind plays an active role in shaping or ordering the objects of perception, rather than passively receiving empirical data.
Examples
Naturalism
Dating back to the 18th century, naturalism is a form of essentialism in which social matters are explained through the logic of natural dispositions. The invoked nature can be biological, ontological or theological. Its opponent is culturalism.
Human nature
In the case of Homo sapiens, the divergent conceptions of human nature may be partitioned into essentialist versus non-essentialist (or even anti-essentialist) positions. Another established dichotomy is that of monism versus pluralism about the matter.
Biological essentialism
Before evolution was developed as a scientific theory, the essentialist view of biology posited that all species are unchanging throughout time. The historian Mary P. Winsor has argued that biologists such as Louis Agassiz in the 19th century believed that taxa such as species and genus were fixed, reflecting the mind of the creator. Some religious opponents of evolution continue to maintain this view of biology.
Work by historians of systematic biology in the 21st century has cast doubt upon this view of pre-Darwinian thinkers. Winsor, Ron Amundson and Staffan Müller-Wille have each argued that in fact the usual suspects (such as Linnaeus and the Ideal Morphologists) were very far from being essentialists, and that the so-called "essentialism story" (or "myth") in biology is a result of conflating the views expressed and biological examples used by philosophers going back to Aristotle and continuing through to John Stuart Mill and William Whewell in the immediately pre-Darwinian period, with the way that biologists used such terms as species.
Anti-essentialists contend that an essentialist typological categorization has been rendered obsolete and untenable by evolutionary theory for several reasons. First, they argue that biological species are dynamic entities, emerging and disappearing as distinct populations are molded by natural selection. This view contrasts with the static essences that essentialists say characterize natural categories. Second, the opponents of essentialism argue that our current understanding of biological species emphasizes genealogical relationships rather than intrinsic traits. Lastly, non-essentialists assert that every organism has a mutational load, and the variability and diversity within species contradict the notion of fixed biological natures.
Gender essentialism
In feminist theory and gender studies, gender essentialism is the attribution of fixed essences to men and women—this idea that men and women are fundamentally different continues to be a matter of contention. Gay/lesbian rights advocate Diana Fuss wrote: "Essentialism is most commonly understood as a belief in the real, true essence of things, the invariable and fixed properties which define the 'whatness' of a given entity." Women's essence is assumed to be universal and is generally identified with those characteristics viewed as being specifically feminine. These ideas of femininity are usually biologized and are often preoccupied with psychological characteristics, such as nurturance, empathy, support, and non-competitiveness, etc. Feminist theorist Elizabeth Grosz states in her 1995 publication Space, time and perversion: essays on the politics of bodies that essentialism "entails the belief that those characteristics defined as women's essence are shared in common by all women at all times. It implies a limit of the variations and possibilities of change—it is not possible for a subject to act in a manner contrary to her essence. Her essence underlies all the apparent variations differentiating women from each other. Essentialism thus refers to the existence of fixed characteristic, given attributes, and ahistorical functions that limit the possibilities of change and thus of social reorganization."
Gender essentialism is pervasive in popular culture, as illustrated by the #1 New York Times best seller Men Are from Mars, Women Are from Venus, but this essentialism is routinely critiqued in introductory women's studies textbooks such as Women: Images & Realities. Starting in the 1980s, some feminist writers have put forward essentialist theories about gender and science. Evelyn Fox Keller, Sandra Harding, and Nancy Tuana argued that the modern scientific enterprise is inherently patriarchal and incompatible with women's nature. Other feminist scholars, such as Ann Hibner Koblitz, Lenore Blum, Mary Gray, Mary Beth Ruskai, and Pnina Abir-Am and Dorinda Outram, have criticized those theories for ignoring the diverse nature of scientific research and the tremendous variation in women's experiences in different cultures and historical periods.
Racial, cultural and strategic essentialism
Cultural and racial essentialism is the view that fundamental biological or physical characteristics of human "races" produce personality, heritage, cognitive abilities, or 'natural talents' that are shared by all members of a racial group. In the early 20th century, many anthropologists taught this theory – that race was an entirely biological phenomenon and that this was core to a person's behavior and identity. This, coupled with a belief that linguistic, cultural, and social groups fundamentally existed along racial lines, formed the basis of what is now called scientific racism. After the Nazi eugenics program, along with the rise of anti-colonial movements, racial essentialism lost widespread popularity. New studies of culture and the fledgling field of population genetics undermined the scientific standing of racial essentialism, leading race anthropologists to revise their conclusions about the sources of phenotypic variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.
Historically, beliefs which posit that social identities such as ethnicity, nationality or gender determine a person's essential characteristics have in many cases been shown to have destructive or harmful results. It has been argued by some that essentialist thinking lies at the core of many simplistic, discriminatory or extremist ideologies. Psychological essentialism is also correlated with racial prejudice. In medical sciences, essentialism can lead to an over-emphasis on the role of identities—for example assuming that differences in hypertension in African-American populations are due to racial differences rather than social causes—leading to fallacious conclusions and potentially unequal treatment. Older social theories were often conceptually essentialist.
Strategic essentialism, a major concept in postcolonial theory, was introduced in the 1980s by the Indian literary critic and theorist Gayatri Chakravorty Spivak. It refers to a political tactic in which minority groups, nationalities, or ethnic groups mobilize on the basis of shared gendered, cultural, or political identity. While strong differences may exist between members of these groups, and among themselves they engage in continuous debates, it is sometimes advantageous for them to temporarily "essentialize" themselves, despite it being based on erroneous logic, and to bring forward their group identity in a simplified way to achieve certain goals, such as equal rights or antiglobalization.
In historiography
Essentialism in history as a field of study entails discerning and listing essential cultural characteristics of a particular nation or culture, in the belief that a people or culture can be understood in this way. Sometimes such essentialism leads to claims of a praiseworthy national or cultural identity, or to its opposite, the condemnation of a culture based on presumed essential characteristics. Herodotus, for example, claims that Egyptian culture is essentially feminized and possesses a "softness" which has made Egypt easy to conquer. To what extent Herodotus was an essentialist is a matter of debate; he is also credited with not essentializing the concept of the Athenian identity, or differences between the Greeks and the Persians that are the subject of his Histories.
Essentialism had been operative in colonialism, as well as in critiques of colonialism. Post-colonial theorists, such as Edward Said, insisted that essentialism was the "defining mode" of "Western" historiography and ethnography until the nineteenth century and even after, according to Touraj Atabaki, manifesting itself in the historiography of the Middle East and Central Asia as Eurocentrism, over-generalization, and reductionism. Into the 21st century, most historians, social scientists, and humanists reject methodologies associated with essentialism, although some have argued that certain varieties of essentialism may be useful or even necessary. Karl Popper splits the ambiguous term realism into essentialism and realism. He uses essentialism whenever he means the opposite of nominalism, and realism only as opposed to idealism. Popper himself is a realist as opposed to an idealist, but a methodological nominalist as opposed to an essentialist. For example, statements like "a puppy is a young dog" should be read from right to left as an answer to "What shall we call a young dog", never from left to right as an answer to "What is a puppy?"
In psychology
There is a difference between metaphysical essentialism and psychological essentialism, the latter referring not to an actual claim about the world but a claim about a way of representing entities in cognitions. Influential in this area is Susan Gelman, who has outlined many domains in which children and adults construe classes of entities, particularly biological entities, in essentialist terms—i.e., as if they had an immutable underlying essence which can be used to predict unobserved similarities between members of that class. This causal relationship is unidirectional; an observable feature of an entity does not define the underlying essence.
In developmental psychology
Essentialism has emerged as an important concept in psychology, particularly developmental psychology. In 1991, Kathryn Kremer and Susan Gelman studied the extent to which children from four–seven years old demonstrate essentialism. Children believed that underlying essences predicted observable behaviours. Children were able to describe living objects' behaviour as self-perpetuated and non-living objects' behavior as a result of an adult influencing the object. Understanding the underlying causal mechanism for behaviour suggests essentialist thinking. Younger children were unable to identify causal mechanisms of behaviour whereas older children were able to. This suggests that essentialism is rooted in cognitive development. It can be argued that there is a shift in the way that children represent entities, from not understanding the causal mechanism of the underlying essence to showing sufficient understanding.
There are four key criteria that constitute essentialist thinking. The first facet is the aforementioned individual causal mechanisms. The second is innate potential: the assumption that an object will fulfill its predetermined course of development. According to this criterion, essences predict developments in entities that will occur throughout its lifespan. The third is immutability. Despite altering the superficial appearance of an object it does not remove its essence. Observable changes in features of an entity are not salient enough to alter its essential characteristics. The fourth is inductive potential. This suggests that entities may share common features but are essentially different; however similar two beings may be, their characteristics will be at most analogous, differing most importantly in essences.

The implications of psychological essentialism are numerous. Prejudiced individuals have been found to endorse exceptionally essential ways of thinking, suggesting that essentialism may perpetuate exclusion among social groups. For example, essentialism of nationality has been linked to anti-immigration attitudes. In multiple studies in India and the United States, it was shown that in lay view a person's nationality is considerably fixed at birth, even if that person is adopted and raised by a family of another nationality at day one and never told about their origin. This may be due to an over-extension of an essential-biological mode of thinking stemming from cognitive development. Paul Bloom of Yale University has stated that "one of the most exciting ideas in cognitive science is the theory that people have a default assumption that things, people and events have invisible essences that make them what they are. Experimental psychologists have argued that essentialism underlies our understanding of the physical and social worlds, and developmental and cross-cultural psychologists have proposed that it is instinctive and universal. We are natural-born essentialists." Scholars suggest that the categorical nature of essentialist thinking predicts the use of stereotypes and can be targeted in the application of stereotype prevention.
See also
Determinism
Educational essentialism
Moral panic
Nature vs. nurture
Mereological essentialism
Medium essentialism
National essentialism (Japan)
Non-essentialism
Pleasure
Poststructuralism
Primordialism
Social constructionism
Scientific essentialism
Structuralism
Traditionalist School
Vitalism
In politics: Identity politics, Strategic essentialism, Ethnic nationalism
Brian David Ellis (New essentialism)
Greg McKeown (author) (Essentialism: The Disciplined Pursuit of Less)
Positivism

Positivism is a philosophical school that holds that all genuine knowledge is either true by definition or positive, meaning a posteriori facts derived by reason and logic from sensory experience. Other ways of knowing, such as intuition, introspection, or religious faith, are rejected or considered meaningless.
Although the positivist approach has been a recurrent theme in the history of western thought, modern positivism was first articulated in the early 19th century by Auguste Comte. His school of sociological positivism holds that society, like the physical world, operates according to general laws. After Comte, positivist schools arose in logic, psychology, economics, historiography, and other fields of thought. Generally, positivists attempted to introduce scientific methods to their respective fields. Since the turn of the 20th century, positivism, although still popular, has declined under criticism in parts of social sciences from antipositivists and critical theorists, among others, for its alleged scientism, reductionism, overgeneralizations, and methodological limitations.
Etymology
The English noun positivism in this meaning was imported in the 19th century from the French word positivisme, derived from positif in its philosophical sense of 'imposed on the mind by experience'. The corresponding adjective has been used in a similar sense to discuss law (positive law compared to natural law) since the time of Chaucer.

Background
Kieran Egan argues that positivism can be traced to the philosophy side of what Plato described as the quarrel between philosophy and poetry, later reformulated by Wilhelm Dilthey as a quarrel between the natural sciences and the human sciences (Saunders, T. J., Introduction to Ion, London: Penguin Books, 1987, p. 46).
In the early nineteenth century, massive advances in the natural sciences encouraged philosophers to apply scientific methods to other fields. Thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed that the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought.
Positivism in the social sciences
Comte's positivism
Auguste Comte (1798–1857) first described the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. These texts were followed in 1844 by A General View of Positivism (published in French 1848, English in 1865). The first three volumes of the Course dealt chiefly with the physical sciences already in existence (mathematics, astronomy, physics, chemistry, biology), whereas the latter two emphasized the inevitable coming of social science. Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the first philosopher of science in the modern sense of the term. For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. His View of Positivism therefore set out to define the empirical goals of sociological method.
Comte offered an account of social evolution, proposing that society undergoes three phases in its quest for the truth according to a general "law of three stages". Comte intended to develop a secular-scientific ideology in the wake of European secularisation.
Comte's stages were (1) the theological, (2) the metaphysical, and (3) the positive. The theological phase of man was based on whole-hearted belief in all things with reference to God. God, Comte says, had reigned supreme over human existence pre-Enlightenment. Humanity's place in society was governed by its association with the divine presences and with the church. The theological phase deals with humankind's accepting the doctrines of the church (or place of worship) rather than relying on its rational powers to explore basic questions about existence. It dealt with the restrictions put in place by the religious organization at the time and the total acceptance of any "fact" adduced for society to believe.
Comte describes the metaphysical phase of humanity as the period from the Enlightenment, a time steeped in logical rationalism, to the time right after the French Revolution. This second phase states that the universal rights of humanity are most important. The central idea is that humanity is invested with certain rights that must be respected. In this phase, democracies and dictators rose and fell in attempts to maintain the innate rights of humanity.
The final stage of the trilogy of Comte's universal law is the scientific, or positive, stage. The central idea of this phase is that individual rights are more important than the rule of any one person. Comte stated that the idea of humanity's ability to govern itself makes this stage inherently different from the rest. There is no higher power governing the masses and the intrigue of any one person can achieve anything based on that individual's free will. The third principle is most important in the positive stage. Comte calls these three phases the universal rule in relation to society and its development. Neither the second nor the third phase can be reached without the completion and understanding of the preceding stage. All stages must be completed in progress.
Comte believed that the appreciation of the past and the ability to build on it towards the future was key in transitioning from the theological and metaphysical phases. The idea of progress was central to Comte's new science, sociology. Sociology would "lead to the historical consideration of every science" because "the history of one science, including pure political history, would make no sense unless it was attached to the study of the general progress of all of humanity". As Comte would say: "from science comes prediction; from prediction comes action". It is a philosophy of human intellectual development that culminated in science. The irony of this series of phases is that though Comte attempted to prove that human development has to go through these three stages, it seems that the positivist stage is far from becoming a realization. This is due to two truths: The positivist phase requires having a complete understanding of the universe and world around us and requires that society should never know if it is in this positivist phase. Anthony Giddens argues that since humanity constantly uses science to discover and research new things, humanity never progresses beyond the second metaphysical phase.
Comte's fame today owes in part to Emile Littré, who founded The Positivist Review in 1867. As an approach to the philosophy of history, positivism was appropriated by historians such as Hippolyte Taine. Many of Comte's writings were translated into English by the Whig writer, Harriet Martineau, regarded by some as the first female sociologist. Debates continue to rage as to how much Comte appropriated from the work of his mentor, Saint-Simon. He was nevertheless influential: Brazilian thinkers turned to Comte's ideas about training a scientific elite in order to flourish in the industrialization process. Brazil's national motto, Ordem e Progresso ("Order and Progress") was taken from the positivism motto, "Love as principle, order as the basis, progress as the goal", which was also influential in Poland.
In later life, Comte developed a 'religion of humanity' for positivist societies in order to fulfil the cohesive function once held by traditional worship. In 1849, he proposed a calendar reform called the 'positivist calendar'. For close associate John Stuart Mill, it was possible to distinguish between a "good Comte" (the author of the Course in Positive Philosophy) and a "bad Comte" (the author of the secular-religious system). The system was unsuccessful but, combined with the publication of Darwin's On the Origin of Species, influenced the proliferation of various secular humanist organizations in the 19th century, especially through the work of secularists such as George Holyoake and Richard Congreve. Although Comte's English followers, including George Eliot and Harriet Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of humanity and his injunction to "vivre pour autrui" ("live for others", from which comes the word "altruism").
The early sociology of Herbert Spencer came about broadly as a reaction to Comte; writing after various developments in evolutionary biology, Spencer attempted (in vain) to reformulate the discipline in what we might now describe as socially Darwinistic terms.
Early followers of Comte
Within a few years, other scientific and philosophical thinkers began creating their own definitions for positivism. These included Émile Zola, Emile Hennequin, Wilhelm Scherer, and Dimitri Pisarev. Fabien Magnin was the first working-class adherent to Comte's ideas, and became the leader of a movement known as "Proletarian Positivism". Comte appointed Magnin as his successor as president of the Positive Society in the event of Comte's death. Magnin filled this role from 1857 to 1880, when he resigned. Magnin was in touch with the English positivists Richard Congreve and Edward Spencer Beesly. He established the Cercle des prolétaires positivistes in 1863 which was affiliated to the First International. Eugène Sémérie was a psychiatrist who was also involved in the Positivist movement, setting up a positivist club in Paris after the foundation of the French Third Republic in 1870. He wrote: "Positivism is not only a philosophical doctrine, it is also a political party which claims to reconcile order—the necessary basis for all social activity—with Progress, which is its goal."
Durkheim's positivism
The modern academic discipline of sociology began with the work of Émile Durkheim (1858–1917). While Durkheim rejected much of the details of Comte's philosophy, he retained and refined its method, maintaining that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisting that they may retain the same objectivity, rationalism, and approach to causality. Durkheim set up the first European department of sociology at the University of Bordeaux in 1895, publishing his Rules of the Sociological Method (1895). In this text he argued: "[o]ur main goal is to extend scientific rationalism to human conduct... What has been called our positivism is but a consequence of this rationalism."
Durkheim's seminal monograph, Suicide (1897), a case study of suicide rates amongst Catholic and Protestant populations, distinguished sociological analysis from psychology or philosophy. By carefully examining suicide statistics in different police districts, he attempted to demonstrate that Catholic communities have a lower suicide rate than Protestants, something he attributed to social (as opposed to individual or psychological) causes. He developed the notion of objective sui generis "social facts" to delineate a unique empirical object for the science of sociology to study. Through such studies, he posited, sociology would be able to determine whether a given society is 'healthy' or 'pathological', and seek social reform to negate organic breakdown or "social anomie". Durkheim described sociology as the "science of institutions, their genesis and their functioning".
David Ashley and David M. Orenstein have argued, in a textbook published by Pearson Education, that accounts of Durkheim's positivism are possibly exaggerated and oversimplified; Comte was the only major sociological thinker to postulate that the social realm may be subject to scientific analysis in exactly the same way as natural science, whereas Durkheim saw a far greater need for a distinctly sociological scientific methodology. His lifework was fundamental in the establishment of practical social research as we know it today—techniques which continue beyond sociology and form the methodological basis of other social sciences, such as political science, as well as of market research and other fields.
Historical positivism
In historiography, historical or documentary positivism is the belief that historians should pursue the objective truth of the past by allowing historical sources to "speak for themselves", without additional interpretation. In the words of the French historian Fustel de Coulanges, as a positivist, "It is not I who am speaking, but history itself". The heavy emphasis placed by historical positivists on documentary sources led to the development of methods of source criticism, which seek to expunge bias and uncover original sources in their pristine state.
The origin of the historical positivist school is particularly associated with the 19th-century German historian Leopold von Ranke, who argued that the historian should seek to describe historical truth "wie es eigentlich gewesen ist" ("as it actually was")—though subsequent historians of the concept, such as Georg Iggers, have argued that its development owed more to Ranke's followers than Ranke himself.
Historical positivism was critiqued in the 20th century by historians and philosophers of history from various schools of thought, including Ernst Kantorowicz in Weimar Germany—who argued that "positivism ... faces the danger of becoming Romantic when it maintains that it is possible to find the Blue Flower of truth without preconceptions"—and Raymond Aron and Michel Foucault in postwar France, who both posited that interpretations are always ultimately multiple and there is no final objective truth to recover. In his posthumously published 1946 The Idea of History, the English historian R. G. Collingwood criticized historical positivism for conflating scientific facts with historical facts, which are always inferred and cannot be confirmed by repetition, and argued that its focus on the "collection of facts" had given historians "unprecedented mastery over small-scale problems", but "unprecedented weakness in dealing with large-scale problems".
Historicist arguments against positivist approaches in historiography include that history differs from sciences like physics and ethology in subject matter and method; that much of what history studies is nonquantifiable, and therefore to quantify is to lose in precision; and that experimental methods and mathematical models do not generally apply to history, so that it is not possible to formulate general (quasi-absolute) laws in history.
Other subfields
In psychology the positivist movement was influential in the development of operationalism. In particular, Percy Bridgman's 1927 philosophy of science book The Logic of Modern Physics, originally intended for physicists, coined the term operational definition, which went on to dominate psychological method for the whole century.
In economics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in a de facto fashion: the majority of economists do not explicitly concern themselves with matters of epistemology. Economic thinker Friedrich Hayek (see "Law, Legislation and Liberty") rejected positivism in the social sciences as hopelessly limited in comparison to evolved and divided knowledge. For example, much (positivist) legislation falls short in contrast to pre-literate or incompletely defined common or evolved law.
In jurisprudence, "legal positivism" essentially refers to the rejection of natural law; thus its common meaning with philosophical positivism is somewhat attenuated and in recent generations generally emphasizes the authority of human political structures as opposed to a "scientific" view of law.
Logical positivism
Logical positivism (later and more accurately called logical empiricism) is a school of philosophy that combines empiricism, the idea that observational evidence is indispensable for knowledge of the world, with a version of rationalism, the idea that our knowledge includes a component that is not derived from observation.
Logical positivism grew from the discussions of a group called the "First Vienna Circle", which gathered at the Café Central before World War I. After the war Hans Hahn, a member of that early group, helped bring Moritz Schlick to Vienna. Schlick's Vienna Circle, along with Hans Reichenbach's Berlin Circle, propagated the new doctrines more widely in the 1920s and early 1930s.
It was Otto Neurath's advocacy that made the movement self-conscious and more widely known. A 1929 pamphlet written by Neurath, Hahn, and Rudolf Carnap summarized the doctrines of the Vienna Circle at that time. These included the opposition to all metaphysics, especially ontology and synthetic a priori propositions; the rejection of metaphysics not as wrong but as meaningless (i.e., not empirically verifiable); a criterion of meaning based on Ludwig Wittgenstein's early work (which he himself later set out to refute); the idea that all knowledge should be codifiable in a single standard language of science; and above all the project of "rational reconstruction," in which ordinary-language concepts were gradually to be replaced by more precise equivalents in that standard language. However, the project is widely considered to have failed.
After moving to the United States, Carnap proposed a replacement for the earlier doctrines in his Logical Syntax of Language. This change of direction, and the somewhat differing beliefs of Reichenbach and others, led to a consensus that the English name for the shared doctrinal platform, in its American exile from the late 1930s, should be "logical empiricism." While the logical positivist movement is now considered dead, it has continued to influence philosophical development.
Criticism
Historically, positivism has been criticized for its reductionism, i.e., for contending that all "processes are reducible to physiological, physical or chemical events," "social processes are reducible to relationships between and actions of individuals," and that "biological organisms are reducible to physical systems."
The consideration that laws in physics may not be absolute but relative, and, if so, that this might be even more true of the social sciences, was stated, in different terms, by G. B. Vico in 1725 (Giambattista Vico, Principi di scienza nuova, in Opere, ed. Fausto Nicolini, Milan: R. Ricciardi, 1953, pp. 365–905). Vico, in contrast to the positivist movement, asserted the superiority of the science of the human mind (the humanities, in other words), on the grounds that natural sciences tell us nothing about the inward aspects of things.
Wilhelm Dilthey fought strenuously against the assumption that only explanations derived from science are valid. He reprised Vico's argument that scientific explanations do not reach the inner nature of phenomena and it is humanistic knowledge that gives us insight into thoughts, feelings and desires. Dilthey was in part influenced by the historism of Leopold von Ranke (1795–1886).
The contesting views over positivism are reflected both in older debates (see the Positivism dispute) and current ones over the proper role of science in the public sphere. Public sociology—especially as described by Michael Burawoy—argues that sociologists should use empirical evidence to display the problems of society so they might be changed.
Antipositivism
At the turn of the 20th century, the first wave of German sociologists formally introduced methodological antipositivism, proposing that research should concentrate on human cultural norms, values, symbols, and social processes viewed from a subjective perspective. Max Weber, one such thinker, argued that while sociology may be loosely described as a 'science' because it is able to identify causal relationships (especially among ideal types), sociologists should seek relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists. Weber regarded sociology as the study of social action, using critical analysis and verstehen techniques. The sociologists Georg Simmel, Ferdinand Tönnies, George Herbert Mead, and Charles Cooley were also influential in the development of sociological antipositivism, whilst neo-Kantian philosophy, hermeneutics, and phenomenology facilitated the movement in general.
Critical rationalism and postpositivism
In the mid-twentieth century, several important philosophers and philosophers of science began to critique the foundations of logical positivism. In his 1934 work The Logic of Scientific Discovery, Karl Popper argued against verificationism. A statement such as "all swans are white" cannot actually be empirically verified, because it is impossible to know empirically whether all swans have been observed. Instead, Popper argued that at best an observation can falsify a statement (for example, observing a black swan would prove that not all swans are white). Popper also held that scientific theories talk about how the world really is (not about phenomena or observations experienced by scientists), and critiqued the Vienna Circle in his Conjectures and Refutations (Karl Popper, The Logic of Scientific Discovery, 1934; 1st English ed. 1959). W. V. O. Quine and Pierre Duhem went even further. The Duhem–Quine thesis states that it is impossible to experimentally test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses); thus, unambiguous scientific falsifications are also impossible. Thomas Kuhn, in his 1962 book The Structure of Scientific Revolutions, put forward his theory of paradigm shifts. He argued that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
Together, these ideas led to the development of critical rationalism and postpositivism. Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. Postpositivism of this type is described in social science guides to research methods. Postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
In the early 1960s, the positivism dispute arose between the critical theorists (see below) and the critical rationalists over the correct solution to the value judgment dispute (Werturteilsstreit). While both sides accepted that sociology cannot avoid a value judgement that inevitably influences subsequent conclusions, the critical theorists accused the critical rationalists of being positivists; specifically, of asserting that empirical questions can be severed from their metaphysical heritage and refusing to ask questions that cannot be answered with scientific methods. This contributed to what Karl Popper termed the "Popper Legend", a misconception among critics and admirers of Popper that he was, or identified himself as, a positivist.
Critical theory
Although Karl Marx's theory of historical materialism drew upon positivism, the Marxist tradition would also go on to influence the development of antipositivist critical theory. Critical theorist Jürgen Habermas critiqued pure instrumental rationality (in its relation to the cultural "rationalisation" of the modern West) as a form of scientism, or science "as ideology". He argued that positivism may be espoused by "technocrats" who believe in the inevitability of social progress through science and technology (Outhwaite, William. 1988. Habermas: Key Contemporary Thinkers. Polity Press; 2nd ed. 2009, p. 68). New movements, such as critical realism, have emerged in order to reconcile postpositivist aims with various so-called 'postmodern' perspectives on the social acquisition of knowledge.
Max Horkheimer criticized the classic formulation of positivism on two grounds. First, he claimed that it falsely represented human social action: positivism systematically failed to appreciate the extent to which the so-called social facts it yielded did not exist 'out there', in the objective world, but were themselves a product of socially and historically mediated human consciousness. Positivism ignored the role of the 'observer' in the constitution of social reality and thereby failed to consider the historical and social conditions affecting the representation of social ideas. Positivism falsely represented the object of study by reifying social reality as existing objectively and independently of the labour that actually produced those conditions. Secondly, he argued, the representation of social reality produced by positivism was inherently and artificially conservative, helping to support the status quo rather than challenging it. This character may also explain the popularity of positivism in certain political circles. Horkheimer argued, in contrast, that critical theory possessed a reflexive element lacking in the positivistic traditional theory.
Some scholars today hold the beliefs critiqued in Horkheimer's work, but since the time of his writing critiques of positivism, especially from philosophy of science, have led to the development of postpositivism. This philosophy greatly relaxes the epistemological commitments of logical positivism and no longer claims a separation between the knower and the known. Rather than dismissing the scientific project outright, postpositivists seek to transform and amend it, though the exact extent of their affinity for science varies vastly. For example, some postpositivists accept the critique that observation is always value-laden, but argue that the best values to adopt for sociological observation are those of science: skepticism, rigor, and modesty. Just as some critical theorists see their position as a moral commitment to egalitarian values, these postpositivists see their methods as driven by a moral commitment to these scientific values. Such scholars may see themselves as either positivists or antipositivists.
Other criticisms
During the later twentieth century, positivism began to fall out of favor with scientists as well. Later in his career, the German theoretical physicist Werner Heisenberg, Nobel laureate for his pioneering work in quantum mechanics, distanced himself from positivism: "The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can anyone conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies."
In the early 1970s, urbanists of the quantitative school like David Harvey started to question the positivist approach itself, saying that the arsenal of scientific theories and methods developed so far in their camp were "incapable of saying anything of depth and profundity" on the real problems of contemporary cities.
According to the Catholic Encyclopedia, positivism has also come under fire on religious and philosophical grounds; its critics hold that truth begins in sense experience but does not end there. Positivism fails to prove that there are not abstract ideas, laws, and principles beyond particular observable facts and relationships and necessary principles, or that we cannot know them. Nor does it prove that material and corporeal things constitute the whole order of existing beings, and that our knowledge is limited to them. According to positivism, our abstract concepts or general ideas are mere collective representations of the experimental order: for example, the idea of "man" is a kind of blended image of all the men observed in our experience. This runs contrary to a Platonic or Christian ideal, in which an idea can be abstracted from any concrete determination and applied identically to an indefinite number of objects of the same class. From this perspective, Platonism is more precise: an explicitly defined idea always remains clear, whereas an idea defined as a sum of collective images is more or less confused, and becomes more so as the collection represented increases.
Other new movements, such as critical realism, have emerged in opposition to positivism. Critical realism seeks to reconcile the overarching aims of social science with postmodern critiques. Experientialism, which arose with second-generation cognitive science, asserts that knowledge begins and ends with experience itself (Lakoff, G., & Johnson, M. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books). In other words, it rejects the positivist assertion that a portion of human knowledge is a priori.
Positivism today
Echoes of the "positivist" and "antipositivist" debate persist today, though this conflict is hard to define. Authors writing in different epistemological perspectives do not phrase their disagreements in the same terms and rarely actually speak directly to each other. To complicate the issues further, few practising scholars explicitly state their epistemological commitments, and their epistemological position thus has to be guessed from other sources such as choice of methodology or theory. However, no perfect correspondence between these categories exists, and many scholars critiqued as "positivists" are actually postpositivists. One scholar has described this debate in terms of the social construction of the "other", with each side defining the other by what it is not rather than what it is, and then proceeding to attribute far greater homogeneity to their opponents than actually exists. Thus, it is better to understand this not as a debate but as two different arguments: the "antipositivist" articulation of a social meta-theory which includes a philosophical critique of scientism, and "positivist" development of a scientific research methodology for sociology with accompanying critiques of the reliability and validity of work that they see as violating such standards. Strategic positivism aims to bridge these two arguments.
Social sciences
While most social scientists today are not explicit about their epistemological commitments, articles in top American sociology and political science journals generally follow a positivist logic of argument. It can be thus argued that "natural science and social science [research articles] can therefore be regarded with a good deal of confidence as members of the same genre".
In contemporary social science, strong accounts of positivism have long since fallen out of favour. Practitioners of positivism today acknowledge in far greater detail observer bias and structural limitations. Modern positivists generally eschew metaphysical concerns in favour of methodological debates concerning clarity, replicability, reliability and validity. This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.
In the original Comtean usage, the term "positivism" roughly meant the use of scientific methods to uncover the laws according to which both physical and human events occur, while "sociology" was the overarching science that would synthesize all such knowledge for the betterment of society. "Positivism is a way of understanding based on science"; people do not rely on faith in God but instead on the science behind humanity. "Antipositivism" formally dates back to the start of the twentieth century, and is based on the belief that natural and human sciences are ontologically and epistemologically distinct. Neither of these terms is used any longer in this sense. There are no fewer than twelve distinct epistemologies that are referred to as positivism. Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a term of abuse by being mistakenly linked with a theoretical empiricism. The extent of antipositivist criticism has also become broad, with many philosophies broadly rejecting the scientifically based social epistemology and others only seeking to amend it to reflect 20th-century developments in the philosophy of science. However, positivism (understood as the use of scientific methods for studying society) remains the dominant approach to both research and theory construction in contemporary sociology, especially in the United States.
The majority of articles published in leading American sociology and political science journals today are positivist (at least to the extent of being quantitative rather than qualitative) (Brett, Paul. 1994. "A Genre Analysis of the Results Section of Sociology Articles." English for Specific Purposes 13 (1): 47–59). This popularity may be because research utilizing positivist quantitative methodologies holds greater prestige in the social sciences than qualitative work; quantitative work is easier to justify, as data can be manipulated to answer any question. Such research is generally perceived as being more scientific and more trustworthy, and thus has a greater impact on policy and public opinion (though such judgments are frequently contested by scholars doing non-positivist work).
Natural sciences
The key features of positivism as of the 1950s, as defined in the "received view", are:
A focus on science as a product, a linguistic or numerical set of statements;
A concern with axiomatization, that is, with demonstrating the logical structure and coherence of these statements;
An insistence on at least some of these statements being testable; that is, amenable to being verified, confirmed, or shown to be false by the empirical observation of reality. Statements that would, by their nature, be regarded as untestable included the teleological; thus positivism rejects much of classical metaphysics.
The belief that science is markedly cumulative;
The belief that science is predominantly transcultural;
The belief that science rests on specific results that are dissociated from the personality and social position of the investigator;
The belief that science contains theories or research traditions that are largely commensurable;
The belief that science sometimes incorporates new ideas that are discontinuous from old ones;
The belief that science involves the idea of the unity of science, that there is, underlying the various scientific disciplines, basically one science about one real world; and
The belief that science is nature and nature is science; and out of this duality, all theories and postulates are created, interpreted, evolve, and are applied.
Stephen Hawking was a recent high-profile advocate of positivism in the physical sciences. In The Universe in a Nutshell (p. 31) he wrote:
Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward by Karl Popper and others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. ... If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes.
See also
Cliodynamics
Científico
Charvaka
Determinism
Gödel's incompleteness theorems
London Positivist Society
Nature versus nurture
Physics envy
Scientific politics
Sociological naturalism
The New Paul and Virginia
Vladimir Solovyov
Notes
References
Armenteros, Carolina. 2017. "The Counterrevolutionary Comte: Theorist of the Two Powers and Enthusiastic Medievalist." In The Anthem Companion to Auguste Comte, edited by Andrew Wernick, 91–116. London: Anthem.
Annan, Noel. 1959. The Curious Strength of Positivism in English Political Thought. London: Oxford University Press.
Ardao, Arturo. 1963. "Assimilation and Transformation of Positivism in Latin America." Journal of the History of Ideas 24 (4):515–22.
Bevir, Mark. 2002. "Sidney Webb: Utilitarianism, Positivism, and Social Democracy." The Journal of Modern History 74 (2):217–252.
Bevir, Mark. 2011. The Making of British Socialism. Princeton, NJ: Princeton University Press.
Bourdeau, Michel. 2006. Les trois états: Science, théologie et métaphysique chez Auguste Comte. Paris: Éditions du Cerf.
Bourdeau, Michel, Mary Pickering, and Warren Schmaus, eds. 2018. Love, Order and Progress. Pittsburgh, PA: University of Pittsburgh Press.
Bryant, Christopher G. A. 1985. Positivism in Social Theory and Research. New York: St. Martin's Press.
Claeys, Gregory. 2010. Imperial Sceptics. Cambridge: Cambridge University Press.
Claeys, Gregory. 2018. "Professor Beesly, Positivism and the International: the Patriotism Issue." In "Arise Ye Wretched of the Earth": The First International in a Global Perspective, edited by Fabrice Bensimon, Quinton Deluermoz and Jeanne Moisand. Leiden: Brill.
De Boni, Carlo. 2013. Storia di un'utopia. La religione dell'Umanità di Comte e la sua circolazione nel mondo. Milano: Mimesis.
Dixon, Thomas. 2008. The Invention of Altruism. Oxford: Oxford University Press.
Feichtinger, Johannes, Franz L. Fillafer, and Jan Surman, eds. 2018. The Worlds of Positivism. London: Palgrave Macmillan.
Forbes, Geraldine Handcock. 2003. "The English Positivists and India." In Essays on Indian Renaissance, edited by Raj Kumar, 151–63. Discovery: New Delhi.
Gane, Mike. 2006. Auguste Comte. London: Routledge.
Giddens, Anthony. Positivism and Sociology. Heinemann. London. 1974.
Gilson, Gregory D. and Irving W. Levinson, eds. Latin American Positivism: New Historical and Philosophic Essays (Lexington Books; 2012) 197 pages; Essays on positivism in the intellectual and political life of Brazil, Colombia, and Mexico.
Harp, Gillis J. 1995. Positivist Republic: Auguste Comte and the Reconstruction of American Liberalism, 1865–1920. University Park, PA: Pennsylvania State University Press.
Harrison, Royden. 1965. Before the Socialists. London: Routledge.
Hoecker-Drysdale, Susan. 2001. "Harriet Martineau and the Positivism of Auguste Comte." In Harriet Martineau: Theoretical and Methodological Perspectives, edited by Michael R. Hill and Susan Hoecker-Drysdale, 169–90. London: Routledge.
Kremer-Marietti, Angèle. L'Anthropologie positiviste d'Auguste Comte, Librairie Honoré Champion, Paris, 1980.
Kremer-Marietti, Angèle. Le positivisme, Collection "Que sais-je?", Paris, PUF, 1982.
LeGouis, Catherine. Positivism and Imagination: Scientism and Its Limits in Emile Hennequin, Wilhelm Scherer and Dmitril Pisarev. Bucknell University Press. London: 1997.
Lenzer, Gertrud, ed. 2009. The Essential Writings of Auguste Comte and Positivism. London: Transaction.
"Positivism." Marxists Internet Archive. Web. 23 Feb. 2012.
McGee, John Edwin. 1931. A Crusade for Humanity. London: Watts.
Mill, John Stuart. Auguste Comte and Positivism.
Mises, Richard von. Positivism: A Study In Human Understanding. Harvard University Press. Cambridge, Massachusetts: 1951.
Petit, Annie. Le Système d'Auguste Comte. De la science à la religion par la philosophie. Vrin, Paris (2016).
Pickering, Mary. Auguste Comte: An Intellectual Biography. Cambridge University Press. Cambridge, England; 1993.
Quin, Malcolm. 1924. Memoirs of a Positivist. London: George Allen & Unwin.
Rorty, Richard. 1982. Consequences of Pragmatism. Minneapolis: University of Minnesota Press.
Scharff, Robert C. 1995. Comte After Positivism. Cambridge: Cambridge University Press.
Schunk, Dale H. 2008. Learning Theories: An Educational Perspective. 5th ed. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.
Simon, W. M. 1963. European Positivism in the Nineteenth Century. Ithaca, NY: Cornell University Press.
Sutton, Michael. 1982. Nationalism, Positivism and Catholicism. Cambridge: Cambridge University Press.
Trindade, Helgio. 2003. "La république positiviste chez Comte." In Auguste Comte: Trajectoires positivistes 1798–1998, edited by Annie Petit, 363–400. Paris: L'Harmattan.
Turner, Mark. 2000. "Defining Discourses: The Westminster Review, Fortnightly Review, and Comte's Positivism." Victorian Periodicals Review 33 (3): 273–82.
Wernick, Andrew. 2001. Auguste Comte and the Religion of Humanity. Cambridge: Cambridge University Press.
Whatmore, Richard. 2005. "Comte, Auguste (1798–1857)." In Encyclopaedia of Nineteenth-Century Thought, edited by Gregory Claeys, 123–8. London: Routledge.
Whetsell, Travis, and Patricia M. Shields. "The Dynamics of Positivism in the Study of Public Administration: A Brief Intellectual History and Reappraisal." Administration & Society.
Wils, Kaat. 2005. De omweg van de wetenschap: het positivisme en de Belgische en Nederlandse intellectuele cultuur, 1845–1914. Amsterdam: Amsterdam University Press.
Wilson, Matthew. 2018. "British Comtism and Modernist Design." Modern Intellectual History x (xx):1–32.
Wilson, Matthew. 2018. Moralising Space: the Utopian Urbanism of the British Positivists, 1855–1920. London: Routledge.
Wilson, Matthew. 2020. "Rendering sociology: on the utopian positivism of Harriet Martineau and the ‘Mumbo Jumbo club’." Journal of Interdisciplinary History of Ideas 8 (16): 1–42.
Woll, Allen L. 1976. "Positivism and History in Nineteenth-Century Chile." Journal of the History of Ideas 37 (3):493–506.
Woodward, Ralph Lee, ed. 1971. Positivism in Latin America, 1850–1900. Lexington: Heath.
Wright, T. R. 1986. The Religion of Humanity. Cambridge: Cambridge University Press.
Wright, T. R. 1981. "George Eliot and Positivism: A Reassessment." The Modern Language Review 76 (2):257–72.
Wunderlich, Roger. 1992. Low Living and High Thinking at Modern Times, New York. Syracuse, NY: Syracuse University Press.
Zea, Leopoldo. 1974. Positivism in Mexico. Austin: University of Texas Press.
External links
The full text of the 1911 Encyclopædia Britannica article "Positivism" at Wikisource
Paraná, Brazil
Porto Alegre, Brazil
Rio de Janeiro, Brazil
Poznań, Poland
Positivists Worldwide
Maison d'Auguste Comte, France
Philosophy of science
Philosophy of social science
Epistemological theories
20th century in philosophy
19th century in philosophy
Philosophy of law
Sociological theories | 0.787399 | 0.998822 | 0.786471 |
Pluralism (philosophy) | Pluralism is a term used in philosophy, referring to a worldview of multiplicity, often used in opposition to monism (the view that all is one) or dualism (the view that all is two). The term has different meanings in metaphysics, ontology, epistemology and logic. In metaphysics, it is the view that there are in fact many different substances in nature that constitute reality. In ontology, pluralism refers to different ways, kinds, or modes of being. For example, a topic in ontological pluralism is the comparison of the modes of existence of things like 'humans' and 'cars' with things like 'numbers' and some other concepts as they are used in science.
In epistemology, pluralism is the position that there is not one consistent means of approaching truths about the world, but rather many. Often this is associated with pragmatism, or with conceptual, contextual, or cultural relativism. In the philosophy of science it may refer to the acceptance of co-existing scientific paradigms which, though accurately describing their relevant domains, are nonetheless incommensurable. In logic, pluralism is the relatively novel view that there is no one correct logic, or alternatively, that there is more than one correct logic: one might, for example, use classical logic in most cases but adopt a paraconsistent logic to deal with certain paradoxes.
Metaphysical pluralism
Metaphysical pluralism in philosophy is the multiplicity of metaphysical models of the structure and content of reality, both as it appears and as logic dictates that it might be, as is exhibited by the four related models in Plato's Republic and as developed in the contrast between phenomenalism and physicalism. Pluralism is in contrast to the concept of monism in metaphysics, while dualism is a limited form, a pluralism of exactly two models, structures, elements, or concepts. A distinction is made between the metaphysical identification of realms of reality and the more restricted sub-fields of ontological pluralism (that examines what exists in each of these realms) and epistemological pluralism (that deals with the methodology for establishing knowledge about these realms).
Ancient pluralism
In ancient Greece, Empedocles wrote that the elements were fire, air, water and earth, although he used the word "root" rather than "element" (στοιχεῖον; stoicheion), which appeared later in Plato. From the association (φιλία; philia) and separation (νεῖκος; neikos) of these indestructible and unchangeable root elements, all things came to be in a fullness (πλήρωμα; pleroma) of ratio (λόγος; logos) and proportion (ἀνάλογος; analogos).
Similar to Empedocles, Anaxagoras was another Classical Greek philosopher with links to pluralism. His metaphysical system is centered around a mechanically necessitated nous which governs, combines and diffuses the various "roots" of reality (known as homoiomeroi). Unlike Empedocles' four "root elements", and similar to Democritus' multitude of atoms (yet not physical in nature), these homoiomeroi are used by Anaxagoras to explain the multiplicity in reality and becoming. This pluralist theory of being influenced later ideas such as Gottfried Wilhelm Leibniz's theory of monads and Julius Bahnsen's idea of will henades. The notion of a governing nous would also be used by Socrates and Plato, but they would assign it a more active and rational role in their philosophical systems.
Aristotle incorporated these elements, but his substance pluralism was not material in essence. His hylomorphic theory allowed him to maintain a reduced set of basic material elements as per the Milesians, while answering for the ever-changing flux of Heraclitus and the unchanging unity of Parmenides. In his Physics, due to the continuum of Zeno's paradoxes, as well as both logical and empirical considerations for natural science, he presented numerous arguments against the atomism of Leucippus and Democritus, who posited a basic duality of void and atoms. The atoms were an infinite variety of irreducibles, of all shapes and sizes, which randomly collide and mechanically hook together in the void, thus providing a reductive account of changeable figure, order and position as aggregates of the unchangeable atoms.
Ontological pluralism
The topic of ontological pluralism discusses different ways, kinds, or modes of being. Recent attention in ontological pluralism is due to the work of Kris McDaniel, who defends ontological pluralism in a number of papers. The name for the doctrine is due to Jason Turner, who, following McDaniel, suggests that "In contemporary guise, it is the doctrine that a logically perspicuous description of reality will use multiple quantifiers which cannot be thought of as ranging over a single domain." "There are numbers, fictional characters, impossible things, and holes. But, we don't think these things all exist in the same sense as cars and human beings."
It is common to refer to a film, novel or otherwise fictitious or virtual narrative as not being 'real'. Thus, the characters in the film or novel are not real, where the 'real world' is the everyday world in which we live. However, some authors may argue that fiction informs our concept of reality, and so has some kind of reality.
One reading of Ludwig Wittgenstein's notion of language-games argues that there is no overarching, single, fundamental ontology, but only a patchwork of overlapping interconnected ontologies ineluctably leading from one to another. For example, Wittgenstein discusses 'number' both as technical vocabulary and in more general usage.
Wittgenstein suggests that it is not possible to identify a single concept underlying all versions of 'number', but that there are many interconnected meanings that transition one to another; vocabulary need not be restricted to technical meanings to be useful, and indeed technical meanings are 'exact' only within some prescribed context.
Eklund has argued that Wittgenstein's conception includes as a special case the technically constructed, largely autonomous, forms of language or linguistic frameworks of Carnap and Carnapian ontological pluralism. He places Carnap's ontological pluralism in the context of other philosophers, such as Eli Hirsch and Hilary Putnam.
Epistemological pluralism
Epistemological pluralism is a term used in philosophy and in other fields of study to refer to different ways of knowing things, different epistemological methodologies for attaining a full description of a particular field. In the philosophy of science epistemological pluralism arose in opposition to reductionism to express the contrary view that at least some natural phenomena cannot be fully explained by a single theory or fully investigated using a single approach.
Logical pluralism
Logical pluralism can be defined in a number of ways: the position that there is more than one correct account of logical consequence (or no single, 'correct' account at all), that there is more than one correct set of logical constants, or even that the 'correct' logic depends on the relevant logical questions under consideration (a sort of logical instrumentalism). Pluralism about logical consequence says that because different logical systems have different logical consequence relations, there is more than one correct logic. For example, classical logic holds that the argument from explosion is a valid argument, but in Graham Priest's paraconsistent logic—LP, the 'Logic of Paradox'—it is an invalid argument. However, logical monists may respond that a plurality of logical theories does not mean that no single one of the theories is the correct one. After all, there are and have been a multitude of theories in physics, but that has not been taken to mean that all of them are correct.
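The classical/LP contrast can be checked mechanically. The sketch below is an illustration, not from the source: it assumes Priest's standard three-valued tables for LP (negation reverses the order of the values, conjunction is minimum, and both T and B are designated) and tests the explosion argument under both semantics:

```python
from itertools import product

# Priest's three-valued LP ("Logic of Paradox"): truth values ordered
# F < B < T, where B ("both") is a truth-value glut. Classical logic
# uses only F and T, with T alone designated.
F, B, T = 0, 1, 2

def neg(v):
    return 2 - v

def valid(premises, conclusion, designated, values):
    """Valid iff every valuation making all premises designated
    also makes the conclusion designated."""
    for a, c in product(values, repeat=2):
        if all(p(a, c) in designated for p in premises):
            if conclusion(a, c) not in designated:
                return False
    return True

# Explosion: from A and not-A, infer an arbitrary C.
premises = [lambda a, c: a, lambda a, c: neg(a)]
conclusion = lambda a, c: c

classical = valid(premises, conclusion, designated={T}, values=[F, T])
lp = valid(premises, conclusion, designated={T, B}, values=[F, B, T])
print(classical, lp)  # prints: True False
```

Classically no valuation makes both A and not-A true, so explosion holds vacuously; in LP the valuation A = B, C = F makes both premises designated while the conclusion is not, so explosion fails.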
Pluralists of the instrumentalist sort hold that, if a logic can be correct at all, its correctness rests on its ability to answer the logical questions under consideration. If one wants to understand vague propositions, one may need a many-valued logic. Or if one wants to know what the truth-value of the Liar Paradox is, a dialetheic paraconsistent logic may be required. Rudolf Carnap held a version of logical pluralism.
See also
Anekantavada
Legal pluralism
Nelson Goodman
Panarchism
Pantheism
Pluralism in political philosophy
Pluralism in political theory
Postmodernism
Quantifier variance
Religious pluralism
Value pluralism
Notes
Further reading
Goodman, Nelson, 1978, Ways of Worldmaking, Hackett, , paperback
Epistemological theories
Metaphysical theories
Metaphysics of mind | 0.79436 | 0.989547 | 0.786057 |
Theory of forms | In philosophy and specifically metaphysics, the theory of Forms, theory of Ideas, Platonic idealism, or Platonic realism is a theory widely credited to the Classical Greek philosopher Plato. The theory suggests that the physical world is not as real or true as "Forms". According to this theory, Forms—conventionally capitalized and also commonly translated as "Ideas"—are the non-physical, timeless, absolute, and unchangeable essences of all things, of which objects and matter in the physical world are merely imitations. Plato speaks of these entities only through the characters (primarily Socrates) in his dialogues who sometimes suggest that these Forms are the only objects of study that can provide knowledge.
Ancient sources suggest that Pythagoras developed a similar theory earlier than Plato, with Pythagoras's theory specifically proposing that the world is entirely composed of numbers. The early Greek concept of form precedes attested philosophical usage, and is represented by a number of words which mainly relate to vision, sight, and appearance. Plato uses these aspects of sight and appearance from the early Greek concept in his dialogues to explain his Forms, including the Form of the Good. The theory itself is contested by characters within Plato's dialogues, and it remains a general point of controversy in philosophy. Nonetheless, it is considered to be a classical solution to the problem of universals.
Forms
The original meaning of the term eidos (εἶδος), "visible form", and related terms morphē (μορφή), "shape", and phainomena (φαινόμενα), "appearances", from phainō (φαίνω), "shine", Indo-European *bʰeh₂- or *bhā-, remained stable over the centuries until the beginning of Western philosophy, when they became equivocal, acquiring additional specialized philosophic meanings. Plato used the terms eidos and idea (ἰδέα) interchangeably.
The pre-Socratic philosophers, starting with Thales, noted that appearances change, and began to ask what the thing that changes "really" is. The answer was substance, which stands under the changes and is the actually existing thing being seen. The status of appearances now came into question. What is the form really and how is that related to substance?
The Forms are expounded upon in Plato's dialogues and general speech, in that every object or quality in reality—dogs, human beings, mountains, colors, courage, love, and goodness—has a form. Form answers the question, "What is that?" Plato was going a step further and asking what Form itself is. He supposed that the object was essentially or "really" the Form and that the phenomena were mere shadows mimicking the Form; that is, momentary portrayals of the Form under different circumstances. The problem of universals – how can one thing in general be many things in particular – was solved by presuming that Form was a distinct singular thing but caused plural representations of itself in particular objects. For example, in the dialogue Parmenides, Socrates states: "Nor, again, if a person were to show that all is one by partaking of one, and at the same time many by partaking of many, would that be very astonishing. But if he were to show me that the absolute one was many, or the absolute many one, I should be truly amazed." Matter is considered particular in itself. For Plato, forms, such as beauty, are more real than any objects that imitate them. Though the forms are timeless and unchanging, physical things are in a constant change of existence. Where forms are unqualified perfection, physical things are qualified and conditioned.
These Forms are the essences of various objects: they are that without which a thing would not be the kind of thing it is. For example, there are countless tables in the world but the Form of tableness is at the core; it is the essence of all of them. Plato's Socrates held that the world of Forms is transcendent to our own world (the world of substances) and also is the essential basis of reality. Super-ordinate to matter, Forms are the most pure of all things. Furthermore, he believed that true knowledge/intelligence is the ability to grasp the world of Forms with one's mind.
A Form is aspatial (transcendent to space) and atemporal (transcendent to time). In the world of Plato, atemporal means that it does not exist within any time period, rather it provides the formal basis for time. It therefore formally grounds beginning, persisting and ending. It is neither eternal in the sense of existing forever, nor mortal, of limited duration. It exists transcendent to time altogether. Forms are aspatial in that they have no spatial dimensions, and thus no orientation in space, nor do they even (like the point) have a location. They are non-physical, but they are not in the mind. Forms are extra-mental (i.e. real in the strictest sense of the word).
A Form is an objective "blueprint" of perfection. The Forms are perfect and unchanging representations of objects and qualities, such as the Form of beauty or the Form of a triangle. Take the Form of a triangle: say there is a triangle drawn on a blackboard. A triangle is a polygon with three sides, and the triangle as it is drawn on the blackboard is far from perfect. It is only the intelligibility of the Form "triangle" that allows us to know the drawing on the blackboard is a triangle, and the Form "triangle" is itself perfect and unchanging. It is exactly the same whenever anyone chooses to consider it; time affects only the observer, not the triangle. It follows that the same attributes would hold for the Form of beauty and for all Forms.
Plato explains how we are always many steps away from the idea or Form. The idea of a perfect circle can have us defining, speaking, writing, and drawing about particular circles that are always steps away from the actual being. The perfect circle, partly represented by a curved line and a precise definition, cannot be drawn. Even pi, an irrational number, only partly helps to describe the perfect circle fully. The idea of the perfect circle is discovered, not invented.
Intelligible realm and separation of the Forms
Plato often invokes, particularly in his dialogues Phaedo, Republic and Phaedrus, poetic language to illustrate the mode in which the Forms are said to exist. Near the end of the Phaedo, for example, Plato describes the world of Forms as a pristine region of the physical universe located above the surface of the Earth (Phd. 109a–111c). In the Phaedrus the Forms are in a "place beyond heaven" (hyperouranios topos) (Phdr. 247c ff); and in the Republic the sensible world is contrasted with the intelligible realm (noēton topon) in the famous Allegory of the Cave.
It would be a mistake to take Plato's imagery as positing the intelligible world as a literal physical space apart from this one. Plato emphasizes that the Forms are not beings that extend in space (or time), but subsist apart from any physical space whatsoever. Thus we read in the Symposium of the Form of Beauty: "It is not anywhere in another thing, as in an animal, or in earth, or in heaven, or in anything else, but itself by itself with itself," (211b). And in the Timaeus Plato writes: "Since these things are so, we must agree that that which keeps its own form unchangingly, which has not been brought into being and is not destroyed, which neither receives into itself anything else from anywhere else, nor itself enters into anything anywhere, is one thing," (52a, emphasis added).
Ambiguities of the theory
Plato's conception of Forms actually differs from dialogue to dialogue, and in certain respects it is never fully explained, so many aspects of the theory are open to interpretation. Forms are first introduced in the Phaedo, but in that dialogue the concept is simply referred to as something the participants are already familiar with, and the theory itself is not developed. Similarly, in the Republic, Plato relies on the concept of Forms as the basis of many of his arguments but feels no need to argue for the validity of the theory itself or to explain precisely what Forms are. Commentators have been left with the task of explaining what Forms are and how visible objects participate in them, and there has been no shortage of disagreement. Some scholars advance the view that Forms are paradigms, perfect examples on which the imperfect world is modeled. Others interpret Forms as universals, so that the Form of Beauty, for example, is that quality that all beautiful things share. Yet others interpret Forms as "stuffs," the conglomeration of all instances of a quality in the visible world. Under this interpretation, we could say there is a little beauty in one person, a little beauty in another – all the beauty in the world put together is the Form of Beauty. Plato himself was aware of the ambiguities and inconsistencies in his Theory of Forms, as is evident from the incisive criticism he makes of his own theory in the Parmenides.
Evidence of Forms
Human perception
In Cratylus, Plato writes: "But if the very nature of knowledge changes, at the time when the change occurs there will be no knowledge, and, according to this view, there will be no one to know and nothing to be known: but if that which knows and that which is known exist ever, and the beautiful and the good and every other thing also exist, then I do not think that they can resemble a process of flux, as we were just now supposing."
Plato believed that long before our bodies ever existed, our souls existed and inhabited heaven, where they became directly acquainted with the forms themselves. Real knowledge, to him, was knowledge of the forms. But knowledge of the forms cannot be gained through sensory experience because the forms are not in the physical world. Therefore, our real knowledge of the forms must be the memory of our initial acquaintance with the forms in heaven. Therefore, what we seem to learn is in fact just remembering.
Perfection
No one has ever seen a perfect circle, nor a perfectly straight line, yet everyone knows what a circle and a straight line are. Plato uses the tool-maker's blueprint as evidence that Forms are real: "... when a man has discovered the instrument which is naturally adapted to each work, he must express this natural form, and not others which he fancies, in the material ...."
Perceived circles or lines are not exactly circular or straight, and true circles and lines could never be detected since by definition they are sets of infinitely small points. But if the perfect ones were not real, how could they direct the manufacturer?
Criticisms of Platonic Forms
Self-criticism
One difficulty lies in the conceptualization of the "participation" of an object in a form (or Form). The young Socrates conceives of his solution to the problem of the universals in another metaphor:
Nay, but the idea may be like the day which is one and the same in many places at once, and yet continuous with itself; in this way each idea may be one and the same in all at the same time.
But exactly how is a Form like the day in being everywhere at once? The solution calls for a distinct form, in which the particular instances, which are not identical to the form, participate; i.e., the form is shared out somehow like the day to many places. The concept of "participate", represented in Greek by more than one word, is as obscure in Greek as it is in English. Plato hypothesized that distinctness meant existence as an independent being, thus opening himself to the famous third man argument of Parmenides, which proves that forms cannot independently exist and be participated.
If universal and particulars – say man or greatness – all exist and are the same then the Form is not one but is multiple. If they are only like each other then they contain a form that is the same and others that are different. Thus if we presume that the Form and a particular are alike then there must be another, or third Form, man or greatness by possession of which they are alike. An infinite regression would then result; that is, an endless series of third men. The ultimate participant, greatness, rendering the entire series great, is missing. Moreover, any Form is not unitary but is composed of infinite parts, none of which is the proper Form.
The young Socrates did not give up the Theory of Forms over the Third Man but took another tack, that the particulars do not exist as such. Whatever they are, they "mime" the Forms, appearing to be particulars. This is a clear dip into representationalism, that we cannot observe the objects as they are in themselves but only their representations. That view has the weakness that if only the mimes can be observed then the real Forms cannot be known at all and the observer can have no idea of what the representations are supposed to represent or that they are representations.
Socrates' later answer would be that men already know the Forms because they were in the world of Forms before birth. The mimes only recall these Forms to memory.
Aristotelian criticism
The topic of Aristotle's criticism of Plato's Theory of Forms is a large one and continues to expand. Rather than quote Plato, Aristotle often summarized. Classical commentaries thus recommended Aristotle as an introduction to Plato, even when in disagreement; the Platonist Syrianus used Aristotelian critiques to further refine the Platonic position on forms in use in his school, a position handed down to his student Proclus. As a historian of prior thought, Aristotle was invaluable; however, this was secondary to his own dialectic, and in some cases he treats purported implications as if Plato had actually mentioned them, or even defended them. In examining Aristotle's criticism of the Forms, it is helpful to understand Aristotle's own hylomorphic forms, by which he intends to salvage much of Plato's theory.
Plato distinguished between real and non-real "existing things", where the latter term is used of substance. The figures that the artificer places in the gold are not substance, but gold is. Aristotle stated that, for Plato, all things studied by the sciences have Form and asserted that Plato considered only substance to have Form. Uncharitably, this leads him to something like a contradiction: Forms existing as the objects of science, but not-existing as substance. Scottish philosopher W.D. Ross objects to this as a mischaracterization of Plato.
Plato did not claim to know where the line between Form and non-Form is to be drawn. As Cornford points out, those things about which the young Socrates (and Plato) asserted "I have often been puzzled about these things" (in reference to Man, Fire and Water), appear as Forms in later works. However, others do not, such as Hair, Mud, Dirt. Of these, Socrates is made to assert, "it would be too absurd to suppose that they have a Form."
Ross also objects to Aristotle's criticism that Form Otherness accounts for the differences between Forms and purportedly leads to contradictory forms: the Not-tall, the Not-beautiful, etc. That particulars participate in a Form is for Aristotle much too vague to permit analysis. By one way in which he unpacks the concept, the Forms would cease to be of one essence due to any multiple participation. As Ross indicates, Plato didn't make that leap from "A is not B" to "A is Not-B." Otherness would only apply to its own particulars and not to those of other Forms. For example, there is no Form Not-Greek, only particulars of Form Otherness that somehow suppress Form Greek.
Regardless of whether Socrates meant the particulars of Otherness yield Not-Greek, Not-tall, Not-beautiful, etc., the particulars would operate specifically rather than generally, each somehow yielding only one exclusion.
Plato had postulated that we know Forms through a remembrance of the soul's past lives and Aristotle's arguments against this treatment of epistemology are compelling. For Plato, particulars somehow do not exist, and, on the face of it, "that which is non-existent cannot be known". See Metaphysics III 3–4.
Scholastic criticism
Nominalism (from Latin nomen, "name") says that ideal universals are mere names, human creations; the blueness shared by sky and blue jeans is a shared concept, communicated by our word "blueness". Blueness is held not to have any existence beyond that which it has in instances of blue things. This concept arose in the Middle Ages, as part of Scholasticism.
Scholasticism was a highly multinational, polyglot school of philosophy, and the nominalist argument may be more obvious if an example is given in more than one language. For instance, colour terms are strongly variable by language: some languages consider blue and green the same colour, others have monolexemic terms for several shades of blue which are considered different colours, and still others, like Mandarin with its term qing, denote both blue and black with one word. The German word "Stift" means a pen or a pencil, and also anything of the same shape. The English "pencil" originally meant "small paintbrush"; the term later included the silver rod used for silverpoint. The German "Bleistift" and "Silberstift" can both be called "Stift", but this term also includes felt-tip pens, which are clearly not pencils.
The shifting and overlapping nature of these concepts makes it easy to imagine them as mere names, with meanings not rigidly defined, but specific enough to be useful for communication. Given a group of objects, how is one to decide if it contains only instances of a single Form, or several mutually exclusive Forms?
See also
Archetype
Analogy of the Divided Line
Dmuta in Mandaeism
Exaggerated realism
Form of the Good
Hyperuranion
Idealism
Jungian archetypes
Map–territory relation
Nominalism
Plotinus
Problem of universals
Substantial form
Platonic solid
Plato's unwritten doctrines, for debates over Forms and Plato's higher, esoteric theories
Realism (disambiguation)
True form (Taoism)
Notes
Dialogues that discuss Forms
The theory is presented in the following dialogues:
Meno: 71–81, 85–86: The discovery (or "recollection") of knowledge as latent in the soul, pointing forward to the theory of Forms
Phaedo
73–80: The theory of recollection restated as knowledge of the Forms in soul before birth in the body, 109–111: The myth of the afterlife, 100c: The theory of absolute beauty
Symposium: 210–211: The archetype of Beauty.
Phaedrus: 248–250: Reincarnation according to knowledge of the true, 265–266: The unity problem in thought and nature.
Cratylus: 389–390: The archetype as used by craftsmen, 439–440: The problem of knowing the Forms.
Theaetetus: 184–186: Universals understood by mind and not perceived by senses.
Sophist: 246–259: True essence a Form. Effective solution to participation problem. The problem with being as a Form; if it is participatory then non-being must exist and be being.
Parmenides: 129–135: Participatory solution of unity problem. Things partake of archetypal like and unlike, one and many, etc. The nature of the participation (Third man argument). Forms not actually in the thing. The problem of their unknowability.
Republic
Book III: 402–403: Education the pursuit of the Forms.
Book V: 472–483: Philosophy the love of the Forms. The philosopher-king must rule.
Books VI–VII: 500–517: Philosopher-guardians as students of the Beautiful and Just implement archetypical order, Metaphor of the Sun: The sun is to sight as Good is to understanding, Allegory of the Cave: The struggle to understand forms like men in cave guessing at shadows in firelight.
Books IX–X, 589–599: The ideal state and its citizens. Extensive treatise covering citizenship, government and society with suggestions for laws imitating the Good, the True, the Just, etc. Metaphor of the three beds.
Timaeus: 27–52: The design of the universe, including numbers and physics. Some of its patterns. Definition of matter.
Philebus: 14-18: Unity problem: one and many, parts and whole.
Seventh Letter: 342–345: The epistemology of Forms. The Seventh Letter is possibly spurious.
Bibliography
Reviewed by Matía Cubillo, Gerardo Óscar (2021). "Suggestions on How to Combine the Platonic Forms to Overcome the Interpretative Difficulties of the Parmenides Dialogue", Revista de Filosofía de la Universidad de Costa Rica, vol. 60, 156: 157–171.
External links
Idealism
Platonism
Theories in ancient Greek philosophy | 0.786792 | 0.999016 | 0.786018 |
Practical philosophy | Practical philosophy concerns itself mainly with subjects that have applications in life, like the study of values, norms, politics, art, etc. The modern division of philosophy into theoretical philosophy and practical philosophy has its origin in Aristotle's categories of natural and moral philosophy. The one has theory for its object and the other practice.
Subjects of practical philosophy
Examples of practical philosophy subjects are:
Ethics
Aesthetics
Decision theory
Political philosophy
Philosophical counseling
Practical philosophy is also the use of philosophy and philosophical techniques in everyday life. This can take a number of forms including reflective practice, personal philosophical thinking, and philosophical counseling.
Examples of philosophical counseling subjects include:
Philosophy of education
Philosophy of law
Philosophy of religion
Philosophy of history
Philosophy of social science
Value theory
Reflective practice
University education
In Sweden and Finland, courses in theoretical and practical philosophy are taught separately and constitute separate degrees. Other countries may use a similar scheme—some Scottish universities, for example, divide philosophy into logic, metaphysics, and ethics—but at most universities around the world philosophy is taught as a single subject. Some Swedish universities, such as Södertörns Högskola, also offer a unified philosophy subject.
See also
Applied philosophy
Philosophical logic

Understood in a narrow sense, philosophical logic is the area of logic that studies the application of logical methods to philosophical problems, often in the form of extended logical systems like modal logic. Some theorists conceive philosophical logic in a wider sense as the study of the scope and nature of logic in general. In this sense, philosophical logic can be seen as identical to the philosophy of logic, which includes additional topics like how to define logic or a discussion of the fundamental concepts of logic. The current article treats philosophical logic in the narrow sense, in which it forms one field of inquiry within the philosophy of logic.
An important issue for philosophical logic is the question of how to classify the great variety of non-classical logical systems, many of which are of rather recent origin. One form of classification often found in the literature is to distinguish between extended logics and deviant logics. Logic itself can be defined as the study of valid inference. Classical logic is the dominant form of logic and articulates rules of inference in accordance with logical intuitions shared by many, like the law of excluded middle, the double negation elimination, and the bivalence of truth.
Extended logics are logical systems that are based on classical logic and its rules of inference but extend it to new fields by introducing new logical symbols and the corresponding rules of inference governing these symbols. In the case of alethic modal logic, these new symbols are used to express not just what is true simpliciter, but also what is possibly or necessarily true. It is often combined with possible worlds semantics, which holds that a proposition is possibly true if it is true in some possible world while it is necessarily true if it is true in all possible worlds. Deontic logic pertains to ethics and provides a formal treatment of ethical notions, such as obligation and permission. Temporal logic formalizes temporal relations between propositions. This includes ideas like whether something is true at some time or all the time and whether it is true in the future or in the past. Epistemic logic belongs to epistemology. It can be used to express not just what is the case but also what someone believes or knows to be the case. Its rules of inference articulate what follows from the fact that someone has these kinds of mental states. Higher-order logics do not directly apply classical logic to certain new sub-fields within philosophy but generalize it by allowing quantification not just over individuals but also over predicates.
Deviant logics, in contrast to these forms of extended logics, reject some of the fundamental principles of classical logic and are often seen as its rivals. Intuitionistic logic is based on the idea that truth depends on verification through a proof. This leads it to reject certain rules of inference found in classical logic that are not compatible with this assumption. Free logic modifies classical logic in order to avoid existential presuppositions associated with the use of possibly empty singular terms, like names and definite descriptions. Many-valued logics allow additional truth values besides true and false. They thereby reject the principle of bivalence of truth. Paraconsistent logics are logical systems able to deal with contradictions. They do so by avoiding the principle of explosion found in classical logic. Relevance logic is a prominent form of paraconsistent logic. It rejects the purely truth-functional interpretation of the material conditional by introducing the additional requirement of relevance: for the conditional to be true, its antecedent has to be relevant to its consequent.
Definition and related fields
The term "philosophical logic" is used by different theorists in slightly different ways. When understood in a narrow sense, as discussed in this article, philosophical logic is the area of philosophy that studies the application of logical methods to philosophical problems. This usually happens in the form of developing new logical systems to either extend classical logic to new areas or to modify it to include certain logical intuitions not properly addressed by classical logic. In this sense, philosophical logic studies various forms of non-classical logics, like modal logic and deontic logic. This way, various fundamental philosophical concepts, like possibility, necessity, obligation, permission, and time, are treated in a logically precise manner by formally expressing the inferential roles they play in relation to each other. Some theorists understand philosophical logic in a wider sense as the study of the scope and nature of logic in general. On this view, it investigates various philosophical problems raised by logic, including the fundamental concepts of logic. In this wider sense, it can be understood as identical to the philosophy of logic, where these topics are discussed. The current article discusses only the narrow conception of philosophical logic. In this sense, it forms one area of the philosophy of logic.
Central to philosophical logic is an understanding of what logic is and what role philosophical logics play in it. Logic can be defined as the study of valid inferences. An inference is the step of reasoning in which it moves from the premises to a conclusion. Often the term "argument" is also used instead. An inference is valid if it is impossible for the premises to be true and the conclusion to be false. In this sense, the truth of the premises ensures the truth of the conclusion. This can be expressed in terms of rules of inference: an inference is valid if its structure, i.e. the way its premises and its conclusion are formed, follows a rule of inference. Different systems of logic provide different accounts for when an inference is valid. This means that they use different rules of inference. The traditionally dominant approach to validity is called classical logic. But philosophical logic is concerned with non-classical logic: it studies alternative systems of inference. The motivations for doing so can roughly be divided into two categories. For some, classical logic is too narrow: it leaves out many philosophically interesting issues. This can be solved by extending classical logic with additional symbols to give a logically strict treatment of further areas. Others see some flaw with classical logic itself and try to give a rival account of inference. This usually leads to the development of deviant logics, each of which modifies the fundamental principles behind classical logic in order to rectify their alleged flaws.
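The model-theoretic test for validity described here, that no assignment of truth values makes the premises true and the conclusion false, can be made concrete for propositional logic by brute-force search. The following Python sketch is purely illustrative (the language and all names are choices of this illustration, not part of the source):

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# Modus ponens: from p and "if p then q", infer q -- valid.
modus_ponens = is_valid(
    [lambda e: e["p"], lambda e: (not e["p"]) or e["q"]],
    lambda e: e["q"], ["p", "q"])

# Affirming the consequent: from q and "if p then q", infer p -- invalid.
affirming = is_valid(
    [lambda e: e["q"], lambda e: (not e["p"]) or e["q"]],
    lambda e: e["p"], ["p", "q"])

print(modus_ponens, affirming)  # True False
```

The counterexample the search finds for affirming the consequent is the assignment where q is true and p is false: both premises hold, the conclusion does not.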
Classification of logics
Modern developments in the area of logic have resulted in a great proliferation of logical systems. This stands in stark contrast to the historical dominance of Aristotelian logic, which was treated as the one canon of logic for over two thousand years. Treatises on modern logic often treat these different systems as a list of separate topics without providing a clear classification of them. However, one classification frequently mentioned in the academic literature is due to Susan Haack and distinguishes between classical logic, extended logics, and deviant logics. This classification is based on the idea that classical logic, i.e. propositional logic and first-order logic, formalizes some of the most common logical intuitions. In this sense, it constitutes a basic account of the axioms governing valid inference. Extended logics accept this basic account and extend it to additional areas. This usually happens by adding new vocabulary, for example, to express necessity, obligation, or time. These new symbols are then integrated into the logical mechanism by specifying which new rules of inference apply to them, like that possibility follows from necessity. Deviant logics, on the other hand, reject some of the basic assumptions of classical logic. In this sense, they are not mere extensions of it but are often formulated as rival systems that offer a different account of the laws of logic.
Expressed in a more technical language, the distinction between extended and deviant logics is sometimes drawn in a slightly different manner. On this view, a logic is an extension of classical logic if two conditions are fulfilled: (1) all well-formed formulas of classical logic are also well-formed formulas in it and (2) all valid inferences in classical logic are also valid inferences in it. For a deviant logic, on the other hand, (a) its class of well-formed formulas coincides with that of classical logic, while (b) some valid inferences in classical logic are not valid inferences in it. The term quasi-deviant logic is used if (i) it introduces new vocabulary but all well-formed formulas of classical logic are also well-formed formulas in it and (ii) even when it is restricted to inferences using only the vocabulary of classical logic, some valid inferences in classical logic are not valid inferences in it. The term "deviant logic" is often used in a sense that includes quasi-deviant logics as well.
A philosophical problem raised by this plurality of logics concerns the question of whether there can be more than one true logic. Some theorists favor a local approach in which different types of logic are applied to different areas. Early intuitionists, for example, saw intuitionistic logic as the correct logic for mathematics but allowed classical logic in other fields. But others, like Michael Dummett, prefer a global approach by holding that intuitionistic logic should replace classical logic in every area. Monism is the thesis that there is only one true logic. This can be understood in different ways, for example, that only one of all the suggested logical systems is correct or that the correct logical system is yet to be found as a system underlying and unifying all the different logics. Pluralists, on the other hand, hold that a variety of different logical systems can all be correct at the same time.
A closely related problem concerns the question of whether all of these formal systems actually constitute logical systems. This is especially relevant for deviant logics that stray very far from the common logical intuitions associated with classical logic. In this sense, it has been argued, for example, that fuzzy logic is a logic only in name but should be considered a non-logical formal system instead since the idea of degrees of truth is too far removed from the most fundamental logical intuitions. So not everyone agrees that all the formal systems discussed in this article actually constitute logics, when understood in a strict sense.
Classical logic
Classical logic is the dominant form of logic used in most fields. The term refers primarily to propositional logic and first-order logic. Classical logic is not an independent topic within philosophical logic. But a good familiarity with it is still required since many of the logical systems of direct concern to philosophical logic can be understood either as extensions of classical logic, which accept its fundamental principles and build on top of it, or as modifications of it, rejecting some of its core assumptions. Classical logic was initially created in order to analyze mathematical arguments and was applied to various other fields only afterward. For this reason, it neglects many topics of philosophical importance not relevant to mathematics, like the difference between necessity and possibility, between obligation and permission, or between past, present, and future. These and similar topics are given a logical treatment in the different philosophical logics extending classical logic. Classical logic by itself is only concerned with a few basic concepts and the role these concepts play in making valid inferences. The concepts pertaining to propositional logic include propositional connectives, like "and", "or", and "if-then". Characteristic of the classical approach to these connectives is that they follow certain laws, like the law of excluded middle, the double negation elimination, the principle of explosion, and the bivalence of truth. This sets classical logic apart from various deviant logics, which deny one or several of these principles.
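The classical laws named above (excluded middle, double negation elimination, explosion) can each be checked mechanically as propositional tautologies, true under every assignment of truth values. A minimal Python sketch (illustrative names, not from the source):

```python
from itertools import product

def tautology(formula, variables):
    """A formula is a classical tautology if it is true under every assignment."""
    return all(formula(dict(zip(variables, vs)))
               for vs in product([True, False], repeat=len(variables)))

def implies(a, b):
    # the material conditional as a pure truth function
    return (not a) or b

excluded_middle = tautology(lambda e: e["p"] or not e["p"], ["p"])
double_negation = tautology(lambda e: implies(not (not e["p"]), e["p"]), ["p"])
explosion = tautology(lambda e: implies(e["p"] and not e["p"], e["q"]), ["p", "q"])
print(excluded_middle, double_negation, explosion)  # True True True
```

Deviant logics discussed below reject one or more of these checks as the wrong standard, rather than disputing the truth-table computation itself.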
In first-order logic, the propositions themselves are made up of subpropositional parts, like predicates, singular terms, and quantifiers. Singular terms refer to objects and predicates express properties of objects and relations between them. Quantifiers constitute a formal treatment of notions like "for some" and "for all". They can be used to express whether predicates have an extension at all or whether their extension includes the whole domain. Quantification is only allowed over individual terms but not over predicates, in contrast to higher-order logics.
Extended logics
Alethic modal
Alethic modal logic has been very influential in logic and philosophy. It provides a logical formalism to express what is possibly or necessarily true. It constitutes an extension of first-order logic, which by itself is only able to express what is true simpliciter. This extension happens by introducing two new symbols: ◇ for possibility and □ for necessity. These symbols are used to modify propositions. For example, if A stands for the proposition "Socrates is wise", then ◇A expresses the proposition "it is possible that Socrates is wise". In order to integrate these symbols into the logical formalism, various axioms are added to the existing axioms of first-order logic. They govern the logical behavior of these symbols by determining how the validity of an inference depends on the fact that these symbols are found in it. They usually include the idea that if a proposition is necessary then its negation is impossible, i.e. that □A is equivalent to ¬◇¬A. Another such principle is that if something is necessary, then it must also be possible. This means that ◇A follows from □A. There is disagreement about exactly which axioms govern modal logic. The different forms of modal logic are often presented as a nested hierarchy of systems in which the most fundamental systems, like system K, include only the most fundamental axioms while other systems, like the popular system S5, build on top of it by including additional axioms. In this sense, system K is an extension of first-order logic while system S5 is an extension of system K. Important discussions within philosophical logic concern the question of which system of modal logic is correct. It is usually advantageous to have the strongest system possible in order to be able to draw many different inferences. But this brings with it the problem that some of these additional inferences may contradict basic modal intuitions in specific cases. This usually motivates the choice of a more basic system of axioms.
Possible worlds semantics is a very influential formal semantics in modal logic that brings with it system S5. A formal semantics of a language characterizes the conditions under which the sentences of this language are true or false. Formal semantics play a central role in the model-theoretic conception of validity. They are able to provide clear criteria for when an inference is valid or not: an inference is valid if and only if it is truth-preserving, i.e. if whenever its premises are true then its conclusion is also true. Whether they are true or false is specified by the formal semantics. Possible worlds semantics specifies the truth conditions of sentences expressed in modal logic in terms of possible worlds. A possible world is a complete and consistent way how things could have been. On this view, a sentence modified by the ◇-operator is true if it is true in at least one possible world while a sentence modified by the □-operator is true if it is true in all possible worlds. So the sentence ◇A (it is possible that Socrates is wise) is true since there is at least one world where Socrates is wise. But □A (it is necessary that Socrates is wise) is false since Socrates is not wise in every possible world. Possible worlds semantics has been criticized as a formal semantics of modal logic since it seems to be circular. The reason for this is that possible worlds are themselves defined in modal terms, i.e. as ways how things could have been. In this way, it itself uses modal expressions to determine the truth of sentences containing modal expressions.
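The possible-worlds truth conditions can be sketched directly: a toy model with two worlds, under the simplifying S5-style assumption that every world is accessible from every other (so the accessibility relation is left implicit). The worlds and the proposition are invented for illustration:

```python
# Two possible worlds; in w2 Socrates happens not to be wise.
worlds = {"w1": {"socrates_wise": True}, "w2": {"socrates_wise": False}}

def possibly(prop):
    # diamond-A: true in at least one possible world
    return any(prop(w) for w in worlds.values())

def necessarily(prop):
    # box-A: true in all possible worlds
    return all(prop(w) for w in worlds.values())

wise = lambda w: w["socrates_wise"]
print(possibly(wise), necessarily(wise))  # True False
```

This mirrors the Socrates example: possibility holds because one world suffices, necessity fails because one world is enough to refute it.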
Deontic
Deontic logic extends classical logic to the field of ethics. Of central importance in ethics are the concepts of obligation and permission, i.e. which actions the agent has to do or is allowed to do. Deontic logic usually expresses these ideas with the operators O and P. So if J stands for the proposition "Ramirez goes jogging", then OJ means that Ramirez has the obligation to go jogging and PJ means that Ramirez has the permission to go jogging.
Deontic logic is closely related to alethic modal logic in that the axioms governing the logical behavior of their operators are identical. This means that obligation and permission behave in regards to valid inference just like necessity and possibility do. For this reason, sometimes even the same symbols are used as operators. Just as in alethic modal logic, there is a discussion in philosophical logic concerning which is the right system of axioms for expressing the common intuitions governing deontic inferences. But the arguments and counterexamples here are slightly different since the meanings of these operators differ. For example, a common intuition in ethics is that if the agent has the obligation to do something then they automatically also have the permission to do it. This can be expressed formally through the axiom schema OA → PA. Another question of interest to philosophical logic concerns the relation between alethic modal logic and deontic logic. An often discussed principle in this respect is that ought implies can. This means that the agent can only have the obligation to do something if it is possible for the agent to do it. Expressed formally: OA → ◇A.
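One standard way to give these operators a semantics is Kripke-style, over a set of deontically ideal worlds: obligatory means true in all ideal worlds, permitted means true in some. A toy Python sketch (the worlds and predicates are invented for illustration) shows that the axiom "obligation implies permission" holds whenever at least one ideal world exists:

```python
# Deontically ideal worlds: permissible ways things may turn out (illustrative).
ideal_worlds = [{"jogs": True, "smokes": False},
                {"jogs": True, "smokes": True}]

def obligatory(prop):
    # O: the proposition holds in every deontically ideal world
    return all(prop(w) for w in ideal_worlds)

def permitted(prop):
    # P: the proposition holds in at least one deontically ideal world
    return any(prop(w) for w in ideal_worlds)

jogs = lambda w: w["jogs"]
smokes = lambda w: w["smokes"]
print(obligatory(jogs), permitted(jogs))      # True True
print(obligatory(smokes), permitted(smokes))  # False True
```

Since "all ideal worlds" implies "some ideal world" on a non-empty set, obligation entails permission in this semantics, matching the axiom schema discussed above.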
Temporal
Temporal logic, or tense logic, uses logical mechanisms to express temporal relations. In its most simple form, it contains one operator to express that something happened at one time and another to express that something is happening all the time. These two operators behave in the same way as the operators for possibility and necessity in alethic modal logic. Since the difference between past and future is of central importance to human affairs, these operators are often modified to take this difference into account. Arthur Prior's tense logic, for example, realizes this idea using four such operators: P (it was the case that...), F (it will be the case that...), H (it has always been the case that...), and G (it will always be the case that...). So to express that it will always be rainy in London one could use GR, where R stands for "it is rainy in London". Various axioms are used to govern which inferences are valid depending on the operators appearing in them. According to them, for example, one can deduce FR (it will be rainy in London at some time) from GR. In more complicated forms of temporal logic, also binary operators linking two propositions are defined, for example, to express that something happens until something else happens.
Temporal modal logic can be translated into classical first-order logic by treating time in the form of a singular term and increasing the arity of one's predicates by one. For example, with L standing for "it is light", the tense-logic-sentence ¬L ∧ PL ∧ FL (it is dark, it was light, and it will be light again) can be translated into pure first-order logic as ¬L(t0) ∧ ∃t1 (t1 < t0 ∧ L(t1)) ∧ ∃t2 (t0 < t2 ∧ L(t2)), where t0 names the present moment. While similar approaches are often seen in physics, logicians usually prefer an autonomous treatment of time in terms of operators. This is also closer to natural languages, which mostly use grammar, e.g. by conjugating verbs, to express the pastness or futurity of events.
Epistemic
Epistemic logic is a form of modal logic applied to the field of epistemology. It aims to capture the logic of knowledge and belief. The modal operators expressing knowledge and belief are usually expressed through the symbols K and B. So if A stands for the proposition "Socrates is wise", then KA expresses the proposition "the agent knows that Socrates is wise" and BA expresses the proposition "the agent believes that Socrates is wise". Axioms governing these operators are then formulated to express various epistemic principles. For example, the axiom schema KA → A expresses that whenever something is known, then it is true. This reflects the idea that one can only know what is true, otherwise it is not knowledge but another mental state. Another epistemic intuition about knowledge concerns the fact that when the agent knows something, they also know that they know it. This can be expressed by the axiom schema KA → KKA. An additional principle linking knowledge and belief states that knowledge implies belief, i.e. KA → BA. Dynamic epistemic logic is a distinct form of epistemic logic that focuses on situations in which changes in belief and knowledge happen.
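A common possible-worlds reading of the knowledge operator treats it as truth in all worlds the agent cannot distinguish from the actual one. The Python sketch below (worlds, observations, and propositions are all invented for illustration) models indistinguishability as sharing the same observation, and checks the factivity principle that knowledge implies truth:

```python
# Worlds the agent cannot tell apart share the same observation (illustrative).
worlds = [{"obs": "wet_streets", "rained": True,  "cold": True},
          {"obs": "wet_streets", "rained": True,  "cold": False},
          {"obs": "dry_streets", "rained": False, "cold": False}]
actual = worlds[0]

def knows(prop):
    """K: the proposition holds in every world indistinguishable from the actual one."""
    return all(prop(w) for w in worlds if w["obs"] == actual["obs"])

rained = lambda w: w["rained"]
cold = lambda w: w["cold"]
print(knows(rained), knows(cold))  # True False

# Factivity (knowledge implies truth) holds at the actual world:
assert (not knows(rained)) or rained(actual)
```

The agent knows it rained, because rain holds in every world compatible with what was observed, but does not know it is cold, since one compatible world disagrees.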
Higher-order
Higher-order logics extend first-order logic by including new forms of quantification. In first-order logic, quantification is restricted to singular terms. It can be used to talk about whether a predicate has an extension at all or whether its extension includes the whole domain. This way, propositions like ∃x (Apple(x) ∧ Sweet(x)) (there are some apples that are sweet) can be expressed. In higher-order logics, quantification is allowed not just over individual terms but also over predicates. This way, it is possible to express, for example, whether certain individuals share some or all of their predicates, as in ∃X (X(mary) ∧ X(john)) (there are some qualities that Mary and John share). Because of these changes, higher-order logics have more expressive power than first-order logic. This can be helpful for mathematics in various ways since different mathematical theories have a much simpler expression in higher-order logic than in first-order logic. For example, Peano arithmetic and Zermelo–Fraenkel set theory need an infinite number of axioms to be expressed in first-order logic. But they can be expressed in second-order logic with only a few axioms.
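Over a finite domain, the difference between the two kinds of quantification can be sketched concretely: first-order quantifiers range over individuals, second-order quantifiers over properties. In the Python sketch below (individuals and properties are invented names), properties are represented as sets so that they can themselves be quantified over:

```python
# Finite domain: individuals mapped to the properties they have (illustrative).
has = {"mary": {"tall", "kind"}, "john": {"kind", "fast"}}
properties = {"tall", "kind", "fast"}

# First-order quantification ranges over individuals: "some x is kind".
some_kind = any("kind" in has[x] for x in has)

# Second-order quantification ranges over predicates:
# "there is some quality X that Mary and John share".
shared = any(p in has["mary"] and p in has["john"] for p in properties)

print(some_kind, shared)  # True True
```

The second comprehension quantifies over the set of properties itself, which is exactly what first-order syntax forbids and higher-order syntax allows.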
But despite this advantage, first-order logic is still much more widely used than higher-order logic. One reason for this is that higher-order logic is incomplete. This means that, for theories formulated in higher-order logic, it is not possible to prove every true sentence pertaining to the theory in question. Another disadvantage is connected to the additional ontological commitments of higher-order logics. It is often held that the usage of the existential quantifier brings with it an ontological commitment to the entities over which this quantifier ranges. In first-order logic, this concerns only individuals, which is usually seen as an unproblematic ontological commitment. In higher-order logic, quantification concerns also properties and relations. This is often interpreted as meaning that higher-order logic brings with it a form of Platonism, i.e. the view that universal properties and relations exist in addition to individuals.
Deviant logics
Intuitionistic
Intuitionistic logic is a more restricted version of classical logic. It is more restricted in the sense that certain rules of inference used in classical logic do not constitute valid inferences in it. This concerns specifically the law of excluded middle and the double negation elimination. The law of excluded middle states that for every sentence, either it or its negation are true. Expressed formally: A ∨ ¬A. The law of double negation elimination states that if a sentence is not not true, then it is true, i.e. ¬¬A → A. Due to these restrictions, many proofs are more complicated and some proofs otherwise accepted become impossible.
These modifications of classical logic are motivated by the idea that truth depends on verification through a proof. This has been interpreted in the sense that "true" means "verifiable". It was originally only applied to the area of mathematics but has since then been used in other areas as well. On this interpretation, the law of excluded middle would involve the assumption that every mathematical problem has a solution in the form of a proof. In this sense, the intuitionistic rejection of the law of excluded middle is motivated by the rejection of this assumption. This position can also be expressed by stating that there are no unexperienced or verification-transcendent truths. In this sense, intuitionistic logic is motivated by a form of metaphysical idealism. Applied to mathematics, it states that mathematical objects exist only to the extent that they are constructed in the mind.
Free
Free logic rejects some of the existential presuppositions found in classical logic. In classical logic, every singular term has to denote an object in the domain of quantification. This is usually understood as an ontological commitment to the existence of the named entity. But many names are used in everyday discourse that do not refer to existing entities, like "Santa Claus" or "Pegasus". This threatens to preclude such areas of discourse from a strict logical treatment. Free logic avoids these problems by allowing formulas with non-denoting singular terms. This applies to proper names as well as definite descriptions, and functional expressions. Quantifiers, on the other hand, are treated in the usual way as ranging over the domain. This allows for expressions like ¬∃x (x = santa) (Santa Claus does not exist) to be true even though they are self-contradictory in classical logic. It also brings with it the consequence that certain valid forms of inference found in classical logic are not valid in free logic. For example, one may infer from B(santa) (Santa Claus has a beard) that ∃x B(x) (something has a beard) in classical logic but not in free logic. In free logic, often an existence-predicate is used to indicate whether a singular term denotes an object in the domain or not. But the usage of existence-predicates is controversial. They are often opposed, based on the idea that existence is required if any predicates should apply to the object at all. In this sense, existence cannot itself be a predicate.
Karel Lambert, who coined the term "free logic", has suggested that free logic can be understood as a generalization of classical predicate logic just as predicate logic is a generalization of Aristotelian logic. On this view, classical predicate logic introduces predicates with an empty extension while free logic introduces singular terms of non-existing things.
An important problem for free logic consists in how to determine the truth value of expressions containing empty singular terms, i.e. of formulating a formal semantics for free logic. Formal semantics of classical logic can define the truth of their expressions in terms of their denotation. But this option cannot be applied to all expressions in free logic since not all of them have a denotation. Three general approaches to this issue are often discussed in the literature: negative semantics, positive semantics, and neutral semantics. Negative semantics hold that all atomic formulas containing empty terms are false. On this view, the expression B(santa) ("Santa Claus has a beard") is false. Positive semantics allows that at least some expressions with empty terms are true. This usually includes identity statements, like santa = santa. Some versions introduce a second, outer domain for non-existing objects, which is then used to determine the corresponding truth values. Neutral semantics, on the other hand, hold that atomic formulas containing empty terms are neither true nor false. This is often understood as a three-valued logic, i.e. that a third truth value besides true and false is introduced for these cases.
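The contrast between the three semantics can be sketched as three evaluation rules for atomic formulas whose term may fail to denote. The Python sketch below is illustrative (the terms, extension, and function names are invented), using None for the "neither true nor false" value of neutral semantics:

```python
# "santa" is an empty term; "lincoln" denotes an object in the domain.
denotes = {"lincoln": "lincoln", "santa": None}
bearded = {"lincoln"}  # extension of "has a beard" within the domain

def negative(term, ext):
    """Negative semantics: atoms with empty terms come out false."""
    ref = denotes[term]
    return False if ref is None else ref in ext

def neutral(term, ext):
    """Neutral semantics: atoms with empty terms get a third value."""
    ref = denotes[term]
    return None if ref is None else ref in ext

def positive_identity(t1, t2):
    """Positive semantics (for identity): self-identities hold even for empty terms."""
    return t1 == t2 or (denotes[t1] is not None and denotes[t1] == denotes[t2])

print(negative("santa", bearded))           # False
print(neutral("santa", bearded))            # None
print(positive_identity("santa", "santa"))  # True
```

All three rules agree on denoting terms and differ only where denotation fails, which is exactly where the three semantics diverge.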
Many-valued
Many-valued logics are logics that allow for more than two truth values. They reject one of the core assumptions of classical logic: the principle of the bivalence of truth. The most simple versions of many-valued logics are three-valued logics: they contain a third truth value. In Stephen Cole Kleene's three-valued logic, for example, this third truth value is "undefined". According to Nuel Belnap's four-valued logic, there are four possible truth values: "true", "false", "neither true nor false", and "both true and false". This can be interpreted, for example, as indicating the information one has concerning whether a state obtains: information that it does obtain, information that it does not obtain, no information, and conflicting information. One of the most extreme forms of many-valued logic is fuzzy logic. It allows truth to arise in any degree between 0 and 1. 0 corresponds to completely false, 1 corresponds to completely true, and the values in between correspond to truth in some degree, e.g. as a little true or very true. It is often used to deal with vague expressions in natural language. For example, saying that "Petr is young" fits better (i.e. is "more true") if "Petr" refers to a three-year-old than if it refers to a 23-year-old. Many-valued logics with a finite number of truth-values can define their logical connectives using truth tables, just like classical logic. The difference is that these truth tables are more complex since more possible inputs and outputs have to be considered. In Kleene's three-valued logic, for example, the inputs "true" and "undefined" for the conjunction-operator result in the output "undefined". The inputs "false" and "undefined", on the other hand, result in "false".
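The extended truth tables mentioned above can be written out directly. The following Python sketch (illustrative, with a string standing in for the third value) implements strong Kleene conjunction, reproducing the two table entries cited in the text, and adds the common min-based conjunction of fuzzy logic:

```python
U = "undefined"  # Kleene's third truth value

def k_not(a):
    return U if a is U else (not a)

def k_and(a, b):
    """Strong Kleene conjunction: falsity dominates, undefinedness propagates."""
    if a is False or b is False:
        return False
    if a is U or b is U:
        return U
    return True

print(k_and(True, U))   # undefined
print(k_and(False, U))  # False

# Fuzzy logic grades truth in [0, 1]; one common choice takes "and" as min:
fuzzy_and = min
print(fuzzy_and(0.9, 0.3))  # 0.3
```

The ordering of the two checks in k_and encodes the table: a single false conjunct settles the question, while an undefined conjunct only leaves it open.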
Paraconsistent
Paraconsistent logics are logical systems that can deal with contradictions without leading to all-out absurdity. They achieve this by avoiding the principle of explosion found in classical logic. According to the principle of explosion, anything follows from a contradiction. This is the case because of two rules of inference, which are valid in classical logic: disjunction introduction and disjunctive syllogism. According to the disjunction introduction, any proposition can be introduced in the form of a disjunction when paired with a true proposition. So since it is true that "the sun is bigger than the moon", it is possible to infer that "the sun is bigger than the moon or Spain is controlled by space-rabbits". According to the disjunctive syllogism, one can infer that one of these disjuncts is true if the other is false. So if the logical system also contains the negation of this proposition, i.e. that "the sun is not bigger than the moon", then it is possible to infer any proposition from this system, like the proposition that "Spain is controlled by space-rabbits". Paraconsistent logics avoid this by using different rules of inference that make inferences in accordance with the principle of explosion invalid.
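The two-step derivation behind the principle of explosion can be traced schematically. In the Python sketch below, strings stand in for formulas and the two classical rules are applied in sequence; the names are invented for illustration, and this is a trace of the classical derivation that paraconsistent logics block, not a paraconsistent system itself:

```python
def explode(premises, arbitrary):
    """From a contradiction {A, ¬A}, the two classical rules yield any formula."""
    for a in premises:
        if "¬" + a in premises:               # the set contains A together with ¬A
            disjunction = (a, arbitrary)      # disjunction introduction: A ⊢ A ∨ B
            return disjunction[1]             # disjunctive syllogism: A ∨ B, ¬A ⊢ B
    return None                               # consistent set: this route yields nothing

print(explode({"sun_bigger_than_moon", "¬sun_bigger_than_moon"},
              "spain_controlled_by_space_rabbits"))
# spain_controlled_by_space_rabbits
```

A paraconsistent logic invalidates one of the two steps (typically disjunctive syllogism), so the contradiction stays local instead of licensing the arbitrary conclusion.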
An important motivation for using paraconsistent logics is dialetheism, i.e. the belief that contradictions are not just introduced into theories due to mistakes but that reality itself is contradictory and contradictions within theories are needed to accurately reflect reality. Without paraconsistent logics, dialetheism would be hopeless since everything would be both true and false. Paraconsistent logics make it possible to keep contradictions local, without exploding the whole system. But even with this adjustment, dialetheism is still highly contested. Another motivation for paraconsistent logic is to provide a logic for discussions and group beliefs where the group as a whole may have inconsistent beliefs if its different members are in disagreement.
Relevance
Relevance logic is one type of paraconsistent logic. As such, it also avoids the principle of explosion, even though this is usually not the main motivation behind relevance logic. Instead, it is usually formulated with the goal of avoiding certain unintuitive applications of the material conditional found in classical logic. Classical logic defines the material conditional in purely truth-functional terms, i.e. "P → Q" is false if P is true and Q is false, but otherwise true in every case. According to this formal definition, it does not matter whether P and Q are relevant to each other in any way. For example, the material conditional "if all lemons are red then there is a sandstorm inside the Sydney Opera House" is true even though the two propositions are not relevant to each other.
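The truth-functional definition can be made concrete in a few lines of Python (an illustrative sketch, not part of the article's sources):

```python
def material_conditional(p, q):
    """Classical material conditional: false only when the antecedent p
    is true and the consequent q is false; true in every other case."""
    return (not p) or q

# Full truth table:
for p in (True, False):
    for q in (True, False):
        print(p, q, material_conditional(p, q))

# "If all lemons are red then there is a sandstorm inside the Sydney Opera
# House": antecedent and consequent are both false, so the conditional comes
# out true, despite the total lack of relevance between them.
print(material_conditional(False, False))  # True
```

Since the definition consults only the truth values of the two propositions, no notion of relevance between them can enter into it, which is precisely the feature relevance logic objects to.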
The fact that this usage of material conditionals is highly unintuitive is also reflected in informal logic, which categorizes such inferences as fallacies of relevance. Relevance logic tries to avoid these cases by requiring that, for a true material conditional, its antecedent has to be relevant to the consequent. A difficulty for this approach is that relevance usually belongs to the content of the propositions, while logic deals only with formal aspects. This problem is partially addressed by the so-called variable sharing principle. It states that antecedent and consequent have to share a propositional variable. This would be the case, for example, in "(p ∧ q) → p" but not in "p → (q ∨ ¬q)". A closely related concern of relevance logic is that inferences should follow the same requirement of relevance, i.e. that it is a necessary requirement of valid inferences that their premises are relevant to their conclusion.
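Because the variable sharing principle is a purely formal criterion, it is straightforward to test mechanically. A minimal Python sketch (the formula examples are my own illustrations, consistent with the principle as stated above):

```python
def shares_variable(antecedent_vars, consequent_vars):
    """Variable sharing principle: a conditional can only be relevant if
    its antecedent and consequent share at least one propositional variable."""
    return bool(set(antecedent_vars) & set(consequent_vars))

# (p and q) -> p : antecedent uses {p, q}, consequent uses {p}; they share "p".
print(shares_variable({"p", "q"}, {"p"}))  # True: passes the principle
# p -> (q or not q): antecedent uses {p}, consequent uses {q}; nothing shared.
print(shares_variable({"p"}, {"q"}))       # False: fails the principle
```

Note that the principle is only a necessary condition on relevance, not a full analysis of it: sharing a variable is a formal proxy for the contentful connection between antecedent and consequent that relevance logic is after.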
References
Demarcation problem | In philosophy of science and epistemology, the demarcation problem is the question of how to distinguish between science and non-science. It also examines the boundaries between science, pseudoscience and other products of human activity, like art and literature and beliefs. The debate continues after more than two millennia of dialogue among philosophers of science and scientists in various fields. The debate has consequences for what can be termed "scientific" in topics such as education and public policy.
The ancients
An early attempt at demarcation can be seen in the efforts of Greek natural philosophers and medical practitioners to distinguish their methods and their accounts of nature from the mythological or mystical accounts of their predecessors and contemporaries.
G. E. R. Lloyd noted that there was a sense in which the groups engaged in various forms of inquiry into nature attempt to "legitimate their own positions", laying "claim to a new kind of wisdom ... that purported to yield superior enlightenment, even superior practical effectiveness". Medical writers of the Hippocratic tradition maintained that their discussions were based on demonstration of logical necessity, a theme developed by Aristotle in his Posterior Analytics. One element of this polemic for science was an insistence on a clear and unequivocal presentation of arguments, rejecting the imagery, analogy, and myth of the old wisdom. Some of their claimed naturalistic explanations of phenomena have been found to be quite fanciful, with little reliance on actual observations.
Cicero's De Divinatione implicitly used five criteria of scientific demarcation that are also used by modern philosophers of science.
Logical positivism
Logical positivism, formulated during the 1920s, is the idea that only statements about matters of fact or logical relations between concepts are meaningful. All other statements lack sense and are labelled "metaphysics" (see the verifiability theory of meaning also known as verificationism).
According to A. J. Ayer, metaphysicians make statements which claim to have "knowledge of a reality which [transcends] the phenomenal world". Ayer, a member of the Vienna Circle and a noted English logical-positivist, argued that making any statements about the world beyond one's immediate sense-perception is impossible. This is because even metaphysicians' first premises will necessarily begin with observations made through sense-perception.
Ayer implied that the demarcation occurs when statements become "factually significant". To be "factually significant", a statement must be verifiable. In order to be verifiable, the statement must be verifiable in the observable world, or facts that can be induced from "derived experience". This is referred to as the "verifiability" criterion.
This distinction between science, which in the opinion of the Vienna Circle possessed empirically verifiable statements, and what they pejoratively termed "metaphysics", which lacked such statements, can be considered as representing another aspect of the demarcation problem. Logical positivism is often discussed in the context of the demarcation between science and non-science or pseudoscience. However, "The verificationist proposals had the aim of solving a distinctly different demarcation problem, namely that between science and metaphysics."
Falsifiability
Karl Popper considered demarcation as a major problem of the philosophy of science. Popper articulates the problem of demarcation as:
Falsifiability is the demarcation criterion proposed by Popper as opposed to verificationism: "statements or systems of statements, in order to be ranked as scientific, must be capable of conflicting with possible, or conceivable observations."
Against verifiability
Popper rejected solutions to the problem of demarcation that are grounded in inductive reasoning, and so rejected logical-positivist responses to the problem of demarcation. He argued that logical-positivists want to create a demarcation between the metaphysical and the empirical because they believe that empirical claims are meaningful and metaphysical ones are not. Unlike the Vienna Circle, Popper stated that his proposal was not a criterion of "meaningfulness".
Popper argued that the Humean induction problem shows that there is no way to make meaningful universal statements on the basis of any number of empirical observations. Therefore, empirical statements are no more "verifiable" than metaphysical statements.
This creates a problem for the demarcation the positivists wanted to define between the empirical and the metaphysical. By their very own "verifiability criterion", Popper argued, the empirical is subsumed into the metaphysical, and the demarcation between the two becomes non-existent.
The solution of falsifiability
In Popper's later work, he stated that falsifiability is both a necessary and sufficient criterion for demarcation. He described falsifiability as a property of "the logical structure of sentences and classes of sentences", so that a statement's scientific or non-scientific status does not change over time. This has been summarized as a statement being falsifiable "if and only if it logically contradicts some (empirical) sentence that describes a logically possible event that it would be logically possible to observe".
Kuhnian postpositivism
Thomas Kuhn, an American historian and philosopher of science, is often associated with what has been termed postpositivism or postempiricism. In his 1962 book The Structure of Scientific Revolutions, Kuhn divided the process of doing science into two different endeavors, which he termed normal science and extraordinary science (sometimes known as "revolutionary science"), the latter of which introduces a new "paradigm" that solves new problems while continuing to provide solutions to the problems solved by the preceding paradigm.
Popper criticized Kuhn's demarcation criterion, saying that astrologers are engaged in puzzle solving, and that therefore Kuhn's criterion recognized astrology as a science. He stated that Kuhn's criterion results in a "major disaster ... [the] replacement of a rational criterion of science by a sociological one".
Feyerabend and Lakatos
Kuhn's work largely called into question Popper's demarcation, and emphasized the human, subjective quality of scientific change. Paul Feyerabend was concerned that the very question of demarcation was insidious: science itself had no need of a demarcation criterion, but instead some philosophers were seeking to justify a special position of authority from which science could dominate public discourse. Feyerabend argued that science is not in fact special in terms of either its logic or method, and no claim to special authority made by scientists can be sustained. He argued that, within the history of scientific practice, no rule or method can be found that has not been violated or circumvented at some point in order to advance scientific knowledge. Both Imre Lakatos and Feyerabend suggest that science is not an autonomous form of reasoning, but is inseparable from the larger body of human thought and inquiry.
Thagard
Paul R. Thagard proposed another set of principles to try to overcome these difficulties, and argued that it is important for society to find a way of doing so. According to Thagard's method, a theory is not scientific if it satisfies two conditions:
The theory has been less progressive than alternative theories over a long period of time, and has many unsolved problems; and
The community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows no concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations.
Thagard specified that sometimes theories will spend some time as merely "unpromising" before they truly deserve the title of pseudoscience. He cited astrology as an example: it was stagnant compared to advances in physics during the 17th century, and only later became "pseudoscience" with the advent of alternative explanations provided by psychology during the 19th century.
Thagard also stated that his criteria should not be interpreted so narrowly as to allow willful ignorance of alternative explanations, or so broadly as to discount our modern science compared to science of the future. His definition is a practical one, which generally seeks to distinguish pseudoscience as areas of inquiry which are stagnant and without active scientific investigation.
Some historians' perspectives
Many historians of science are concerned with the development of science from its primitive origins; consequently they define science in sufficiently broad terms to include early forms of natural knowledge. In the article on science in the eleventh edition of the Encyclopædia Britannica, the scientist and historian William Cecil Dampier Whetham defined science as "ordered knowledge of natural phenomena and of the relations between them". In his study of Greek science, Marshall Clagett defined science as "first, the orderly and systematic comprehension, description and/or explanation of natural phenomena and, secondly, the [mathematical and logical] tools necessary for the undertaking". A similar definition appeared more recently in David Pingree's study of early science: "Science is a systematic explanation of perceived or imaginary phenomena, or else is based on such an explanation. Mathematics finds a place in science only as one of the symbolical languages in which scientific explanations may be expressed." These definitions tend to emphasize the subject matter of science rather than its method and from these perspectives, the philosophical concern to establish a demarcation between science and non-science becomes "problematic, if not futile".
Laudan
Larry Laudan concluded, after examining various historical attempts to establish a demarcation criterion, that "philosophy has failed to deliver the goods" in its attempts to distinguish science from non-science—to distinguish science from pseudoscience. None of the past attempts would be accepted by a majority of philosophers nor, in his opinion, should they be accepted by them or by anyone else. He stated that many well-founded beliefs are not scientific and, conversely, many scientific conjectures are not well-founded. He also stated that demarcation criteria were historically used in polemical disputes between "scientists" and "pseudo-scientists". Advancing a number of examples from the everyday practice of football and carpentry, and from non-scientific scholarship such as literary criticism and philosophy, he considered the question of whether a belief is well-founded or not to be more practically and philosophically significant than whether it is scientific or not. In his judgment, the demarcation between science and non-science was a pseudo-problem that would best be replaced by examining the distinction between reliable and unreliable knowledge, without bothering to ask whether that knowledge is scientific or not. He would consign phrases like "pseudo-science" or "unscientific" to the rhetoric of politicians or sociologists.
After Laudan
Others have disagreed with Laudan. Sebastian Lutz, for example, argued that demarcation does not have to be a single necessary and sufficient condition as Laudan implied. Rather, Laudan's reasoning at most establishes that there has to be one necessary criterion and one possibly different sufficient criterion.
Various typologies or taxonomies of sciences versus nonsciences, and reliable knowledge versus illusory knowledge, have been proposed. Ian Hacking, Massimo Pigliucci, and others have noted that the sciences generally conform to Ludwig Wittgenstein's concept of family resemblances.
Other critics have argued for multiple demarcation criteria, some suggesting that there should be one set of criteria for the natural sciences, another set of criteria for the social sciences, and claims involving the supernatural could have a set of pseudoscientific criteria. Anthropologist Sean M. Rafferty of the University at Albany, SUNY in his text Misanthropology: Science, Pseudoscience, and the Study of Humanity contrasts science and pseudoscience within his discipline thusly: "[E]ven for those subfields where there is a significant element of interpretation, those interpretations are still based on and constrained by physical evidence. And interpretations are always provisional, pending possible refutation by contradictory evidence.... Pseudoscience, by comparison, is scornful of evidence. The pseudoscientist reaches a preferred conclusion in advance, then selects evidence, often removed from any relevant context, to lend supposed support for their conclusions. Often the preconceived conclusion is one that justifies some closely held identity or ideology. Contradictory evidence is waved away or ignored, and as a last resort, one can always claim conspiracy to keep pseudoscientific ideas suppressed."
Significance
Concerning science education, Michael D. Gordin wrote:
Discussions of the demarcation problem concern the rhetoric of science and promote critical thinking, which is important for democracy. For example, Gordin stated: "Demarcation remains essential for the enormously high political stakes of climate-change denial and other anti-regulatory fringe doctrines".
Philosopher noted:
Concern for informed human nutrition resulted in the following note in 1942:
The demarcation problem has been compared to the problem of differentiating fake news from real news, which became prominent during the 2016 United States presidential election.
See also
Boundary-work
Idealism
Relativism
Scientism
References
External links
Philosophy of science
Science studies
Religion and science
Dichotomies
Philosophical problems
Platonic epistemology | In philosophy, Plato's epistemology is a theory of knowledge developed by the Greek philosopher Plato and his followers.
Platonic epistemology holds that knowledge of Platonic Ideas is innate, so that learning is the development of ideas buried deep in the soul, often under the midwife-like guidance of an interrogator. In several dialogues by Plato, the character Socrates presents the view that each soul existed before birth with the Form of the Good and a perfect knowledge of Ideas. Thus, when an Idea is "learned" it is actually just "recalled".
Plato drew a sharp distinction between knowledge, which is certain, and mere true opinion, which is not certain. Opinions derive from the shifting world of sensation; knowledge derives from the world of timeless Forms, or essences. In The Republic, these concepts were illustrated using the metaphor of the Sun, the analogy of the divided line, and the allegory of the cave.
Platonic doctrine of recollection
The Platonic doctrine of recollection, or anamnesis, is the view that we are born possessing all knowledge and our realization of that knowledge is contingent on our discovery of it. Whether the doctrine should be taken literally or not is a subject of debate. The soul is trapped in the body. The soul was once directly acquainted with the Forms, but it is now embodied. It once knew all of the Forms, but forgot them. Recollection is the process of bringing to our attention this knowledge that we have forgotten. This doctrine implies that nothing is ever learned, it is simply recalled or remembered. In short, it says that everything we know comes pre-loaded at birth, and our senses enable us to identify and recognize this latent information in our mind. Recollection involves, on the one hand, overcoming the deceptions and distractions of the body, but, on the other hand, productively using the body's deceptions to occasion or trigger the episodes of recollection. The main texts that develop the theory of recollection are the Phaedo and Meno, although the theory also plays an important role in the Phaedrus.
Metaphor of the Sun
In the Republic (VI 507b-509c), Plato's character, Socrates, uses the Sun as a metaphor for the source of "intellectual illumination," which he held to be The Form of the Good. The metaphor is about the nature of ultimate reality and how we come to know it. It starts with the eye, which Socrates says is unusual among the sense organs in that it needs a medium, namely light, in order to operate. The strongest and best source of light is the Sun; with it, we can discern objects clearly. Analogously, for intelligible objects The Form of the Good is necessary in order to understand any particular thing. Thus, if we attempt to understand why things are as they are, and what general categories can be used to understand various particulars around us, without reference to any forms (universals) we will fail completely. In contrast, "the domain where truth and reality shine resplendent" is none other than Plato's world of forms, illuminated by the highest of the Forms, that of the Good.
The divided line
In the sixth book of the Republic, the divided line has two parts that represent the intelligible world and the smaller visible world. Each of those two parts is divided, the segments within the intelligible world represent higher and lower forms and the segments within the visible world represent ordinary visible objects and their shadows, reflections, and other representations. The line segments are unequal and their lengths represent "their comparative clearness and obscurity" and their comparative "reality and truth," as well as whether we have knowledge or instead mere opinion of the objects.
Allegory of the cave
In his best-known dialogue, The Republic, Plato drew an analogy between human sensation and the shadows that pass along the wall of a cave - an allegory known as Plato's allegory of the cave.
Charioteer myth
Along with these other allegories, Plato's charioteer myth in the Phaedrus (245c-257b) certainly also deserves mention. The ascent of the mind to celestial and trans-celestial realms is likened to a charioteer and a chariot drawn by two winged horses, one dark and one white. Figuratively represented is the famous Platonic tripartite model of the soul: the charioteer represents reason, or intellect, the dark horse appetitive passions, and the white horse irascible nature. Only by taming and controlling the two horses can the charioteer ascend to the heavens and enjoy a banquet of divine knowledge. Key epistemological features of the charioteer myth are (1) an emphasis, as with the cave allegory, upon true knowledge as ascent, (2) and the need to tame one's passionate nature to obtain true knowledge.
An example: love and wisdom
A good example of how Plato presents the acquiring of knowledge is contained in the Ladder of Love. In Symposium (210a-211b), Plato's Socrates cites the priestess Diotima as defining a "lover" as someone who loves and love as a desire for something that one does not have. According to this ladder model of love, a lover progresses from rung to rung from the basest love to the pure form of love as follows:
A beautiful body - The lover begins here at the most obvious form of love.
All beautiful bodies - If the lover examines his love and does some investigating, he/she will find that the beauty contained in this beautiful body is not original, that it is shared by every beautiful body.
Beautiful souls - After most likely attempting to have every beautiful body, the lover should realize that if a single love does not satisfy, there is no reason to think that many will satisfy. Thus, the "lover of every body" must, in the words of Plato, "bring his passion for the one into due proportion by deeming it of little or of no importance." Instead, the passion is transferred to a more appropriate object: the soul.
The beauty of laws and institutions - The next logical step is for the lover to love all beautiful souls and then to transfer that love to that which is responsible for their existence: a moderate, harmonious and just social order.
The beauty of knowledge - Once proceeding down this path, the lover will naturally long for that which produces and makes intelligible good social institutions: knowledge.
Beauty itself - This is the Form of Beauty. It is not a particular thing that is beautiful, but is instead the essence of beauty. Plato describes this level of love as a "wondrous vision," an "everlasting loveliness which neither comes nor ages, which neither flowers nor fades." It is eternal and isn't "anything that is of the flesh" nor "words" nor "knowledge" but consists "of itself and by itself in an eternal oneness, while every lovely thing partakes of it."
Knowledge concerning other things is similarly gained by progressing from a base reality (or shadow) of the thing sought (red, tall, thin, keen, etc.) to the eventual form of the thing sought, or the thing sought itself. Such steps follow the same pattern as Plato's metaphor of the sun, his allegory of the cave and his divided line; progress brings one closer and closer to reality as each step explains the relative reality of the past.
References
Epistemological theories
Platonism
Ancient Greek epistemology
Applied science | Applied science is the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena.
There are applied natural sciences, as well as applied formal and social sciences. Examples of applied science include genetic epidemiology, which applies statistics and probability theory, and applied psychology, including criminology.
Applied research
Applied research is the use of empirical methods to collect data for practical purposes. It accesses and uses accumulated theories, knowledge, methods, and techniques for a specific state, business, or client-driven purpose. In contrast to engineering, applied research does not include analyses or optimization of business, economics, and costs. Applied research can be better understood in any area when contrasting it with basic or pure research. Basic geographical research strives to create new theories and methods that aid in explaining the processes that shape the spatial structure of physical or human environments. Instead, applied research utilizes existing geographical theories and methods to comprehend and address particular empirical issues. Applied research usually has specific commercial objectives related to products, procedures, or services. The comparison of pure research and applied research provides a basic framework and direction for businesses to follow.
Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed. For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for the interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered.
Moreover, this type of research method applies natural sciences to human conditions:
Action research: aids firms in identifying workable solutions to issues influencing them.
Evaluation research: researchers examine available data to assist clients in making wise judgments.
Industrial research: create new goods/services that will satisfy the demands of a target market. (Industrial development would be scaling up production of the new goods/services for mass consumption to satisfy the economic demand of the customers while maximizing the ratio of the good/service output rate to resource input rate, the ratio of good/service revenue to material & energy costs, and the good/service quality. Industrial development would be considered engineering. Industrial development would fall outside the scope of applied research.)
Since applied research has a provisional close-to-the-problem and close-to-the-data orientation, it may also use a more provisional conceptual framework, such as working hypotheses or pillar questions. The OECD's Frascati Manual describes applied research as one of the three forms of research, along with basic research & experimental development.
Due to its practical focus, applied research information will be found in the literature associated with individual disciplines.
Branches
Applied research is a method of problem-solving and is also practical in areas of science, such as its presence in applied psychology. Applied psychology draws on observations of human behavior to identify a main focus in an area that can contribute to finding a resolution. More specifically, this study is applied in the area of criminal psychology. With the knowledge obtained from applied research, studies of criminals and their behavior are conducted to help apprehend them. Moreover, the research extends to criminal investigations. Under this category, research methods demonstrate an understanding of the scientific method and of the social research designs used in criminological research. These methods branch further into investigative procedure, alongside law, policy, and criminological theory.
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Some scientific subfields used by engineers include thermodynamics, heat transfer, fluid mechanics, statics, dynamics, mechanics of materials, kinematics, electromagnetism, materials science, earth sciences, and engineering physics.
Medical sciences, such as medical microbiology, pharmaceutical research, and clinical virology, are applied sciences that apply biology and chemistry to medicine.
In education
In Canada, the Netherlands, and other places, the Bachelor of Applied Science (BASc) is sometimes equivalent to the Bachelor of Engineering and is classified as a professional degree. This is based on the age of the school where applied science used to include boiler making, surveying, and engineering. There are also Bachelor of Applied Science degrees in Child Studies. The BASc tends to focus more on the application of the engineering sciences. In Australia and New Zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree.
In the United Kingdom's educational system, Applied Science refers to a suite of "vocational" science qualifications that run alongside "traditional" General Certificate of Secondary Education or A-Level Sciences. Applied Science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. These are an evolution of the GNVQ qualifications offered up to 2005. These courses regularly come under scrutiny and are due for review following the Wolf Report 2011; however, their merits are argued elsewhere.
In the United States, The College of William & Mary offers an undergraduate minor as well as Master of Science and Doctor of Philosophy degrees in "applied science". Courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance. University of Nebraska–Lincoln offers a Bachelor of Science in applied science, an online completion Bachelor of Science in applied science, and a Master of Applied Science. Coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. In New York City, the Bloomberg administration awarded the consortium of Cornell-Technion $100 million in City capital to construct the universities' proposed Applied Sciences campus on Roosevelt Island.
See also
Applied mathematics
Basic research
Exact sciences
Hard and soft science
Invention
Secondary research
References
External links
Branches of science
Epistemic virtue | The epistemic virtues, as identified by virtue epistemologists, reflect their contention that belief is an ethical process, and thus susceptible to intellectual virtue or vice. Some epistemic virtues have been identified by W. Jay Wood, based on research into the medieval tradition. Epistemic virtues are sometimes also called intellectual virtues.
Foundations of epistemology
The foundation for epistemic virtues is epistemology, the theory of what we know to be true according to our own perception in relation to reality. Philosophers are interested in how the mind relates to reality and in the overall nature of knowledge. Epistemology contends with skepticism by trying to establish a base from which all knowledge and science can be built up. Skepticism poses an impasse to this, because we must doubt what we know in order to determine whether what we know is indeed true.
Epistemic virtues and well-being
Virtues in general are characteristic habits or ways of relating to the world that exhibit or promote human flourishing. Epistemic virtues are those characteristic habits that promote the acquisition of and utilization of true knowledge.
There is potential tension between these two concepts because learning the truth can sometimes make a person worse off, and so remaining ignorant can arguably be the better option. An example of this would be a person being better off not knowing that their significant other is being unfaithful; some people would prefer to live in the lie because it would affect them less.
Overview
Being an epistemically virtuous person is often equated with being a critical thinker. Virtue epistemology focuses on the human agent and on the kinds of practices that make it possible to arrive at the best accessible approximation of the truth.
Epistemic virtues include conscientiousness as well as the following:
attentiveness
benevolence (principle of charity)
creativity
curiosity (see below)
discernment
honesty
humility
objectivity
parsimony
passion (rational)
studiousness/assiduity
scrutiny
understanding
warranty
wisdom
These can be contrasted to the epistemic vices such as:
curiosity (see below)
denial / wishful thinking
dishonesty
dogmatism (irrational)
epistemic blindness
folly
gullibility
hubris
laziness
passion (irrational)
obtuseness
superficiality of thought
superstition
willful naïveté
anti-intellectualism
apathy
Note that, in the vice context, curiosity bears the medieval connotation of attraction to unwholesome things, in contrast to the positive trait of studiousness (or perhaps inquisitiveness).
See also
Egocentrism
Intellectual virtue
Notes
Further reading
External links
Epistemic akrasia (irrationality) as a deficit of virtue by Christopher Hookway
Is Inclusion an Epistemic Virtue? by Harvey Siegel
Review of James Montmarquet's Epistemic Virtue and Doxastic Responsibility by Jonathan L. Kvanvig
Concepts in epistemology
Virtue
Social constructionism

Social constructionism is a term used in sociology, social ontology, and communication theory. The term can serve somewhat different functions in each field; however, the foundation of this theoretical framework suggests various facets of social reality—such as concepts, beliefs, norms, and values—are formed through continuous interactions and negotiations among society's members, rather than empirical observation of physical reality. The theory of social constructionism posits that much of what individuals perceive as 'reality' is actually the outcome of a dynamic process of construction influenced by social conventions and structures.
Unlike phenomena that are innately determined or biologically predetermined, these social constructs are collectively formulated, sustained, and shaped by the social contexts in which they exist. These constructs significantly impact both the behavior and perceptions of individuals, often being internalized based on cultural narratives, whether or not these are empirically verifiable. In this two-way process of reality construction, individuals not only interpret and assimilate information through their social relations but also contribute to shaping existing societal narratives.
Examples of social constructs range widely, encompassing the assigned value of money, conceptions of the self and self-identity, beauty standards, gender, language, race, ethnicity, social class, social hierarchy, nationality, religion, social norms, the modern calendar and other units of time, marriage, education, citizenship, stereotypes, femininity and masculinity, social institutions, and even the idea of 'social construct' itself. These constructs are not universal truths but are flexible entities that can vary dramatically across different cultures and societies. They arise from collaborative consensus and are shaped and maintained through collective human interactions, cultural practices, and shared beliefs. This articulates the view that people in society construct ideas or concepts that may not exist without people or language to validate them, meaning that without a society these constructs would cease to exist.
Overview
A social construct or construction is the meaning, notion, or connotation placed on an object or event by a society, and adopted by that society with respect to how they view or deal with the object or event.
The social construction of target populations refers to the cultural characterizations or popular images of the persons or groups whose behavior and well-being are affected by public policy.
Social constructionism posits that the meanings of phenomena do not have an independent foundation outside the mental and linguistic representation that people develop about them throughout their history, and which becomes their shared reality. From a linguistic viewpoint, social constructionism centres meaning as an internal reference within language (words refer to words, definitions to other definitions) rather than to an external reality.
Origins
In the 16th century, Michel de Montaigne wrote that, "We need to interpret interpretations more than to interpret things." In 1886 or 1887, Friedrich Nietzsche put it similarly: "Facts do not exist, only interpretations." In his 1922 book Public Opinion, Walter Lippmann said, "The real environment is altogether too big, too complex, and too fleeting for direct acquaintance" between people and their environment. Each person constructs a pseudo-environment that is a subjective, biased, and necessarily abridged mental image of the world, and to a degree, everyone's pseudo-environment is a fiction. People "live in the same world, but they think and feel in different ones." Lippmann's "environment" might be called "reality", and his "pseudo-environment" seems equivalent to what today is called "constructed reality".
Social constructionism has more recently been rooted in "symbolic interactionism" and "phenomenology". With Berger and Luckmann's The Social Construction of Reality published in 1966, this concept found its hold. More than four decades later, much theory and research pledged itself to the basic tenet that people "make their social and cultural worlds at the same time these worlds make them." It is a viewpoint that uproots social processes "simultaneously playful and serious, by which reality is both revealed and concealed, created and destroyed by our activities." It provides a substitute to the "Western intellectual tradition" where the researcher "earnestly seeks certainty in a representation of reality by means of propositions."
In social constructionist terms, "taken-for-granted realities" are cultivated from "interactions between and among social agents"; furthermore, reality is not some objective truth "waiting to be uncovered through positivist scientific inquiry." Rather, there can be "multiple realities that compete for truth and legitimacy." Social constructionism understands the "fundamental role of language and communication" and this understanding has "contributed to the linguistic turn" and more recently the "turn to discourse theory". The majority of social constructionists abide by the belief that "language does not mirror reality; rather, it constitutes [creates] it."
A broad definition of social constructionism has its supporters and critics in the organizational sciences. A constructionist approach to various organizational and managerial phenomena appears to be increasingly commonplace.
Andy Lock and Tom Strong trace some of the fundamental tenets of social constructionism back to the work of the 18th-century Italian political philosopher, rhetorician, historian, and jurist Giambattista Vico.
Berger and Luckmann credit Max Scheler as a major influence, as he created the idea of the sociology of knowledge, which influenced social construction theory.
According to Lock and Strong, other influential thinkers whose work has affected the development of social constructionism are: Edmund Husserl, Alfred Schutz, Maurice Merleau-Ponty, Martin Heidegger, Hans-Georg Gadamer, Paul Ricoeur, Jürgen Habermas, Emmanuel Levinas, Mikhail Bakhtin, Valentin Volosinov, Lev Vygotsky, George Herbert Mead, Ludwig Wittgenstein, Gregory Bateson, Harold Garfinkel, Erving Goffman, Anthony Giddens, Michel Foucault, Ken Gergen, Mary Gergen, Rom Harre, and John Shotter.
Applications
Personal construct psychology
Since its appearance in the 1950s, personal construct psychology (PCP) has mainly developed as a constructivist theory of personality and a system of transforming individual meaning-making processes, largely in therapeutic contexts. It was based around the notion of persons as scientists who form and test theories about their worlds. Therefore, it represented one of the first attempts to appreciate the constructive nature of experience and the meaning persons give to their experience. Social constructionism (SC), on the other hand, mainly developed as a form of a critique, aimed to transform the oppressing effects of the social meaning-making processes. Over the years, it has grown into a cluster of different approaches, with no single SC position. However, different approaches under the generic term of SC are loosely linked by some shared assumptions about language, knowledge, and reality.
A usual way of thinking about the relationship between PCP and SC is treating them as two separate entities that are similar in some aspects, but also very different in others. This way of conceptualizing this relationship is a logical result of the circumstantial differences of their emergence. In subsequent analyses these differences between PCP and SC were framed around several points of tension, formulated as binary oppositions: personal/social; individualist/relational; agency/structure; constructivist/constructionist. Although some of the most important issues in contemporary psychology are elaborated in these contributions, the polarized positioning also sustained the idea of a separation between PCP and SC, paving the way for only limited opportunities for dialogue between them.
Reframing the relationship between PCP and SC may be of use in both the PCP and the SC communities. On one hand, it extends and enriches SC theory and points to benefits of applying the PCP "toolkit" in constructionist therapy and research. On the other hand, the reframing contributes to PCP theory and points to new ways of addressing social construction in therapeutic conversations.
Educational psychology
Like social constructionism, social constructivism states that people work together to construct artifacts. While social constructionism focuses on the artifacts that are created through the social interactions of a group, social constructivism focuses on an individual's learning that takes place because of his or her interactions in a group.
Social constructivism has been studied by many educational psychologists, who are concerned with its implications for teaching and learning. For more on the psychological dimensions of social constructivism, see the work of Lev Vygotsky, Ernst von Glasersfeld and A. Sullivan Palincsar.
Systemic therapy
Some of the systemic models that use social constructionism include narrative therapy and solution-focused therapy.
Poverty
Max Rose and Frank R. Baumgartner (2013), in Framing the Poor: Media Coverage and U.S. Poverty Policy, 1960-2008, examine how media has framed the poor in the U.S. and how negative framing has caused a shift in government spending. Since 1960, the government has spent progressively less money on social services such as welfare. Evidence shows the media framing the poor more negatively since 1960, with increased usage of words such as "lazy" and "fraud".
Crime
Potter and Kappeler (1996), in their introduction to Constructing Crime: Perspective on Making News And Social Problems wrote, "Public opinion and crime facts demonstrate no congruence. The reality of crime in the United States has been subverted to a constructed reality as ephemeral as swamp gas."
Criminology has long focused on why and how society defines criminal behavior and crime in general. Looking at crime through a social constructionist lens, there is evidence to support the view that criminal acts are a social construct, where abnormal or deviant acts become crimes based on the views of society. Another explanation of crime as it relates to social constructionism is individual identity constructs that result in deviant behavior. If someone has constructed the identity of a "madman" or "criminal" for themselves based on a society's definition, they may feel compelled to follow that label, resulting in criminal behavior.
History and development
Berger and Luckmann
Constructionism became prominent in the U.S. with Peter L. Berger and Thomas Luckmann's 1966 book, The Social Construction of Reality. Berger and Luckmann argue that all knowledge, including the most basic, taken-for-granted common-sense knowledge of everyday reality, is derived from and maintained by social interactions. In their model, people interact on the understanding that their perceptions of everyday life are shared with others, and this common knowledge of reality is in turn reinforced by these interactions. Since this common-sense knowledge is negotiated by people, human typifications, significations and institutions come to be presented as part of an objective reality, particularly for future generations who were not involved in the original process of negotiation. For example, as parents negotiate rules for their children to follow, those rules confront the children as externally produced "givens" that they cannot change. Berger and Luckmann's social constructionism has its roots in phenomenology. It links to Heidegger and Edmund Husserl through the teaching of Alfred Schutz, who was also Berger's PhD adviser.
Narrative turn
During the 1970s and 1980s, social constructionist theory underwent a transformation as constructionist sociologists engaged with the work of Michel Foucault and others as a narrative turn in the social sciences was worked out in practice. This particularly affected the emergent sociology of science and the growing field of science and technology studies. In particular, Karin Knorr-Cetina, Bruno Latour, Barry Barnes, Steve Woolgar, and others used social constructionism to relate what science has typically characterized as objective facts to the processes of social construction. Their goal was to show that human subjectivity imposes itself on the facts taken as objective, not solely the other way around. A particularly provocative title in this line of thought is Andrew Pickering's Constructing Quarks: A Sociological History of Particle Physics. At the same time, social constructionism shaped studies of technology, especially the social construction of technology (SCOT) approach, developed by authors such as Wiebe Bijker, Trevor Pinch, and Maarten van Wesel. Despite its common perception as objective, mathematics is not immune to social constructionist accounts. Sociologists such as Sal Restivo and Randall Collins, mathematicians including Reuben Hersh and Philip J. Davis, and philosophers including Paul Ernest have published social constructionist treatments of mathematics.
Postmodernism
Within the social constructionist strand of postmodernism, the concept of socially constructed reality stresses the ongoing mass-building of worldviews by individuals in dialectical interaction with society at a given time. The numerous realities so formed comprise, according to this view, the imagined worlds of human social existence and activity. These worldviews are gradually crystallized by habit into institutions propped up by language conventions; given ongoing legitimacy by mythology, religion and philosophy; maintained by therapies and socialization; and subjectively internalized by upbringing and education. Together, these become part of the identity of social citizens.
In the book The Reality of Social Construction, the British sociologist Dave Elder-Vass places the development of social constructionism as one outcome of the legacy of postmodernism. He writes "Perhaps the most widespread and influential product of this process [coming to terms with the legacy of postmodernism] is social constructionism, which has been booming [within the domain of social theory] since the 1980s."
Criticisms
Critics argue that social constructionism rejects the influences of biology on behaviour and culture, or suggests that they are unimportant to achieve an understanding of human behaviour. Scientific estimates of nature versus nurture and gene–environment interactions have almost always shown substantial influences of both genetic and social factors, often in an inseparable manner. Claims that genetics does not affect humans are seen as outdated by most contemporary scholars of human development.
Social constructionism has also been criticized for having an overly narrow focus on society and culture as a causal factor in human behavior, excluding the influence of innate biological tendencies. This criticism has been explored by psychologists such as Steven Pinker in The Blank Slate as well as by Asian studies scholar Edward Slingerland in What Science Offers the Humanities. John Tooby and Leda Cosmides used the term standard social science model to refer to social theories that they believe fail to take into account the evolved properties of the brain.
In 1996, to illustrate what he believed to be the intellectual weaknesses of social constructionism and postmodernism, physics professor Alan Sokal submitted an article to the academic journal Social Text deliberately written to be incomprehensible but including phrases and jargon typical of the articles published by the journal. The submission, which was published, was an experiment to see if the journal would "publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions." In 1999, Sokal, with coauthor Jean Bricmont, published the book Fashionable Nonsense, which criticized postmodernism and social constructionism.
Philosopher Paul Boghossian has also written against social constructionism. He follows Ian Hacking's argument that many adopt social constructionism because of its potentially liberating stance: if things are the way that they are only because of human social conventions, as opposed to being so naturally, then it should be possible to change them into how people would rather have them be. He then states that social constructionists argue that people should refrain from making absolute judgements about what is true and instead state that something is true in the light of this or that theory. Countering this, he states:
Woolgar and Pawluch argue that constructionists tend to "ontologically gerrymander" social conditions in and out of their analysis.
Alan Sokal also criticizes social constructionism for contradicting itself on the knowability of the existence of societies. The argument is that if there were no knowable objective reality, there would be no way of knowing whether societies exist and, if so, what their rules and other characteristics are. One example of the contradiction is the claim that "phenomena must be measured by what is considered average in their respective cultures, not by an objective standard." There are languages that have no word for "average", so applying the concept of "average" to such cultures contradicts social constructionism's own claim that cultures can only be measured by their own standards. Social constructionism is a diverse field with varying stances on these matters. Some social constructionists do acknowledge the existence of an objective reality but argue that human understanding and interpretation of that reality are socially constructed. Others might contend that while the term "average" may not exist in all languages, equivalent or analogous concepts might still be applied within those cultures, thereby not completely invalidating the principle of cultural relativity in measuring phenomena.
See also
References
Further reading
Books
Boghossian, P. Fear of Knowledge: Against Relativism and Constructivism. Oxford University Press, 2006. Online review: Fear of Knowledge: Against Relativism and Constructivism
Berger, P. L. and Luckmann, T., The Social Construction of Reality: A Treatise in the Sociology of Knowledge (Anchor, 1967).
Best, J. Images of Issues: Typifying Contemporary Social Problems, New York: Gruyter, 1989
Burr, V. Social Constructionism, 2nd ed. Routledge 2003.
Ellul, J. Propaganda: The Formation of Men's Attitudes. Trans. Konrad Kellen & Jean Lerner. New York: Knopf, 1965. New York: Random House/ Vintage 1973
Ernst, P., (1998), Social Constructivism as a Philosophy of Mathematics; Albany, New York: State University of New York Press
Gergen, K., An Invitation to Social Construction. Los Angeles: Sage, 2015 (3d edition, first 1999).
Glasersfeld, E. von, Radical Constructivism: A Way of Knowing and Learning. London: RoutledgeFalmer, 1995.
Hacking, I., The Social Construction of What? Cambridge: Harvard University Press, 1999.
Hibberd, F. J., Unfolding Social Constructionism. New York: Springer, 2005.
Kukla, A., Social Constructivism and the Philosophy of Science, London: Routledge, 2000.
Lawrence, T. B. and Phillips, N. Constructing Organizational Life: How Social-Symbolic Work Shapes Selves, Organizations, and Institutions. Oxford University Press, 2019.
Lowenthal, P., & Muth, R. Constructivism. In E. F. Provenzo, Jr. (Ed.), Encyclopedia of the social and cultural foundations of education (pp. 177–179). Thousand Oaks, CA: Sage, 2008.
McNamee, S. and Gergen, K. (Eds.). Therapy as Social Construction. London: Sage, 1992.
McNamee, S. and Gergen, K. Relational Responsibility: Resources for Sustainable Dialogue. Thousand Oaks, California: Sage, 2005.
Penman, R. Reconstructing communicating. Mahwah, NJ: Lawrence Erlbaum, 2000.
Poerksen, B. The Certainty of Uncertainty: Dialogues Introducing Constructivism. Exeter: Imprint-Academic, 2004.
Restivo, S. and Croissant, J., "Social Constructionism in Science and Technology Studies" (Handbook of Constructionist Research, ed. J.A. Holstein & J.F. Gubrium). Guilford, NY, 2008, pp. 213–229.
Schmidt, S. J., Histories and Discourses: Rewriting Constructivism. Exeter: Imprint-Academic, 2007.
Searle, J., The Construction of Social Reality. New York: Free Press, 1995; .
Shotter, J. Conversational realities: Constructing life through language. Thousand Oaks, CA: Sage, 1993.
Stewart, J., Zediker, K. E., & Witteborn, S. Together: Communicating interpersonally – A social construction approach (6th ed). Los Angeles, CA: Roxbury, 2005.
Weinberg, D. Contemporary Social Constructionism: Key Themes. Philadelphia, PA: Temple University Press, 2014.
Willard, C. A., Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. Chicago: University of Chicago Press, 1996.
Wilson, D. S. (2005), "Evolutionary Social Constructivism". In J. Gottshcall and D. S. Wilson, (Eds.), The Literary Animal: Evolution and the Nature of Narrative. Evanston, IL: Northwestern University Press. Full text
Articles
Drost, Alexander. "Borders. A Narrative Turn – Reflections on Concepts, Practices and their Communication", in: Olivier Mentz and Tracey McKay (eds.), Unity in Diversity. European Perspectives on Borders and Memories, Berlin 2017, pp. 14–33.
Mallon, R, "Naturalistic Approaches to Social Construction", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.).
Shotter, J., & Gergen, K. J., Social construction: Knowledge, self, others, and continuing the conversation. In S. A. Deetz (Ed.), Communication Yearbook, 17 (pp. 3–33). Thousand Oaks, CA: Sage, 1994.
External links
Communication theory
Consensus reality
Human behavior
Human communication
Social concepts
Social epistemology
Sociology of knowledge
Sociological theories
Colloquialism

Colloquialism (also called colloquial language, everyday language, or general parlance) is the linguistic style used for casual (informal) communication. It is the most common functional style of speech, the idiom normally employed in conversation and other informal contexts. Colloquialism is characterized by wide usage of interjections and other expressive devices; it makes use of non-specialist terminology, and has a rapidly changing lexicon. It can also be distinguished by its usage of formulations with incomplete logical and syntactic ordering.
A specific instance of such language is termed a colloquialism. The most common term used in dictionaries to label such an expression is colloquial.
Explanation
Colloquialism or general parlance is distinct from formal speech or formal writing. It is the form of language that speakers typically use when they are relaxed and not especially self-conscious. An expression is labeled colloq. for "colloquial" in dictionaries when a different expression is preferred in formal usage, but this does not mean that the colloquial expression is necessarily slang or non-standard.
Some colloquial language contains a great deal of slang, but some contains no slang at all. Slang is often used in colloquial speech, but this particular register is restricted to particular in-groups, and it is not a necessary element of colloquialism. Other examples of colloquial usage in English include contractions or profanity.
"Colloquial" should also be distinguished from "non-standard". The difference between standard and non-standard is not necessarily connected to the difference between formal and colloquial. Formal, colloquial, and vulgar language are more a matter of stylistic variation and diction, rather than of the standard and non-standard dichotomy. The term "colloquial" is also equated with "non-standard" at times, in certain contexts and terminological conventions.
A colloquial name or familiar name is a name or term commonly used to identify a person or thing in non-specialist language, in place of another usually more formal or technical name.
In the philosophy of language, "colloquial language" is ordinary natural language, as distinct from specialized forms used in logic or other areas of philosophy. In the field of logical atomism, meaning is evaluated in a different way than with more formal propositions.
Distinction from other styles
Colloquialisms are distinct from slang or jargon. Slang refers to words used only by specific social groups, such as demographics based on region, age, or socio-economic identity. In contrast, jargon is most commonly used within specific occupations, industries, activities, or areas of interest. Colloquial language includes slang, along with abbreviations, contractions, idioms, turns-of-phrase, and other informal words and phrases known to most native speakers of a language or dialect.
Jargon is terminology that is explicitly defined in relationship to a specific activity, profession, or group. The term refers to the language used by people who work in a particular area or who have a common interest. Similar to slang, it is shorthand used to express ideas, people, and things that are frequently discussed between members of a group. Unlike slang, it is often developed deliberately. While a standard term may be given a more precise or unique usage amongst practitioners of relevant disciplines, it is often reported that jargon is a barrier to communication for those people unfamiliar with the respective field.
See also
Eye dialect
Oral history
Vernacular
References
External links
Colloquial Spanish – Dictionary of Colloquial Spanish.
Tractatus Logico-Philosophicus, Ludwig Wittgenstein (archived 17 May 1997)
Youth culture
Language varieties and styles
Ethical dilemma

In philosophy, an ethical dilemma, also called an ethical paradox or moral dilemma, is a situation in which two or more conflicting moral imperatives, none of which overrides the other, confront an agent. A closely related definition characterizes an ethical dilemma as a situation in which every available choice is wrong. The term is also used in a wider sense in everyday language to refer to ethical conflicts that may be resolvable, to psychologically difficult choices or to other types of difficult ethical problems.
This article concerns ethical dilemmas in the strict philosophical sense, often referred to as genuine ethical dilemmas. Various examples have been proposed but there is disagreement as to whether these constitute genuine or merely apparent ethical dilemmas. The central debate around ethical dilemmas concerns the question of whether there are any. Defenders often point to apparent examples while their opponents usually aim to show their existence contradicts very fundamental ethical principles. Ethical dilemmas come in various types. An important distinction concerns the difference between epistemic dilemmas, which give a possibly false impression to the agent of an unresolvable conflict, and actual or ontological dilemmas. There is broad agreement that there are epistemic dilemmas but the main interest in ethical dilemmas takes place on the ontological level. Traditionally, philosophers held that it is a requirement for good moral theories to be free from ethical dilemmas. But this assumption has been questioned in contemporary philosophy.
Definition
A person is in an ethical dilemma if they stand under several conflicting moral obligations and no obligation overrides the others. Two ethical requirements are conflicting if the agent can do one or the other but not both: the agent has to choose one over the other. Two conflicting ethical requirements do not override each other if they have the same strength or if there is no sufficient ethical reason to choose one over the other. Only this type of situation constitutes an ethical dilemma in the strict philosophical sense, often referred to as a genuine ethical dilemma. Other cases of ethical conflicts are resolvable and are therefore not ethical dilemmas strictly speaking. This applies to many instances of conflict of interest as well. For example, a businessman hurrying along the shore of a lake to a meeting is in an ethical conflict when he spots a drowning child close to the shore. But this conflict is not a genuine ethical dilemma since it has a clear resolution: jumping into the water to save the child significantly outweighs the importance of making it to the meeting on time. Also excluded from this definition are cases in which it is merely psychologically difficult for the agent to make a choice, for example, because of personal attachments or because the knowledge of the consequences of the different alternatives is lacking.
Ethical dilemmas are sometimes defined not in terms of conflicting obligations but in terms of not having a right course of action, of all alternatives being wrong. The two definitions are equivalent for many but not all purposes. For example, it is possible to hold that in cases of ethical dilemmas, the agent is free to choose either course of action, that either alternative is right. Such a situation still constitutes an ethical dilemma according to the first definition, since the conflicting requirements are unresolved, but not according to the second definition, since there is a right course of action.
Examples
Various examples of ethical dilemmas have been proposed but there is disagreement as to whether these constitute genuine or merely apparent ethical dilemmas. One of the oldest examples is due to Plato, who sketches a situation in which the agent has promised to return a weapon to a friend, who is likely to use it to harm someone since he is not in his right mind. In this example, the duty to keep a promise stands in conflict with the duty to prevent harm to others. It is questionable whether this case constitutes a genuine ethical dilemma since the duty to prevent harm seems to clearly outweigh the promise. Another well-known example comes from Jean-Paul Sartre, who describes the situation of one of his students during the German occupation of France. This student faced the choice of either fighting to liberate his country from the Germans or staying with and caring for his mother, for whom he was the only consolation left after the death of her other son. The conflict, in this case, is between a personal duty to his mother and the duty to his country. The novel Sophie's Choice by William Styron presents one more widely discussed example. In it, a Nazi guard forces Sophie to choose one of her children to be executed, adding that both will be executed if she refuses to choose. This case differs from the other examples, in which the conflicting duties are of different types. Such cases have been labeled symmetrical since the two duties are of the same type.
Types
Ethical dilemmas come in different types. The distinctions between these types are often important for disagreements about whether there are ethical dilemmas or not. Certain arguments for or against their existence may apply only to some types but not to other types. And only some types, if any, may constitute genuine ethical dilemmas.
Epistemic vs ontological
In epistemic ethical dilemmas, it is not clear to the agent what should be done because the agent is unable to discern which moral requirement takes precedence. Many decisions in everyday life, from a trivial choice between differently packaged cans of beans in the supermarket to life-altering career choices, involve this form of uncertainty. But unresolvable conflicts on the epistemic level can exist without there actually being unresolvable conflicts, and vice versa.
The main interest in ethical dilemmas concerns the ontological level: whether there actually are genuine dilemmas in the form of unresolvable conflicts between moral requirements, not just whether the agent believes so. The ontological level is also where most of the theoretical disagreements happen since both proponents and opponents of ethical dilemmas usually agree that there are epistemic ethical dilemmas. This distinction is sometimes used to argue against the existence of ethical dilemmas by claiming that all apparent examples are in truth epistemic in nature. In some cases, this can be shown by how the conflict is resolved once the relevant information is obtained. But there may be other cases in which the agent is unable to acquire information that would settle the issue, sometimes referred to as stable epistemic ethical dilemmas.
Self-imposed vs world-imposed
The difference between self-imposed and world-imposed ethical dilemmas concerns the source of the conflicting requirements. In the self-imposed case, the agent is responsible for the conflict. A common example in this category is making two incompatible promises, for example, to attend two events happening at distant places at the same time. In the world-imposed case, on the other hand, the agent is thrown into the dilemma without being responsible for its occurrence. The difference between these two types is relevant for moral theories. Traditionally, most philosophers held that ethical theories should be free from ethical dilemmas, that moral theories that allow or entail the existence of ethical dilemmas are flawed. In the weak sense, this prohibition is only directed at world-imposed dilemmas. This means that all dilemmas are avoided by agents who strictly follow the moral theory in question; only agents who diverge from the theory's recommendations may find themselves in ethical dilemmas. But some philosophers have argued that this requirement is too weak, that a moral theory should be able to provide guidance in any situation. This line of thought follows the intuition that how a situation came about is not relevant to how one should respond to it. For example, if the agent finds themselves in the self-imposed ethical dilemma of having to choose which promise to break, there should be some consideration of why it is right to break one promise rather than the other. Utilitarians, for example, could argue that this depends on which broken promise results in the least harm to all concerned.
Obligation vs prohibition
An obligation is an ethical requirement to act in a certain way while a prohibition is an ethical requirement to not act in a certain way. Most discussions of ethical dilemmas focus on obligation dilemmas: they involve two conflicting actions that the agent is ethically required to perform. Prohibition dilemmas, on the other hand, are situations in which no course of action is allowed. It has been argued that many arguments against ethical dilemmas are only successful in regard to obligation dilemmas but not against prohibition dilemmas.
Single-agent vs multi-agent
Ethical dilemmas involve two courses of action that are both obligatory but stand in conflict with each other: it is not possible to perform both actions. In regular single-agent cases, a single agent has both conflicting obligations. In multi-agent cases, the actions are still incompatible but the obligations concern different people. For example, two contestants engaged in a competition may each have a duty to win if that is what they promised their families. These two obligations, belonging to different people, conflict since there can be only one winner.
Other types
Ethical dilemmas can be divided according to the types of obligations that are in conflict with each other. For example, Rushworth Kidder suggests that four patterns of conflict can be discerned: "truth versus loyalty, individual versus community, short term versus long term, and justice versus virtue". These cases of conflicts between different types of duties can be contrasted with conflicts in which one type of duty conflicts with itself, for example, if there is a conflict between two long-term obligations. Such cases are often called symmetric cases. The term "problem of dirty hands" refers to another form of ethical dilemmas, which specifically concerns political leaders who find themselves faced with the choice of violating commonly accepted morality in order to bring about some greater overall good.
Existence of ethical dilemmas
The problem of the existence of ethical dilemmas concerns the question of whether there are any genuine ethical dilemmas, as opposed to, for example, merely apparent epistemic dilemmas or resolvable conflicts. The traditional position denies their existence but there are various defenders of their existence in contemporary philosophy. There are various arguments for and against both sides. Defenders of ethical dilemmas often point to apparent examples of dilemmas while their opponents usually aim to show that their existence contradicts very fundamental ethical principles. Both sides face the challenge of reconciling these contradictory intuitions.
Arguments in favor
A common way to argue in favor of ethical dilemmas is to cite concrete examples. Such examples are quite common and can include cases from everyday life, stories, or thought experiments, like Sartre's student or Sophie's Choice discussed in the section on examples. The strength of arguments based on examples rests on the intuition that these cases actually are examples of genuine ethical dilemmas. Opponents of ethical dilemmas often reject this argument based on the claim that the initial intuitions in such cases are misleading. For example, it may turn out that the proposed situation is impossible, that one choice is objectively better than the other, or that there is an additional choice that was not mentioned in the description of the example. But for the argument of the defenders to succeed, it is sufficient to have at least one genuine case. This constitutes a considerable difficulty for the opponents since they would have to show that our intuitions are mistaken not just about some of these cases but about all of them. Some opponents have responded to this difficulty by arguing that all these cases merely constitute epistemic but not genuine dilemmas, i.e. that the conflict merely seems unresolvable because of the agent's lack of knowledge. This position is often defended by utilitarians. Support for it comes from the fact that the consequences of even simple actions are often too vast for us to anticipate properly. According to this interpretation, we mistake our uncertainty about which course of action outweighs the other for the idea that the conflict is not resolvable on the ontological level. Defenders of ethical dilemmas usually agree that there are many cases of epistemic dilemmas that are resolvable but seem unresolvable. However, they deny that this claim can be generalized to apply to all examples.
The argument from moral residue is another argument in favor of ethical dilemmas. Moral residue, in this context, refers to backward-looking emotions like guilt or remorse. These emotions are due to the impression of having done something wrong, of having failed to live up to one's obligations. In some cases of moral residue, the agent is herself responsible because she made a bad choice which she regrets afterward. But in the case of an ethical dilemma, this failure is forced on the agent no matter how she decides. Going through the experience of moral residue is not just something that happens to the agent; it even seems to be the appropriate emotional response. The argument from moral residue uses this line of thought to argue in favor of ethical dilemmas by holding that the existence of ethical dilemmas is the best explanation for why moral residue in these cases is the appropriate response. Opponents can respond by arguing that the appropriate response is not guilt but regret, the difference being that regret is not dependent on the agent's previous choices. By cutting the link to the possibly dilemmatic choice, the initial argument loses its force. Another counter-argument allows that guilt is the appropriate emotional response but denies that this indicates the existence of an underlying ethical dilemma. This line of argument can be made plausible by pointing to other examples, e.g. cases in which guilt is appropriate even though no choice whatsoever was involved.
Arguments against
Some of the strongest arguments against ethical dilemmas start from very general ethical principles and try to show that these principles are incompatible with the existence of ethical dilemmas, that their existence would therefore involve a contradiction.
One such argument proceeds from the agglomeration principle and the principle that ought implies can. According to the agglomeration principle, if an agent ought to do one thing and ought to do another thing then this agent ought to do both things. According to ought implies can, if an agent ought to do both things then the agent can do both things. But if the agent can do both things, there is no conflict between the two courses of action and therefore no dilemma. Defenders of ethical dilemmas may thus need to deny either the agglomeration principle or the principle that ought implies can. Either choice is problematic since these principles are quite fundamental.
Another line of argumentation denies that there are unresolvable ethical conflicts. Such a view may accept that we have various duties, which may conflict with each other at times. But this is not problematic as long as there is always one duty that outweighs the others. It has been proposed that the different types of duties can be ordered into a hierarchy. So in cases of conflict, the higher duty would always take precedence over the lower one, for example, that telling the truth is always more important than keeping a promise. One problem with this approach is that it fails to solve symmetric cases: when two duties of the same type stand in conflict with each other. Another problem for such a position is that the weight of the different types of duties seems to be situation-specific: in some cases of conflict we should tell the truth rather than keep a promise, but in other cases the reverse is true. This is, for example, W. D. Ross's position, according to which we stand under a number of different duties and have to decide on their relative weight based on the specific situation. But without a further argument, this line of thought just begs the question against the defender of ethical dilemmas, who may simply deny the claim that all conflicts can be resolved this way.
A different type of argument proceeds from the nature of moral theories. According to various authors, it is a requirement for good moral theories that they should be action-guiding by being able to recommend what should be done in any situation. But this is not possible when ethical dilemmas are involved. So these intuitions about the nature of good moral theories indirectly support the claim that there are no ethical dilemmas.
See also
Abortion debate
Adultery
Euthanasia
Execution
Graded absolutism
Samaritan's dilemma
Situational ethics
Suicide
The Right and the Good
Trolley problem
Value theory
References
External links
A database of user-contributed moral dilemma questions
The Generalized Structure of Ethical Dilemmas
The Stanford Encyclopedia of Philosophy entry
Saxe (MIT) Do the Right Thing
Applied philosophy | Applied philosophy (philosophy from Greek: φιλοσοφία, philosophia, 'love of wisdom') is a branch of philosophy that studies philosophical problems of practical concern. The topic covers a broad spectrum of issues in environment, medicine, science, engineering, policy, law, politics, economics and education. The term was popularised in 1982 by the founding of the Society for Applied Philosophy by Brenda Almond, and its subsequent journal publication Journal of Applied Philosophy edited by Elizabeth Brake. Methods of applied philosophy are similar to other philosophical methods including questioning, dialectic, critical discussion, rational argument, systematic presentation, thought experiments and logical argumentation.
Applied philosophy is differentiated from pure philosophy primarily by dealing with specific topics of practical concern, whereas pure philosophy does not take such an object; metaphorically, it is philosophy applied to itself, exploring standard philosophical problems and philosophical objects (e.g. metaphysical properties) such as the fundamental nature of reality, epistemology and morality, among others. Applied philosophy is therefore a subsection of philosophy; broadly construed, it does not deal with topics in the purely abstract realm but takes a specific object of practical concern.
Definitions
General definition
Because the term is of recent coinage, the full scope and meaning of Applied Philosophy remains at times ambiguous and contentious, but it generally interacts with the several other general definitions of philosophy. A Companion to Applied Philosophy provides three introductory articles by Kasper Lippert-Rasmussen, David Archard and Suzanne Uniacke that outline general definitions and parameters for the field of Applied Philosophy.
In the first chapter, Lippert-Rasmussen's article “The Nature of Applied Philosophy” begins by unpacking the term “applied philosophy”, pointing out that “to apply” is a verb that takes an object: if one were doing philosophy without applying it to something, it would be grammatically or conceptually confused to say that one is doing applied philosophy. Lippert-Rasmussen provides seven conceptions of Applied Philosophy: the relevance conception, the specificity conception, the practical conception, the activist conception, the methodological conception, the empirical facts conception, and the audience conception. These definitions are specified in terms of necessary and sufficient conditions, making the different conceptions incompatible with one another. Lippert-Rasmussen stresses that applied philosophy is much larger than applied ethics; applied philosophers should therefore strive beyond merely proposing normative moral frameworks, allowing Applied Philosophy to offer metaphysical frameworks for understanding contemporary results in other sciences and disciplines.
In the third chapter of A Companion to Applied Philosophy, “The Value of Applied Philosophy”, Suzanne Uniacke outlines that applied philosophy is really a field of philosophical inquiry, differentiated from pure philosophy by the claim that the former can provide practical guidance on issues beyond the philosophical domain. Within applied philosophy there are generally two modes of focus: it can be academically focused (for an academic audience), or it can be in “out-reach mode” (for a non-academic audience). In drawing on philosophical subdisciplines such as metaphysics, epistemology and ethics, applied philosophers shape their contributions and analysis of issues of practical concern. In this intersection of philosophical theories, principles, and concepts with issues beyond the purely philosophical domain (out-reach mode), these problems may pose valuable challenges to traditionally accepted philosophies, providing a stress test, feedback or friction for principles that are so often confined within an idealistic philosophical framework.
Kasper Lippert-Rasmussen: seven conceptions
Relevance conception: Claims that philosophy is applied if and only if it is relevant to important questions of everyday life. To be clear, this conception claims that applied philosophy need not answer the important questions of everyday life, yet it needs to philosophically explore, or at least be relevant to, them. There is no requirement on what type of everyday-life questions are relevant; relevance can vary across time and audience, and some questions may be relevant to some people at one time and to others at another.
Specificity conception: Philosophy is applied if, and only if, it addresses a comparatively specific question within the branch of philosophy, e.g., metaphysics, epistemology or moral philosophy, to which it belongs. Establishes philosophical principles to then apply and explore their implications in the applied (non-philosophical) specific domains of inquiry.
Practical conception: Philosophy is applied if, and only if, it justifies an answer to comparatively specific questions within its relevant branch of philosophy about what we ought to do.
Activist conception: Philosophy is applied if, and only if, it is motivated by an ambition of having a certain causal effect on the world, whether that be to educate, elucidate or edify on a given topic, with real-world consequences thereof. As Lippert-Rasmussen points out, much of pure philosophy has quite an impact on the world and yet one would still call it pure rather than applied philosophy; the distinction of the activist conception lies in its goal. It places greater emphasis on being an educator and having a causal impact on the world, shifting the primary philosophical commitment from “knowledge and truth” to having a causal impact. This change of commitment and goals may result in a change of methods in order to realize the goal.
Methodological conception: Philosophy is applied if, and only if, it involves the use of specifically philosophical methods to explore issues outside the narrow set of philosophical problems.
Empirical facts conception: Philosophy is applied if, and only if, it is significantly informed by empirical evidence – in particular, that provided by empirical sciences. Stresses the interdisciplinary nature of applied philosophy, characterising applied philosophy as drawing on the results of empirical sciences and the evidence thereof to be sufficiently informed in contributing philosophical analysis and input.
Audience conception: Philosophy is applied if, and only if, its intended audience is non-philosophers. Although the audience conception does not always require background knowledge of the given audience, it is prudent for philosophers engaging with specific scientific disciplines to be well read on the empirical facts of the disciplines they address, reiterating the value of being empirically informed, especially in interdisciplinary studies combining philosophy with some other subject.
Applied moral philosophy
Applied moral philosophy (or applied ethics) is the branch of moral philosophy concerned with philosophical inquiry into moral issues that arise in everyday contexts and institutional design frameworks (e.g. how social institutions are structured). Applied moral philosophy involves the use of philosophical theories and methods of analysis to treat fundamentally moral problems in non-philosophical subjects, such as technology, public policy, and medicine. This includes the use of fundamental moral principles and theories to assess particular social practices, arrangements, and norms prevailing in particular societies at particular times. Some key topics in applied moral philosophy are business ethics, bioethics, feminist ethics, environmental ethics, and medical ethics. Beauchamp (1984) notes where applied moral philosophy and theoretical ethics diverge is not in their methodologies, but rather, in the content of their analysis and assessment.
Although interest in topics of applied ethics, such as civil disobedience, suicide, and free speech, can be traced all the way back to antiquity, applied moral philosophy gained mainstream popularity only recently. The history of philosophy instead shows a tradition of moral philosophy more concerned with theoretical questions, such as justifying fundamental moral principles and examining the nature of moral judgements. Applied ethics first gained mainstream popularity around 1967, as many professions such as law, medicine and engineering were profoundly affected by social issues and injustices at the time. For example, various environmental movements sparked political conversations about humanity's relationship to the natural world, which led to the development of important philosophical arguments against anthropocentrism. As awareness of these social concerns grew, so did discussions of them in academic philosophy. By the 1970s and 1980s, there was a surge in publications devoted to philosophical inquiry into subjects in applied ethics, initially directed at biomedical ethics and later business ethics.
Sub-disciplines of moral philosophy
Moral philosophy is the branch of philosophy concerned with examining the nature of right and wrong. It seeks to provide a framework for what constitutes morally right and wrong actions, and analyses issues surrounding moral principles, concepts and dilemmas. There are three main sub-disciplines of moral philosophy: meta-ethics, normative ethics and applied ethics.
Meta-ethics is the branch of moral philosophy which analyses the nature and status of ethical terms and concepts. It deals with abstract questions about the nature of morality, including whether or not morality actually exists, whether moral judgements are truth-apt (capable of being true or false), and, if they are, whether the properties of moral statements make them truth-apt in the same way that mathematical and descriptive statements are.
Normative ethics deals with the construction and justification of fundamental moral principles that ought to guide human behaviour. There are three main branches of normative ethical theories: consequentialism, deontology and virtue-based ethics. Consequentialism argues that an action is morally permissible if and only if it maximizes some intrinsic overall good. Deontological theories place rights and duties as the fundamental determinants of what we ought to do, by determining which rights and duties are justifiable constraints on behaviour. Finally, virtue-based theories argue that what one ought to do is what the ideally virtuous person would do.
Applied ethics uses philosophical methods of inquiry to address the moral permissibility of specific actions and practices in particular circumstances. However, applied ethics still requires theories and concepts found in meta-ethics and normative ethics to adequately address applied ethical problems. For example, one cannot confidently assert the moral permissibility of abortion without also assuming that there is such a thing as morally permissible actions, which is a fundamental meta-ethical question. Similarly, the moral permissibility of an action can be justified using a fundamental moral theory or principle found in normative ethics. A conception of these disciplines as such allows for significant overlap in the questions they address, along with their moral theories and ideas.
Methodologies
Applied ethics uses philosophical theories and concepts to tackle moral issues found in non-philosophical contexts. However, there is significant debate over the particular methodology that should be used when determining the moral permissibility of actions and practices during applied ethical inquiry.
One possible methodology involves the application of moral principles and theories to particular issues in applied ethics, and is known as the top-down model of philosophical analysis. Under this model, one must first determine the set of fundamental moral principles which should hold necessarily and universally, in order to apply them to particular issues in applied ethics. The next step is articulating the relevant empirical facts of a situation to better understand how these principles should be applied in that particular context, which then determines the moral permissibility of an action. There are significant issues with this model of how to resolve issues in applied moral philosophy, as it requires certainty in a definitive set of moral principles to guide human behaviour. However, there is widespread disagreement over which principles this definitive set consists of, if any, creating issues for a conception of applied ethics using the top-down model. On the other hand, the bottom-up model involves formulating intuitive responses to questions about what one ought to do in particular situations, and then developing philosophical understandings or judgements based on the intuitions one has about a case. We can then revise intuitions in light of these philosophical judgements to reach an appropriate resolution on what one ought to do in a given situation. This model faces similar problems as the previous one, where disagreements about particular judgements and intuitions require us to have some other mechanism to examine the validity of intuitive judgements.
The Reflective Equilibrium model combines the top-down and bottom-up approaches, where one should reflect on their current beliefs, and revise them in light of their general and particular moral judgements. A general belief may be rejected in light of specific situations to which it is applied when the belief recommends an action one finds morally unacceptable. A particular belief can likewise be rejected if it conflicts with general moral beliefs one takes to be plausible, and which justifies many of their other moral beliefs about what one ought to do in a given situation. An agent can then reach a state of equilibrium where the set containing their general and particular moral judgements is coherent and consistent.
Business ethics
Business ethics is the study of moral issues that arise when human beings exchange goods and services, where such exchanges are fundamental to daily existence. A major contemporary issue in business ethics is about the social responsibility of corporate executives. One theory proposed by Friedman (2008) describes the sole responsibility of a CEO (Chief Executive Officer) as profit maximization through their business abilities and knowledge. This is known as stockholder theory, which says promoting the interests of stockholders is the sole responsibility of corporate executives.
Freeman (1998) presents a competing theory of corporate social responsibility by appealing to pre-theoretical commitments about the moral significance of assessing who an action affects and how. Proponents of stakeholder theory argue that corporate executives have moral responsibilities to all stakeholders in their business operations, including consumers, employees, and communities. Thus, a business decision may maximize profits for stockholders, but it is morally permissible only if it does not conflict with the demands of other stakeholders in the company. Freeman (2008) takes a Rawlsian approach to mediate conflicts amongst stakeholders, where the right action is that which will promote the well-being of the stakeholders who are the least well-off. Other decision-making principles can also be appealed to, and an adequate stakeholder theory will be assessed according to the decision-making theory it employs to mediate between conflicting demands, the plausibility of the theory, and its ability to achieve results in particular cases.
Another key issue in business ethics questions the moral status of corporations. If corporations are the kind of thing capable of being morally evaluated, then they can be assigned moral responsibility. Otherwise, there remains a question of to whom to ascribe moral blame for morally wrong business practices. French (2009) argues that corporations are moral agents, and that their “corporate internal decision structure” can be morally evaluated, as it has the required intentionality for moral blameworthiness. Danley (1980) disagrees and says that corporations cannot be moral agents merely because they are intentional, but that other considerations, such as the ability to be punished, must obtain when assigning moral responsibility to an agent.
Bioethics
Bioethics is the study of human conduct towards the animate and inanimate natural world against a background of life sciences. It provides a disciplinary framework for a wide array of moral questions in life sciences that concern humans, the environment, and animals. There are three main sub-disciplines of bioethics: medical ethics, animal ethics, and environmental ethics.
Medical ethics
Medical ethics can be traced back to the Hippocratic Oath in 500 B.C.E., making it the oldest sub-discipline of bioethics. Medical ethics concerns itself with questions of what one ought to do in particular moral situations arising in medical contexts. There are a number of key issues in medical ethics, such as end-of-life and beginning-of-life debates, physician-patient relationships, and adequate healthcare accessibility.
The abortion debate remains one of the most widely discussed issues in medical ethics, which concerns the conditions under which an abortion is morally permissible, if any. Thomson (1971) revolutionized philosophical understanding of issues in the abortion debate by questioning the widespread belief that because a fetus is a person, it is morally wrong to kill it. She uses the violinist thought experiment to show that even if a fetus is a person, its right to life is not absolute, thereby providing non-theistic and rational justification for the moral permissibility of abortion under certain conditions. Frances Kamm (1992) takes a deontological approach in order to expand on Thomson's argument, where she argues that factors such as third-party intervention and morally responsible creation support its permissibility.
Another debate in medical ethics is about the moral permissibility of euthanasia, and under what conditions euthanasia is morally acceptable. Euthanasia is the intentional killing of another person in order to benefit them. One influential argument in favour of both voluntary active and voluntary passive euthanasia is put forth by Rachels (1975), who argues not only that the latter is permissible in cases where someone's life is no longer worth living, but also that the mere fact that active euthanasia involves killing someone while passive euthanasia involves letting them die does not make one more just than the other. He presents his argument in response to critics who argue that it is morally worse to kill someone than to merely let them die. He considers a case where a husband wants his wife to die and, in one version, does so by putting lethal poison in her wine, while in a second version he walks in on her drowning in a bathtub and lets her die. He argues that this thought experiment shows why killing someone is not always morally worse than letting them die, forcing defenders of only passive euthanasia to also commit themselves to the moral permissibility of active euthanasia, unless they can show why only the former is morally acceptable.
Environmental ethics
Environmental ethics is the discipline of applied ethics that studies the moral relationship of human beings to the environment and its non-human contents. The practical goal of environmental ethics is to provide moral grounds for social policies aimed at protecting the environment and remedying its degradation. It questions the status of the environment independently of human beings, and categorizes the different positions on its status as anthropocentrism and non-anthropocentrism. Anthropocentrism is the view that value is human-centered and all other entities are means to human ends. This bears on the question of whether the environment has intrinsic value independent of human beings. On the non-anthropocentric view that it does, and under the assumption that agents try to preserve things of value, one must ask why humanity destroys something with intrinsic value rather than preserving it.
Feminism has an important relationship to environmental ethics: as King (1989) argues, human exploitation of nature can be seen as a manifestation and extension of the oppression of women. She argues that humanity's destruction of nature results from associating nature with the feminine, where feminine agents have historically and systemically been inferiorized and oppressed by a male-dominated culture. King (1989) motivates her argument by examining the historical domination of women in society, and then argues that all other domination and hierarchies flow from this. Her argument justifies the moral wrongness of environmental degradation and human exploitation of nature not by arguing for nature's intrinsic value, but by appealing to the moral wrongness of female oppression by a male-dominated culture.
Applied political and legal philosophy
Applied political and legal philosophy applies philosophical methods and theories to the investigation and analysis of specific, concrete political and legal issues. Historically, much of the work in political and legal philosophy has pursued more general issues, such as questions about the nature of justice, ideal forms of democracy, and how to organize political and legal institutions. Applied political and legal philosophy uses the insights of political and legal philosophy to critically examine more concrete issues within the disciplines. Some examples include philosophical inquiry into family-based immigration policies, understanding the conceptual structure of civil disobedience, and discussing the bounds of prosecutorial discretion in domestic violence cases. Dempsey and Lister (2016) identify three activist approaches to applied political and legal philosophy.
Activist approaches
The standard activist approach is used when a philosopher presents arguments directed primarily at other philosophers, defending or critiquing a policy or some set of policies. If a policy maker happens to come across the argument and is sufficiently persuaded to make public policy changes supporting the philosopher's desired outcome, then the standard activist philosopher will be satisfied. However, their main goal is to articulate a sound argument in favour of their position on some policy or set of political/legal issues, regardless of whether it actually influences public policy.
Conceptual activism directs arguments primarily at other philosophers, critically analysing and clarifying some concept whose analysis may be relevant to future policy making. The goal of conceptual activists is to motivate a particular understanding of concepts which may later inform policy making. Westen's (2017) work on consent is paradigmatic of this approach: his analysis of the concept of consent unpacks confusions amongst not only philosophers and academics, but also policy makers, as to the nature and limitations of consent.
Extreme activism is when a philosopher acts as an expert consultant and presents an argument directly to policy makers in favour of some view. Although they still aim to present a sound argument about what should happen in the world, as the standard activist does, this goal is just as important as their goal to persuade policy makers in order to bring about the desired outcome of their work. Thus, the measure of success for an extreme activist consists not only of doing good philosophy, but also of their direct causal contribution to the world. However, the tension between their political and philosophical goals has potentially negative outcomes, such as wasting the time of policy makers who are not convinced by philosophical arguments, the risk of corruption for philosophers placed in this position, and the potential to undermine the value of philosophy.
Feminist political philosophy
Feminist political philosophy involves understanding and critiquing political philosophy's inattention to feminist concerns, and articulating ways for political theory to be reconstructed to further feminist aims. Feminist political philosophy has been instrumental in reorganising political institutions and practices, as well as developing new political ideals and practices which justify their reorganisation. Work in feminist political philosophy uses the various activist approaches to causally affect public policies and political institutions. For example, liberal feminist theorising, whose main concerns are protecting and enhancing women's political rights and personal autonomy, has consistently used conceptual activism to further its aims.
Applied epistemology
While epistemology—the study of knowledge and justified belief—was traditionally concerned primarily with the pursuit of truth, and took an individualistic orientation in that task, recent developments in this branch of philosophy highlight not only the social ways in which knowledge is generated, but also its practical and normative dimensions. Applied epistemology is the branch of applied philosophy that explores and addresses precisely these considerations.
For instance, even if traditional epistemology often investigates what we are justified in believing—the most paradigmatic case being the tripartite analysis of knowledge, i.e. that S knows that p if and only if p is true, S believes that p, and S is justified in believing that p—applied epistemologists have argued that such questions are equivalent to questions about what we ought to believe, stressing that epistemology is fundamentally a normative subject. Coady (2016) makes this claim by recognizing that the branch is not merely interested in how things are, but also in how they ought to be. As a result, there may be different (and preferable) methods for acquiring knowledge depending on whose values guide one's orientation in life or what goals direct one's pursuit of truth.
Social epistemology, with its focus on the social dimensions of knowledge and the ways institutions mediate its acquisition, often overlaps with and can be seen as part of applied epistemology. But the two fields cannot be equated: social epistemology has, so far, mostly been investigated through a consequentialist lens—exploring the epistemic consequences of the social institutions that generate knowledge—rather than through other normative orientations. Coady (2016) claims that social epistemology has not sufficiently addressed questions of what individuals ought to believe and how they should pursue knowledge. And while this social and consequentialist orientation has great value, applied epistemology also encompasses other normative orientations—such as deontology, utilitarianism, and virtue ethics—and explores individualistic questions of practical epistemic concern.
The potential topics of applied epistemology include feminist epistemology, the epistemology of deliberative democracy, freedom of expression and diversity, conspiracy theories, the epistemological dimensions and implications of sexual consent, and information markets, among others.
Feminist epistemology
Feminist epistemology studies how gendered practices and norms contribute to social oppression—including, but not limited to, the enforcement of heteropatriarchy, racism, ableism, and classism—and proposes ways for agents to revise them in light of this. This branch of feminist philosophy also contributes to the scope of social epistemology as it identifies several ways in which conventional knowledge practices and processes disadvantage women, such as excluding them from inquiry, denying them epistemic authority, and producing theories of women which misrepresent them to serve patriarchal interests.
Feminist epistemology is applied not only in the sense that its liberatory goals are explicitly political and thus seek a certain causal effect on the world. It is also of great practical relevance because, as Wylie (2001) stresses, effective activism requires understanding “the conditions that disadvantage women with as much empirical accuracy and explanatory power as possible.”
A demonstration of this necessity is that, since the Black and lesbian feminist theorizing of the 1980s, it is no longer effective for feminists to investigate conceptions and phenomena related to gender without the concept of intersectionality, which not only makes visible how the lived experiences of an individual and social group are shaped by their interdependent and overlapping identities, but also how their access to power and privilege is structured by them. Crenshaw (1989) coined this interpretative framework while investigating the failures of the legal courts in DeGraffenreid v General Motors (1976), Moore v Hughes Helicopter (1983), and Payne v Travenol (1976) to recognize that Black women were discriminated against on the basis of both gender and race.
Epistemology of deliberative democracy
The deliberative conception of democracy claims that public deliberation is necessary for the justification of this political system and the legitimacy of its decision-making processes. Broadly put, public deliberation refers to the open spaces where free and equal citizens share and discuss their reasons for supporting different policy proposals and societal ideas. This emphasis on public deliberation differentiates the deliberative conception from the aggregative conception of democracy, which understands the democratic process as a tool to gather and track the preferences and beliefs of the citizenry at the moment of voting. This area of applied epistemology explores the epistemic values, virtues, and vices that underlie deliberative decision-making and can be observed to emerge from it.
For instance, in the case of a referendum or an election, the Condorcet jury theorem (CJT) states that if each voter is more likely than not to be correct on the topic on which they vote (the competence condition) and each votes independently of the others (the independence condition), then not only is a majority of voters more likely to be correct than any single individual, but the probability that the majority votes for the correct outcome also increases with the number of voters.
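The binomial calculation behind the CJT can be sketched in a few lines. This is an illustration rather than anything from the literature cited here, and it assumes the idealized case of identical competence p for every voter and an odd number of voters so that ties cannot occur:

```python
from math import comb

def majority_correct_prob(p: float, n: int) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the correct outcome."""
    assert n % 2 == 1, "use an odd number of voters so ties cannot occur"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With individual competence p = 0.6, the majority's reliability
# grows with the size of the electorate, as the theorem predicts:
for n in (1, 11, 101):
    print(n, round(majority_correct_prob(0.6, n), 3))
```

With p below 0.5 the same formula yields the theorem's pessimistic flip side: the majority becomes less reliable as the group grows.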
On the one hand, the CJT provides a solid defense and an empirical argument for the importance of voting in a democracy: the procedure leads decision-making bodies to make better decisions because of more accurate epistemic inputs. On the other hand, it also generates a debate about the influence of public deliberation on the proper functioning of voting procedures. While Estlund (1989) and Waldron (1989) claim that public deliberation, through its exchange of reasons and information about the outcomes under discussion, improves the competence condition, Dietrich and Spiekermann (2013) raise the concern that if voters engage with one another too much before or while making a decision, the independence condition of the CJT is undermined and its optimistic results become distorted.
Many have also raised the ‘public ignorance’ objection to the deliberative conception of democracy, holding that most people are too ignorant for deliberative democracy to be an effective and viable practice. Talisse (2004) responds that the objection is unclear about what exactly ‘ignorance’ refers to — according to him, it conflates the states of being uninformed, misinformed, and uninterested — and that attributing culpability to those who ‘do not know’ takes responsibility away from the democratic institutions (like the media and academia) that fail them.
While social epistemology takes a closer look at the roles of institutional practices in generating, mediating, or preventing knowledge acquisition, a substantial debate in the epistemology of deliberative democracy concerns legal sanctions on speech, behavior, and freedom of expression. The contribution of J.S. Mill (1859) is frequently referenced on this issue; he argued that free speech and public deliberation help to eliminate wrong opinions, permit correct beliefs to prevail, and thereby promote truth. Censorship and strict limitations of the public sphere would prevent the different parties to a disagreement from even perceiving the truthful elements of their opponent's argument and could reinforce dogmatic tendencies in a given society. Landemore (2013) likewise argues that diversity is epistemically beneficial in deliberative democracies: the chances of reaching a correct conclusion are higher when more diverse perspectives are considered. Beyond the dimensions of diversity that social and feminist epistemologies refer to, Kappel, Hallsson, and Møller (2016) bring to the forefront of the discussion diversity of knowledge, diversity of opinion, cognitive diversity, epistemic norm diversity, and non-epistemic value diversity.
While diversity may help to neutralize biases, ‘enclave deliberations’—a communicative process among like-minded people “who talk or even live, much of the time, in isolated enclaves”—can lead to group polarization. Sunstein (2002) defines group polarization as the phenomenon in which members of a group move towards a more extreme position through deliberating with their peers than the one they held before doing so. For him, two factors explain the statistical regularity of this phenomenon. On the one hand, people do not usually deliberate with groups that hold different inclinations and predispositions on a particular topic, which greatly limits their ‘argument pool’. On the other hand, group polarization also arises from the desire of group members to be perceived favourably by their peers.
Sunstein also points to empirical evidence that diverse and heterogeneous groups tend to give less weight to the views of low-status members — who are also frequently quieter in deliberative bodies. This area of deliberative inequalities overlaps with applied political philosophy and is explored in the works of Bohman (1996) and Young (2000), among others.
Applied philosophers also propose epistemic virtues and valuable practices to cope with these epistemic vices and deliberative dysfunctions. Starting from the premise that there is nothing wrong with changing our views or being convinced by others, Peter (2013) suggests that what matters is how one navigates disagreements. For her, well-conducted deliberations are those in which participants treat each other as epistemic peers, that is, they recognize that they are as likely as their peers to make a mistake along the way. As a result, they should remain open to revising their original beliefs (especially if they realize that their arguments are not sufficiently robust) while holding themselves mutually accountable to one another.
Other topics in the epistemology of deliberative democracy include epistemic proceduralism, the value or disvalue of disagreement, epistocracy, and social integration, among others.
Applied Ontology
Applied ontology involves the application of ontology to practical pursuits. This can involve adopting ontological principles in the creation of controlled, representational vocabularies. These vocabularies, referred to as 'ontologies', can be compiled to organize scientific information in a computer-friendly format.
One of the primary uses of ontologies is improving interoperability of data systems. Data within and between organizations can sometimes be trapped within data silos. Ontologies can improve data integration by offering a representative structure which diverse data systems can link up to. By representing our knowledge about domains through classes and the relations between them, ontologies can also be used to improve information retrieval and discovery from databases.
When an ontology is limited to representing entities from a specific subject or domain, it is called a domain ontology. An upper-level ontology (or top-level ontology) represents entities at a highly general level of abstraction. The classes and relations of an upper-level ontology are applicable to many different domain ontologies. Criteria for counting as an upper-level ontology are defined by ISO/IEC 21838-1:2021. Some examples of upper-level ontologies include Basic Formal Ontology (BFO), Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), and TUpper. There are also mid-level ontologies, such as the Common Core Ontologies, which define terms used across different domains that are less general than those in upper-level ontologies, from which they extend and to which they conform.
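The layering of domain classes under shared upper-level classes can be sketched as a chain of is_a (subclass) assertions. The class names below are illustrative inventions, not drawn from BFO, DOLCE, or any published ontology; the point is only to show how two data systems that tag records with different domain classes can interoperate by mapping both into a common hierarchy:

```python
# Hypothetical domain and upper-level classes; each entry asserts
# "key is_a value" (a subclass relation).
IS_A = {
    "Car": "Vehicle",
    "Truck": "Vehicle",
    "Vehicle": "MaterialEntity",
    "DrivingProcess": "Process",
    "MaterialEntity": "Continuant",   # upper-level classes
    "Process": "Occurrent",
}

def ancestors(cls: str) -> list[str]:
    """Walk the is_a chain from a class up to the hierarchy's roots."""
    chain = []
    while cls in IS_A:
        cls = IS_A[cls]
        chain.append(cls)
    return chain

# A query for all MaterialEntity records can now retrieve data tagged
# "Car" in one system and "Truck" in another:
print(ancestors("Car"))
```

Real ontologies add far richer relations (parthood, participation, realization) and are typically serialized in OWL rather than hand-coded, but the retrieval benefit rests on this same traversal of shared class hierarchies.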
References
Evidence

Evidence for a proposition is what supports the proposition. It is usually understood as an indication that the supported proposition is true. What role evidence plays and how it is conceived varies from field to field.
In epistemology, evidence is what justifies beliefs or what makes it rational to hold a certain doxastic attitude. For example, a perceptual experience of a tree may act as evidence that justifies the belief that there is a tree. In this role, evidence is usually understood as a private mental state. Important topics in this field include the questions of what the nature of these mental states is, for example, whether they have to be propositional, and whether misleading mental states can still qualify as evidence.
In phenomenology, evidence is understood in a similar sense. Here, however, it is limited to intuitive knowledge that provides immediate access to truth and is therefore indubitable. In this role, it is supposed to provide ultimate justifications for basic philosophical principles and thus turn philosophy into a rigorous science. However, it is highly controversial whether evidence can meet these requirements.
In philosophy of science, evidence is understood as that which confirms or disconfirms scientific hypotheses. Measurements of Mercury's "anomalous" orbit, for example, are seen as evidence that confirms Einstein's theory of general relativity. In order to play the role of neutral arbiter between competing theories, it is important that scientific evidence is public and uncontroversial, like observable physical objects or events, so that the proponents of the different theories can agree on what the evidence is. This is ensured by following the scientific method and tends to lead to an emerging scientific consensus through the gradual accumulation of evidence. Two issues for the scientific conception of evidence are the problem of underdetermination, i.e. that the available evidence may support competing theories equally well, and theory-ladenness, i.e. that what some scientists consider the evidence to be may already involve various theoretical assumptions not shared by other scientists. It is often held that there are two kinds of evidence: intellectual evidence or what is self-evident and empirical evidence or evidence accessible through the senses.
Other fields, including the sciences and the law, tend to emphasize more the public nature of evidence (for example, scientists tend to focus on how the data used during statistical inference are generated).
In order for something to act as evidence for a hypothesis, it has to stand in the right relation to it. In philosophy, this is referred to as the "evidential relation" and there are competing theories about what this relation has to be like. Probabilistic approaches hold that something counts as evidence if it increases the probability of the supported hypothesis. According to hypothetico-deductivism, evidence consists in observational consequences of the hypothesis. The positive-instance approach states that an observation sentence is evidence for a universal hypothesis if the sentence describes a positive instance of this hypothesis. The evidential relation can occur in various degrees of strength. These degrees range from direct proof of the truth of a hypothesis to weak evidence that is merely consistent with the hypothesis but does not rule out other, competing hypotheses, as in circumstantial evidence. In law, rules of evidence govern the types of evidence that are admissible in a legal proceeding. Types of legal evidence include testimony, documentary evidence, and physical evidence. The parts of a legal case that are not in controversy are known, in general, as the "facts of the case." Beyond any facts that are undisputed, a judge or jury is usually tasked with being a trier of fact for the other issues of a case. Evidence and rules are used to decide questions of fact that are disputed, some of which may be determined by the legal burden of proof relevant to the case. Evidence in certain cases (e.g. capital crimes) must be more compelling than in other situations (e.g. minor civil disputes), which drastically affects the quality and quantity of evidence necessary to decide a case.
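The probabilistic approach to the evidential relation can be made concrete with Bayes' theorem: an observation E counts as evidence for a hypothesis H exactly when conditioning on E raises H's probability. The numbers below are illustrative assumptions, not from the source:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' theorem, from P(H), P(E | H), and P(E | not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
    return p_e_given_h * prior / p_e

# The observation is far more likely if H is true, so conditioning on it
# raises H's probability: E confirms H on the probabilistic account.
prior = 0.5
post = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.2)
print(post > prior)   # confirmation iff P(H | E) > P(H)
```

Swapping the two likelihoods lowers the posterior below the prior, which on this account makes the same observation disconfirming evidence instead.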
Nature of evidence
Notion
Understood in its broadest sense, evidence for a proposition is what supports this proposition. Traditionally, the term is sometimes understood in a narrower sense: as the intuitive knowledge of facts that are considered indubitable. In this sense, only the singular form is used. This meaning is found especially in phenomenology, in which evidence is elevated to one of the basic principles of philosophy, giving philosophy the ultimate justifications that are supposed to turn it into a rigorous science. In a more modern usage, the plural form is also used. In academic discourse, evidence plays a central role in epistemology and in the philosophy of science. Reference to evidence is made in many different fields, like in science, in the legal system, in history, in journalism and in everyday discourse. A variety of different attempts have been made to conceptualize the nature of evidence. These attempts often proceed by starting with intuitions from one field or in relation to one theoretical role played by evidence and go on to generalize these intuitions, leading to a universal definition of evidence.
One important intuition is that evidence is what justifies beliefs. This line of thought is usually followed in epistemology and tends to explain evidence in terms of private mental states, for example, as experiences, other beliefs or knowledge. This is closely related to the idea that how rational someone is depends on how they respond to evidence. Another intuition, which is more dominant in the philosophy of science, focuses on evidence as that which confirms scientific hypotheses and arbitrates between competing theories. On this view, it is essential that evidence is public so that different scientists can share the same evidence. This leaves publicly observable phenomena like physical objects and events as the best candidates for evidence, unlike private mental states. One problem with these approaches is that the resulting definitions of evidence, both within a field and between fields, vary a lot and are incompatible with each other. For example, it is not clear what a bloody knife and a perceptual experience have in common when both are treated as evidence in different disciplines. This suggests that there is no unitary concept corresponding to the different theoretical roles ascribed to evidence, i.e. that we do not always mean the same thing when we talk of evidence.
Characteristics
Aristotle, the phenomenologists, and numerous other scholars accept that there can be several degrees of evidence. For instance, while the outcome of a complex equation may become evident to a mathematician only after hours of deduction, and even then with some residual doubt, a simpler formula would appear more evident to them.
Riofrio has detected some characteristics that are present in evident arguments and proofs. The more they are evident, the more these characteristics will be present. There are six intrinsic characteristics of evidence:
The truth lies in what is evident, while falsehood or irrationality, although it may appear evident at times, lacks true evidence.
What is evident aligns coherently with other truths acquired through knowledge. Any insurmountable incoherence would indicate the presence of error or falsehood.
Evident truths are based on necessary reasoning.
The simplest truths are the most evident. They are self-explanatory and do not require argumentation to be understood by the intellect. However, for those lacking education, certain complex truths require rational discourse to become evident.
Evident truths do not need justification; they are indubitable. They are intuitively grasped by the intellect, without the need for further discourse, arguments, or proof.
Evident truths are clear, translucent, and filled with light.
In addition, four subjective or external characteristics can be detected over those things that are more or less evident:
The evident instills certainty and grants the knower a subjective sense of security, as they believe themselves to have grasped the truth.
Initially, evident truths are perceived as natural and effortless, as Aristotle highlighted. They are innately present within the intellect, fostering a peaceful and harmonious understanding.
Consequently, evident truths appear to be widely shared, strongly connected to common sense, which comprises generally accepted beliefs.
Evident truths are fertile ground: they provide a solid foundation for other branches of scientific knowledge to flourish.
These ten characteristics of what is evident allowed Riofrio to formulate a test of evidence to detect the level of certainty or evidence that one argument or proof could have.
Different approaches to evidence
Important theorists of evidence include Bertrand Russell, Willard Van Orman Quine, the logical positivists, Timothy Williamson, Earl Conee and Richard Feldman. Russell, Quine and the logical positivists belong to the empiricist tradition and hold that evidence consists in sense data, stimulation of one's sensory receptors and observation statements, respectively. According to Williamson, all and only knowledge constitutes evidence. Conee and Feldman hold that only one's current mental states should be considered evidence.
In epistemology
The guiding intuition within epistemology concerning the role of evidence is that it is what justifies beliefs. For example, Phoebe's auditory experience of the music justifies her belief that the speakers are on. Evidence has to be possessed by the believer in order to play this role. So Phoebe's own experiences can justify her own beliefs but not someone else's beliefs. Some philosophers hold that evidence possession is restricted to conscious mental states, for example, to sense data. This view has the implausible consequence that many simple everyday beliefs would be unjustified. The more common view is that all kinds of mental states, including stored beliefs that are currently unconscious, can act as evidence. It is sometimes argued that the possession of a mental state capable of justifying another is not sufficient for the justification to happen. The idea behind this line of thought is that a justified belief has to be connected to or grounded in the mental state acting as its evidence. So Phoebe's belief that the speakers are on is not justified by her auditory experience if the belief is not based on this experience. This would be the case, for example, if Phoebe has both the experience and the belief but is unaware of the fact that the music is produced by the speakers.
It is sometimes held that only propositional mental states can play this role, a position known as "propositionalism". A mental state is propositional if it is an attitude directed at a propositional content. Such attitudes are usually expressed by verbs like "believe" together with a that-clause, as in "Robert believes that the corner shop sells milk". Such a view denies that sensory impressions can act as evidence. This is often held as an argument against this view since sensory impressions are commonly treated as evidence. Propositionalism is sometimes combined with the view that only attitudes to true propositions can count as evidence. On this view, the belief that the corner shop sells milk only constitutes evidence for the belief that the corner shop sells dairy products if the corner shop actually sells milk. Against this position, it has been argued that evidence can be misleading but still count as evidence.
This line of thought is often combined with the idea that evidence, propositional or otherwise, determines what it is rational for us to believe. But it can be rational to have a false belief. This is the case when we possess misleading evidence. For example, it was rational for Neo in the Matrix movie to believe that he was living in the 20th century because of all the evidence supporting his belief despite the fact that this evidence was misleading since it was part of a simulated reality. This account of evidence and rationality can also be extended to other doxastic attitudes, like disbelief and suspension of belief. So rationality does not just demand that we believe something if we have decisive evidence for it, it also demands that we disbelieve something if we have decisive evidence against it and that we suspend belief if we lack decisive evidence either way.
In phenomenology
The meaning of the term "evidence" in phenomenology shows many parallels to its epistemological usage, but it is understood in a narrower sense. Thus, evidence here specifically refers to intuitive knowledge, which is described as "self-given". This contrasts with empty intentions, in which one refers to states of affairs through a certain opinion, but without an intuitive presentation. This is why evidence is often associated with the controversial thesis that it constitutes an immediate access to truth. In this sense, the evidently given phenomenon guarantees its own truth and is therefore considered indubitable. Due to this special epistemological status of evidence, it is regarded in phenomenology as the basic principle of all philosophy. In this form, it represents the lowest foundation of knowledge, which consists of indubitable insights upon which all subsequent knowledge is built. This evidence-based method is meant to make it possible for philosophy to overcome many of the traditionally unresolved disagreements and thus become a rigorous science. This far-reaching claim of phenomenology, based on absolute certainty, is one of the focal points of criticism by its opponents. Thus, it has been argued that even knowledge based on self-evident intuition is fallible. This can be seen, for example, in the fact that even among phenomenologists, there is much disagreement about the basic structures of experience.
In philosophy of science
In the sciences, evidence is understood as what confirms or disconfirms scientific hypotheses. The term "confirmation" is sometimes used synonymously with that of "evidential support". Measurements of Mercury's "anomalous" orbit, for example, are seen as evidence that confirms Einstein's theory of general relativity. Evidence is especially relevant for choosing between competing theories. So in the case above, evidence plays the role of neutral arbiter between Newton's and Einstein's theories of gravitation. This is only possible if scientific evidence is public and uncontroversial, so that proponents of competing scientific theories agree on what evidence is available. These requirements suggest that scientific evidence consists not of private mental states but of public physical objects or events.
It is often held that evidence is in some sense prior to the hypotheses it confirms. This was sometimes understood as temporal priority, i.e. that we come first to possess the evidence and later form the hypothesis through induction. But this temporal order is not always reflected in scientific practice, where experimental researchers may look for a specific piece of evidence in order to confirm or disconfirm a pre-existing hypothesis. Logical positivists, on the other hand, held that this priority is semantic in nature, i.e. that the meanings of the theoretical terms used in the hypothesis are determined by what would count as evidence for them. Counterexamples for this view come from the fact that our idea of what counts as evidence may change while the meanings of the corresponding theoretical terms remain constant. The most plausible view is that this priority is epistemic in nature, i.e. that our belief in a hypothesis is justified based on the evidence while the justification for the belief in the evidence does not depend on the hypothesis.
A central issue for the scientific conception of evidence is the problem of underdetermination, i.e. that the available evidence supports competing theories equally well. For example, evidence from everyday life about how gravity works confirms Newton's and Einstein's theories of gravitation equally well and is therefore unable to establish consensus among scientists. But in such cases, it is often the gradual accumulation of evidence that eventually leads to an emerging consensus. This evidence-driven process towards consensus seems to be one hallmark of the sciences not shared by other fields.
Another problem for the conception of evidence in terms of confirmation of hypotheses is that what some scientists consider the evidence to be may already involve various theoretical assumptions not shared by other scientists. This phenomenon is known as theory-ladenness. Some cases of theory-ladenness are relatively uncontroversial, for example, that the numbers output by a measurement device need additional assumptions about how this device works and what was measured in order to count as meaningful evidence. Other putative cases are more controversial, for example, the idea that different people or cultures perceive the world through different, incommensurable conceptual schemes, leading them to very different impressions about what is the case and what evidence is available. Theory-ladenness threatens to impede the role of evidence as neutral arbiter, since these additional assumptions may favor some theories over others. It could thereby also prevent a consensus from emerging, since the different parties may be unable to agree even on what the evidence is. When understood in the widest sense, it is not controversial that some form of theory-ladenness exists. But it is questionable whether theory-ladenness in this wide sense constitutes a serious threat to scientific evidence.
Nature of the evidential relation
Philosophers in the 20th century started to investigate the "evidential relation", the relation between evidence and the proposition supported by it. The issue of the nature of the evidential relation concerns the question of what this relation has to be like in order for one thing to justify a belief or to confirm a hypothesis. Important theories in this field include the probabilistic approach, hypothetico-deductivism and the positive-instance approach.
Probabilistic approaches, also referred to as Bayesian confirmation theory, explain the evidential relation in terms of probabilities. They hold that all that is necessary is that the existence of the evidence increases the likelihood that the hypothesis is true. This can be expressed mathematically as P(H | E) > P(H). In words: a piece of evidence (E) confirms a hypothesis (H) if the conditional probability of this hypothesis relative to the evidence is higher than the unconditional probability of the hypothesis by itself. Smoke (E), for example, is evidence that there is a fire (H), because the two usually occur together, which is why the likelihood of fire given that there is smoke is higher than the likelihood of fire by itself. On this view, evidence is akin to an indicator or a symptom of the truth of the hypothesis. Against this approach, it has been argued that it is too liberal because it allows accidental generalizations as evidence. Finding a nickel in one's pocket, for example, raises the probability of the hypothesis that "All the coins in my pockets are nickels". But, according to Alvin Goldman, it should not be considered evidence for this hypothesis since there is no lawful connection between this one nickel and the other coins in the pocket.
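The probabilistic criterion P(H | E) > P(H) can be illustrated with a short calculation. The sketch below applies Bayes' theorem to the smoke-and-fire example; all probability values are invented for illustration, and the helper name `posterior` is ours, not part of any confirmation-theoretic formalism.

```python
# Bayesian confirmation: E confirms H iff P(H | E) > P(H).
# All numbers below are illustrative assumptions, not measured frequencies.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Compute P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

p_fire = 0.01                 # prior P(H): fire is rare
p_smoke_given_fire = 0.9      # smoke almost always accompanies fire
p_smoke_given_no_fire = 0.05  # smoke without fire is uncommon

p_fire_given_smoke = posterior(p_fire, p_smoke_given_fire, p_smoke_given_no_fire)
print(p_fire_given_smoke)           # ≈ 0.154
print(p_fire_given_smoke > p_fire)  # True: smoke confirms fire
```

Observing smoke raises the probability of fire from 0.01 to roughly 0.15, so on the Bayesian criterion the smoke counts as evidence for the fire hypothesis.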
Hypothetico-deductivism is a non-probabilistic approach that characterizes the evidential relations in terms of deductive consequences of the hypothesis. According to this view, "evidence for a hypothesis is a true observational consequence of that hypothesis". One problem with the characterization so far is that hypotheses usually contain relatively little information and therefore have few if any deductive observational consequences. So the hypothesis by itself that there is a fire does not entail that smoke is observed. Instead, various auxiliary assumptions have to be included about the location of the smoke, the fire, the observer, the lighting conditions, the laws of chemistry, etc. In this way, the evidential relation becomes a three-place relation between evidence, hypothesis and auxiliary assumptions. This means that whether a thing is evidence for a hypothesis depends on the auxiliary assumptions one holds. This approach fits well with various scientific practices. For example, it is often the case that experimental scientists try to find evidence that would confirm or disconfirm a proposed theory. The hypothetico-deductive approach can be used to predict what should be observed in an experiment if the theory was true. It thereby explains the evidential relation between the experiment and the theory. One problem with this approach is that it cannot distinguish between relevant and certain irrelevant cases. So if smoke is evidence for the hypothesis "there is fire", then it is also evidence for conjunctions including this hypothesis, for example, "there is fire and Socrates was wise", despite the fact that Socrates's wisdom is irrelevant here.
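The three-place relation between hypothesis, auxiliary assumptions, and evidence described above can be sketched with a brute-force propositional entailment check. The atoms, the auxiliary assumption "fire implies smoke", and the helper `entails` are all illustrative choices of ours, not part of any standard formulation of hypothetico-deductivism.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """True if every truth assignment satisfying all premises satisfies the conclusion.
    premises and conclusion are functions from an assignment dict to bool."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

atoms = ["fire", "smoke"]
hypothesis  = lambda v: v["fire"]                       # H: there is a fire
auxiliary   = lambda v: (not v["fire"]) or v["smoke"]   # aux: fire -> smoke
observation = lambda v: v["smoke"]                      # E: smoke is observed

print(entails([hypothesis], observation, atoms))             # False: H alone does not entail E
print(entails([hypothesis, auxiliary], observation, atoms))  # True: H plus auxiliaries does
```

As the two checks show, the observational consequence follows only once the auxiliary assumption is added, which is exactly why the evidential relation here is three-place rather than two-place.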
According to the positive-instance approach, an observation sentence is evidence for a universal hypothesis if the sentence describes a positive instance of this hypothesis. For example, the observation that "this swan is white" is an instance of the universal hypothesis that "all swans are white". This approach can be given a precise formulation in first-order logic: a proposition is evidence for a hypothesis if it entails the "development of the hypothesis". Intuitively, the development of the hypothesis is what the hypothesis states if it was restricted to only the individuals mentioned in the evidence. In the case above, we have the hypothesis "∀x(S(x) → W(x))" (all swans are white) which, when restricted to the domain "{a}" containing only the one individual mentioned in the evidence, has the development "S(a) → W(a)"; this development is entailed by the evidence "S(a) ∧ W(a)" (this swan is white). One important shortcoming of this approach is that it requires that the hypothesis and the evidence are formulated in the same vocabulary, i.e. use the same predicates, like "S" or "W" above. But many scientific theories posit theoretical objects, like electrons or strings in physics, that are not directly observable and therefore cannot show up in the evidence as conceived here.
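The restricted-domain idea behind the development of a hypothesis can be sketched in a toy model. The dictionary encoding of atomic facts and the helper `development_holds` are our own illustrative choices, assuming the swan example above with predicates S (is a swan) and W (is white).

```python
# Toy model of Hempel-style positive instances.
# The evidence mentions only individual "a" and says: S(a) and W(a).
evidence = {("S", "a"): True, ("W", "a"): True}

def development_holds(evidence):
    """Check the development of 'forall x (S(x) -> W(x))' restricted to
    the individuals mentioned in the evidence: S(i) -> W(i) for each one."""
    individuals = {ind for (_, ind) in evidence}
    return all(
        (not evidence.get(("S", i), False)) or evidence.get(("W", i), False)
        for i in individuals
    )

print(development_holds(evidence))  # True: "this swan is white" is a positive instance
```

A report of a black swan, such as `{("S", "a"): True, ("W", "a"): False}`, would make the development fail and so would disconfirm the hypothesis rather than support it.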
Empirical evidence (in science)
In scientific research, evidence is accumulated through observations of phenomena that occur in the natural world, or that are created as experiments in a laboratory or under other controlled conditions. Scientists tend to focus on how the data used during statistical inference are generated. Scientific evidence usually goes towards supporting or rejecting a hypothesis.
The burden of proof is on the person making a contentious claim. Within science, this translates to the burden resting on presenters of a paper, in which the presenters argue for their specific findings. This paper is placed before a panel of judges where the presenter must defend the thesis against all challenges.
When evidence is contradictory to predicted expectations, the evidence and the ways of making it are often closely scrutinized (see experimenter's regress) and only at the end of this process is the hypothesis rejected: this can be referred to as 'refutation of the hypothesis'. The rules for evidence used by science are collected systematically in an attempt to avoid the bias inherent to anecdotal evidence.
Law
In law, the production and presentation of evidence depend first on establishing on whom the burden of proof lies. Admissible evidence is that which a court receives and considers for the purposes of deciding a particular case. Two primary burden-of-proof considerations exist in law. The first is on whom the burden rests. In many, especially Western, courts, the burden of proof is placed on the prosecution in criminal cases and the plaintiff in civil cases. The second consideration is the degree of certitude proof must reach, depending on both the quantity and quality of evidence. These degrees are different for criminal and civil cases, the former requiring evidence beyond a reasonable doubt, the latter considering only which side has the preponderance of evidence, or whether the proposition is more likely true or false. The decision-maker, often a jury but sometimes a judge, decides whether the burden of proof has been fulfilled.
After deciding who will carry the burden of proof, the evidence is first gathered and then presented before the court:
Collection
In a criminal investigation, rather than attempting to prove an abstract or hypothetical point, the evidence gatherers attempt to determine who is responsible for a criminal act. The focus of criminal evidence is to connect physical evidence and reports of witnesses to a specific person.
Presentation
The path that physical evidence takes from the scene of a crime or the arrest of a suspect to the courtroom is called the chain of custody. In a criminal case, this path must be clearly documented or attested to by those who handled the evidence. If the chain of evidence is broken, a defendant may be able to persuade the judge to declare the evidence inadmissible.
Presenting evidence before the court differs from the gathering of evidence in important ways. Gathering evidence may take many forms; presenting evidence that tends to prove or disprove the point at issue is strictly governed by rules. Failure to follow these rules leads to any number of consequences. In law, certain policies allow (or require) evidence to be excluded from consideration based either on indicia relating to reliability, or broader social concerns. Testimony (which tells) and exhibits (which show) are the two main categories of evidence presented at a trial or hearing. In the United States, evidence in federal court is admitted or excluded under the Federal Rules of Evidence.
Burden of proof
The burden of proof is the obligation of a party in an argument or dispute to provide sufficient evidence to shift the other party's or a third party's belief from their initial position. The burden of proof must be fulfilled by both establishing confirming evidence and negating oppositional evidence. Conclusions drawn from evidence may be subject to criticism based on a perceived failure to fulfill the burden of proof.
Two principal considerations are:
On whom does the burden of proof rest?
To what degree of certitude must the assertion be supported?
The latter question depends on the nature of the point under contention and determines the quantity and quality of evidence required to meet the burden of proof.
In a criminal trial in the United States, for example, the prosecution carries the burden of proof since the defendant is presumed innocent until proven guilty beyond a reasonable doubt. Similarly, in most civil procedures, the plaintiff carries the burden of proof and must convince a judge or jury that the preponderance of the evidence is on their side. Other legal standards of proof include "reasonable suspicion", "probable cause" (as for arrest), "prima facie evidence", "credible evidence", "substantial evidence", and "clear and convincing evidence".
In a philosophical debate, there is an implicit burden of proof on the party asserting a claim, since the default position is generally one of neutrality or unbelief. Each party in a debate will therefore carry the burden of proof for any assertion they make in the argument, although some assertions may be granted by the other party without further evidence. If the debate is set up as a resolution to be supported by one side and refuted by another, the overall burden of proof is on the side supporting the resolution.
Specific types of evidence
Digital evidence
Personal experience
Physical evidence
Relationship evidence
Scientific evidence
Testimonial evidence
Trace evidence
See also
Argument
Belief
Best practice
Empiricism
Evidence packaging
Evidence-based assessment
Evidence-based conservation
Evidence-based dentistry
Evidence-based design
Evidence-based education
Evidence-based legislation
Evidence-based library and information practice
Evidence-based management
Evidence-based medical ethics
Evidence-based medicine
Evidence-based nursing
Evidence-based pharmacy in developing countries
Evidence-based policing
Evidence-based policy
Evidence-based practice
Evidence-based prosecution
Evidence-based toxicology
Falsifiability
Hierarchy of evidence
Logical positivism
Mathematical proof
National Registry of Evidence-Based Programs and Practices
Policy-based evidence making
Proof (truth)
Reason
Skepticism
Theory of justification
Validity (logic)
References
External links
ASTM E141 Standard Practice for Acceptance of Evidence Based on the Results of Probability Sampling
Concepts in epistemology
Nihilism

In philosophy, nihilism is any viewpoint, or a family of views, that rejects generally accepted or fundamental aspects of human existence, namely knowledge, morality, or meaning. There have been different nihilist positions, including that human values are baseless, that life is meaningless, that knowledge is impossible, or that some other highly regarded concepts are in fact meaningless or pointless. The term was popularized by Ivan Turgenev and more specifically by his character Bazarov in the novel Fathers and Sons.
Scholars of nihilism may regard it as merely a label that has been applied to various separate philosophies, or as a distinct historical concept arising out of nominalism, skepticism, and philosophical pessimism, as well as possibly out of Christianity itself. Contemporary understanding of the idea stems largely from the Nietzschean 'crisis of nihilism', from which derive the two central concepts: the destruction of higher values and the opposition to the affirmation of life. Definitions by philosophers such as Crosby (1998) and Deleuze (1962) focus on extreme critiques of nihilism like those asserted by Nietzsche. Earlier forms of nihilism, however, may be more selective in negating specific hegemonies of social, moral, political and aesthetic thought.
The term is sometimes used in association with anomie to explain the general mood of despair at a perceived pointlessness of existence or arbitrariness of human principles and social institutions. Nihilism has also been described as conspicuous in or constitutive of certain historical periods. For example, Jean Baudrillard and others have characterized postmodernity as a nihilistic epoch or mode of thought. Likewise, some theologians and religious figures have stated that postmodernity and many aspects of modernity represent nihilism by a negation of religious principles. Nihilism has, however, been widely ascribed to both religious and irreligious viewpoints.
In popular use, the term commonly refers to forms of existential nihilism, according to which life is without intrinsic value, meaning, or purpose. Other prominent positions within nihilism include the rejection of all normative and ethical views, the rejection of all social and political institutions, the stance that no knowledge can or does exist, and a number of metaphysical positions, which assert that non-abstract objects do not exist, that composite objects do not exist, or even that life itself does not exist.
Etymology, terminology and definition
The etymological origin of nihilism is the Latin root word nihil, meaning 'nothing', which is similarly found in the related terms annihilate, meaning 'to bring to nothing', and nihility, meaning 'nothingness'. The term nihilism emerged in several places in Europe during the 18th century, notably in the German form Nihilismus, though the term was also in use during the Middle Ages to denote certain forms of heresy. The concept itself first took shape within Russian and German philosophy, which respectively represented the two major currents of discourse on nihilism prior to the 20th century. The term likely entered English from either the German Nihilismus, Late Latin nihilismus, or French nihilisme.
Early examples of the term's use are found in German publications. In 1733, the German writer Friedrich Leberecht Goetz used it as a literary term in combination with noism. In the period surrounding the French Revolution, the term was also a pejorative for certain value-destructive trends of modernity, namely the negation of Christianity and European tradition in general. Nihilism first entered philosophical study within a discourse surrounding Kantian and post-Kantian philosophies, notably appearing in the writings of the Swiss esotericist Jacob Hermann Obereit in 1787 and the German philosopher Friedrich Heinrich Jacobi in 1799. As early as 1824, the term began to take on a social connotation, with the German journalist Joseph von Görres attributing it to a negation of existing social and political institutions. The Russian form of the word, nigilizm (нигилизм), entered publication in 1829 when Nikolai Nadezhdin used it synonymously with skepticism. In Russian journalism the word continued to have significant social connotations.
From the time of Jacobi, the term almost fell completely out of use throughout Europe until it was revived by the Russian author Ivan Turgenev, who brought the word into popular use with his 1862 novel Fathers and Sons, leading many scholars to believe he coined the term. The nihilist characters of the novel define themselves as those who "deny everything", who do "not take any principle on faith, whatever reverence that principle may be enshrined in", and who hold that "at the present time, negation is the most useful of all". Despite Turgenev's own anti-nihilistic leanings, many of his readers likewise took up the name of nihilist, thus giving the Russian nihilist movement its name. Nihilism was further discussed by the German philosopher Friedrich Nietzsche, who used the term to describe the Western world's disintegration of traditional morality. For Nietzsche, nihilism applied both to the modern trends of value-destruction expressed in the 'death of God' and to what he saw as the life-denying morality of Christianity. Under Nietzsche's profound influence, the term was then further treated within French philosophy and continental philosophy more broadly, while the influence of nihilism in Russia arguably continued well into the Soviet era.
Religious scholars such as Altizer have stated that nihilism must necessarily be understood in relation to religion, and that the study of core elements of its character requires fundamentally theological consideration.
History
Buddhism
The concept of nihilism was discussed by the Buddha (563 BC to 483 BC), as recorded in the Theravada and Mahayana Tripiṭaka. The Tripiṭaka, originally written in Pali, refers to nihilism as natthikavāda and the nihilist view as micchādiṭṭhi. Various sutras within it describe a multiplicity of views held by different sects of ascetics while the Buddha was alive, some of which were viewed by him to be morally nihilistic. In the "Doctrine of Nihilism" in the Apannaka Sutta, the Buddha describes moral nihilists as holding the following views:
The act of giving produces no beneficial results;
Good and bad actions produce no results;
After death, beings are not reborn into the present world or into another world;
There is no one in the world who, through direct knowledge, can confirm that beings are reborn into this world or into another world.
The Buddha further states that those who hold these views will fail to see the virtue in good mental, verbal, and bodily conduct and the corresponding dangers in misconduct, and will therefore tend towards the latter.
Nirvana and nihilism
The culmination of the path that the Buddha taught was nirvana, "a place of nothingness...nonpossession and...non-attachment...[which is] the total end of death and decay." Ajahn Amaro, an ordained Buddhist monk of more than 40 years, observes that in English nothingness can sound like nihilism. However, the word could be emphasized in a different way, so that it becomes no-thingness, indicating that nirvana is not a thing you can find, but rather a state where you experience the reality of non-grasping.
In the Alagaddupama Sutta, the Buddha describes how some individuals feared his teaching because they believe that their self would be destroyed if they followed it. He describes this as an anxiety caused by the false belief in an unchanging, everlasting self. All things are subject to change and taking any impermanent phenomena to be a self causes suffering. Nonetheless, his critics called him a nihilist who teaches the annihilation and extermination of an existing being. The Buddha's response was that he only teaches the cessation of suffering. When an individual has given up craving and the conceit of 'I am' their mind is liberated, they no longer come into any state of 'being' and are no longer born again.
The Aggi-Vacchagotta Sutta records a conversation between the Buddha and an individual named Vaccha that further elaborates on this. In the sutta, Vaccha asks the Buddha to confirm one of the following, with respect to the existence of the Buddha after death:
After death a Buddha reappears somewhere else;
After death a Buddha does not reappear;
After death a Buddha both does and does not reappear;
After death a Buddha neither does nor does not reappear.
To all four questions, the Buddha answers that the terms "reappears somewhere else," "does not reappear," "both does and does not reappear," and "neither does nor does not reappear," do not apply. When Vaccha expresses puzzlement, the Buddha asks Vaccha a counter question to the effect of: if a fire were to go out and someone were to ask you whether the fire went north, south, east or west, how would you reply? Vaccha replies that the question does not apply and that an extinguished fire can only be classified as 'out'.
Ṭhānissaro Bhikkhu elaborates on the classification problem around the words 'reappear,' etc. with respect to the Buddha and Nirvana by stating that a "Person who has attained the goal [nirvana] is thus indescribable because [they have] abandoned all things by which [they] could be described." The Suttas themselves describe the liberated mind as 'untraceable' or as 'consciousness without feature', making no distinction between the mind of a liberated being that is alive and the mind of one that is no longer alive.
Despite the Buddha's explanations to the contrary, Buddhist practitioners may, at times, still approach Buddhism in a nihilistic manner. Ajahn Amaro illustrates this by retelling the story of a Buddhist monk, Ajahn Sumedho, who in his early years took a nihilistic approach to Nirvana. A distinct feature of Nirvana in Buddhism is that an individual attaining it is no longer subject to rebirth. Ajahn Sumedho, during a conversation with his teacher Ajahn Chah, comments that he is "Determined above all things to fully realize Nirvana in this lifetime...deeply weary of the human condition and...[is] determined not to be born again." To this, Ajahn Chah replies: "What about the rest of us, Sumedho? Don't you care about those who'll be left behind?" Ajahn Amaro comments that Ajahn Chah could detect that his student had a nihilistic aversion to life rather than true detachment.
Jacobi
The term nihilism was first introduced to philosophy by Friedrich Heinrich Jacobi (1743–1819), who used the term to characterize rationalism, and in particular Spinoza's determinism and the Aufklärung, in order to carry out a reductio ad absurdum according to which all rationalism (philosophy as criticism) reduces to nihilism, and thus it should be avoided and replaced with a return to some type of faith and revelation. Bret W. Davis writes, for example:

The first philosophical development of the idea of nihilism is generally ascribed to Friedrich Jacobi, who in a famous letter criticized Fichte's idealism as falling into nihilism. According to Jacobi, Fichte's absolutization of the ego (the 'absolute I' that posits the 'not-I') is an inflation of subjectivity that denies the absolute transcendence of God.

A related but oppositional concept is fideism, which sees reason as hostile and inferior to faith.
Kierkegaard
Søren Kierkegaard (1813–1855) posited an early form of nihilism, which he referred to as leveling. He saw leveling as the process of suppressing individuality to a point where an individual's uniqueness becomes non-existent and nothing meaningful in one's existence can be affirmed.
Kierkegaard, an advocate of a philosophy of life, generally argued against levelling and its nihilistic consequences, although he believed it would be "genuinely educative to live in the age of levelling [because] people will be forced to face the judgement of [levelling] alone." George Cotkin asserts Kierkegaard was against "the standardization and levelling of belief, both spiritual and political, in the nineteenth century," and that Kierkegaard "opposed tendencies in mass culture to reduce the individual to a cipher of conformity and deference to the dominant opinion." In his day, tabloids (like the Danish magazine Corsaren) and apostate Christianity were instruments of levelling and contributed to the "reflective apathetic age" of 19th-century Europe. Kierkegaard argues that individuals who can overcome the levelling process are stronger for it, and that it represents a step in the right direction towards "becoming a true self." As we must overcome levelling, Hubert Dreyfus and Jane Rubin argue that Kierkegaard's interest, "in an increasingly nihilistic age, is in how we can recover the sense that our lives are meaningful."
Russian nihilism
From 1860 to 1917, Russian nihilism was both a nascent form of nihilism and a broad cultural movement which overlapped with certain revolutionary tendencies of the era, for which it was often wrongly characterized as a form of political terrorism. Russian nihilism centered on the dissolution of existing values and ideals, incorporating theories of hard determinism, atheism, materialism, positivism, and rational egoism, while rejecting metaphysics, sentimentalism, and aestheticism. Leading philosophers of this school of thought included Nikolay Chernyshevsky and Dmitry Pisarev.
The intellectual origins of the Russian nihilist movement can be traced back to 1855 and perhaps earlier, where it was principally a philosophy of extreme moral and epistemological skepticism. However, it was not until 1862 that the name nihilism was first popularized, when Ivan Turgenev used the term in his celebrated novel Fathers and Sons to describe the disillusionment of the younger generation towards both the progressives and traditionalists that came before them, as well as its manifestation in the view that negation and value-destruction were most necessary to the present conditions. The movement very soon adopted the name, despite the novel's initial harsh reception among both the conservatives and younger generation.
Though philosophically both nihilistic and skeptical, Russian nihilism did not unilaterally negate ethics and knowledge as may be assumed, nor did it espouse meaninglessness unequivocally. Even so, contemporary scholarship has challenged the equating of Russian nihilism with mere skepticism, instead identifying it as a fundamentally Promethean movement. As passionate advocates of negation, the nihilists sought to liberate the Promethean might of the Russian people, which they saw embodied in a class of prototypal individuals, or new types in their own words. These individuals, according to Pisarev, in freeing themselves from all authority become exempt from moral authority as well, and are distinguished above the rabble or common masses.
Later interpretations of nihilism were heavily influenced by works of anti-nihilistic literature, such as those of Fyodor Dostoevsky, which arose in response to Russian nihilism. "In contrast to the corrupted nihilists [of the real world], who tried to numb their nihilistic sensitivity and forget themselves through self-indulgence, Dostoevsky's figures voluntarily leap into nihilism and try to be themselves within its boundaries," writes the contemporary scholar Nishitani. "The nihility expressed in , or , provides a principle whose sincerity they try to live out to the end. They search for and experiment with ways for the self to justify itself after God has disappeared."
Nietzsche
Nihilism is often associated with the German philosopher Friedrich Nietzsche, who provided a detailed diagnosis of nihilism as a widespread phenomenon of Western culture. Though the notion appears frequently throughout Nietzsche's work, he uses the term in a variety of ways, with different meanings and connotations.
With regard to the development of Nietzsche's thought, research has noted that although he dealt with "nihilistic" themes from 1869 onwards ("pessimism, with nirvana and with nothingness and non-being"), a conceptual use of nihilism first occurred in handwritten notes in the middle of 1880 (KSA 9.127-128). This was the time of a then-popular scientific work that reconstructed the so-called "Russian nihilism" on the basis of Russian newspaper reports on nihilistic incidents (N. Karlowitsch: Die Entwicklung des Nihilismus. Berlin 1880). This collection of material, published in three editions, was not only known to a broad German readership; its influence on Nietzsche can also be demonstrated.
Karen L. Carr describes Nietzsche's characterization of nihilism as "a condition of tension, as a disproportion between what we want to value (or need) and how the world appears to operate." When we find out that the world does not possess the objective value or meaning that we want it to have or have long since believed it to have, we find ourselves in a crisis. Nietzsche asserts that with the decline of Christianity and the rise of physiological decadence, nihilism is in fact characteristic of the modern age, though he implies that the rise of nihilism is still incomplete and that it has yet to be overcome. Though the problem of nihilism becomes especially explicit in Nietzsche's notebooks (published posthumously), it is mentioned repeatedly in his published works and is closely connected to many of the problems mentioned there.
Nietzsche characterized nihilism as emptying the world, and especially human existence, of meaning, purpose, comprehensible truth, or essential value. This observation stems in part from Nietzsche's perspectivism, or his notion that "knowledge" is always by someone of some thing: it is always bound by perspective, and it is never mere fact. Rather, there are interpretations through which we understand the world and give it meaning. Interpreting is something we cannot do without; in fact, it is a condition of subjectivity. One way of interpreting the world is through morality, as one of the fundamental ways that people make sense of the world, especially in regard to their own thoughts and actions. Nietzsche distinguishes a morality that is strong or healthy, meaning that the person in question is aware that he constructs it himself, from weak morality, where the interpretation is projected onto something external.
Nietzsche discusses Christianity, one of the major topics in his work, at length in the context of the problem of nihilism in his notebooks, in a chapter entitled "European Nihilism". Here he states that the Christian moral doctrine provides people with intrinsic value, belief in God (which justifies the evil in the world) and a basis for objective knowledge. In this sense, in constructing a world where objective knowledge is possible, Christianity is an antidote to a primal form of nihilism, against the despair of meaninglessness. However, it is exactly the element of truthfulness in Christian doctrine that is its undoing: in its drive towards truth, Christianity eventually finds itself to be a construct, which leads to its own dissolution. This is why Nietzsche states that we have outgrown Christianity "not because we lived too far from it, rather because we lived too close." As such, the self-dissolution of Christianity constitutes yet another form of nihilism. Because Christianity was an interpretation that posited itself as the interpretation, Nietzsche states that this dissolution leads beyond skepticism to a distrust of all meaning.
Stanley Rosen identifies Nietzsche's concept of nihilism with a situation of meaninglessness, in which "everything is permitted." According to him, the loss of higher metaphysical values that exist in contrast to the base reality of the world, or merely human ideas, gives rise to the idea that all human ideas are therefore valueless. Rejecting idealism thus results in nihilism, because only similarly transcendent ideals live up to the previous standards that the nihilist still implicitly holds. The inability of Christianity to serve as a source of value for the world is reflected in Nietzsche's famous aphorism of the madman in The Gay Science. The death of God, in particular the statement that "we killed him", is similar to the self-dissolution of Christian doctrine: due to the advances of the sciences, which for Nietzsche show that man is the product of evolution, that Earth has no special place among the stars and that history is not progressive, the Christian notion of God can no longer serve as a basis for a morality.
One such reaction to the loss of meaning is what Nietzsche calls passive nihilism, which he recognizes in the pessimistic philosophy of Schopenhauer. Schopenhauer's doctrine, which Nietzsche also refers to as Western Buddhism, advocates separating oneself from will and desires in order to reduce suffering. Nietzsche characterizes this attitude as a "will to nothingness", whereby life turns away from itself, as there is nothing of value to be found in the world. This turning away from all value in the world is characteristic of the nihilist, although in this the nihilist appears inconsistent: the "will to nothingness" is still a form of valuation or willing. He describes this as "an inconsistency on the part of the nihilists".
Nietzsche's relation to the problem of nihilism is a complex one. He approaches the problem of nihilism as deeply personal, stating that this predicament of the modern world is a problem that has "become conscious" in him. According to Nietzsche, it is only when nihilism is overcome that a culture can have a true foundation upon which to thrive. He wished to hasten its coming only so that he could also hasten its ultimate departure.
He states that there is at least the possibility of another type of nihilist in the wake of Christianity's self-dissolution, one that does not stop after the destruction of all value and meaning and succumb to the nothingness that follows. This alternative, 'active' nihilism destroys in order to level the field for constructing something new. This form of nihilism is characterized by Nietzsche as "a sign of strength," a willful destruction of the old values to wipe the slate clean and lay down one's own beliefs and interpretations, contrary to the passive nihilism that resigns itself to the decomposition of the old values. This willful destruction of values and overcoming of the condition of nihilism by constructing new meaning, this active nihilism, could be related to what Nietzsche elsewhere calls a free spirit or the Übermensch from Thus Spoke Zarathustra and The Antichrist, the model of the strong individual who posits his own values and lives his life as if it were his own work of art. It may be questioned, though, whether "active nihilism" is indeed the correct term for this stance, and some question whether Nietzsche takes the problems nihilism poses seriously enough.
Heideggerian interpretation of Nietzsche
Martin Heidegger's interpretation of Nietzsche influenced many postmodern thinkers who investigated the problem of nihilism as put forward by Nietzsche. Only recently has Heidegger's influence on Nietzschean nihilism research faded. As early as the 1930s, Heidegger was giving lectures on Nietzsche's thought. Given the importance of Nietzsche's contribution to the topic of nihilism, Heidegger's influential interpretation of Nietzsche is important for the historical development of the term nihilism.
Heidegger's method of researching and teaching Nietzsche is explicitly his own. He does not specifically try to present Nietzsche as Nietzsche. He rather tries to incorporate Nietzsche's thoughts into his own philosophical system of Being, Time and Dasein. In his Nihilism as Determined by the History of Being (1944–46), Heidegger tries to understand Nietzsche's nihilism as trying to achieve a victory through the devaluation of the hitherto highest values. The principle of this devaluation is, according to Heidegger, the will to power. The will to power is also the principle of every earlier valuation of values. How does this devaluation occur, and why is it nihilistic? One of Heidegger's main critiques of philosophy is that philosophy, and more specifically metaphysics, has forgotten to discriminate between investigating the notion of a being (seiende) and Being (Sein). According to Heidegger, the history of Western thought can be seen as the history of metaphysics. Moreover, because metaphysics has forgotten to ask about the notion of Being (what Heidegger calls Seinsvergessenheit), it is a history about the destruction of Being. That is why Heidegger calls metaphysics nihilistic. This makes Nietzsche's metaphysics not a victory over nihilism, but a perfection of it.
Heidegger, in his interpretation of Nietzsche, has been inspired by Ernst Jünger. Many references to Jünger can be found in Heidegger's lectures on Nietzsche. For example, in a letter to the rector of Freiburg University of November 4, 1945, Heidegger, inspired by Jünger, tries to explain the notion of "God is dead" as the "reality of the Will to Power." Heidegger also praises Jünger for defending Nietzsche against a too biological or anthropological reading during the Nazi era.
Heidegger's interpretation of Nietzsche influenced a number of important postmodernist thinkers. Gianni Vattimo points to a back-and-forth movement in European thought between Nietzsche and Heidegger. During the 1960s, a Nietzschean 'renaissance' began, culminating in the work of Mazzino Montinari and Giorgio Colli. They began work on a new and complete edition of Nietzsche's collected works, making Nietzsche more accessible for scholarly research. Vattimo explains that with this new edition of Colli and Montinari, a critical reception of Heidegger's interpretation of Nietzsche began to take shape. Like other contemporary French and Italian philosophers, Vattimo does not want, or only partially wants, to rely on Heidegger for understanding Nietzsche. On the other hand, Vattimo judges Heidegger's intentions authentic enough to keep pursuing them. Philosophers whom Vattimo cites as part of this back-and-forth movement are the French philosophers Deleuze, Foucault and Derrida, and the Italian philosophers Cacciari, Severino and himself. Jürgen Habermas, Jean-François Lyotard and Richard Rorty are also philosophers influenced by Heidegger's interpretation of Nietzsche.
Deleuzean interpretation of Nietzsche
Gilles Deleuze's interpretation of Nietzsche's concept of nihilism is different from—in some sense diametrically opposed to—the usual definition (as outlined in the rest of this article). Nihilism is one of the main topics of Deleuze's early book Nietzsche and Philosophy (1962). There, Deleuze repeatedly interprets Nietzsche's nihilism as "the enterprise of denying life and depreciating existence". Nihilism thus defined is therefore not the denial of higher values, or the denial of meaning, but rather the depreciation of life in the name of such higher values or meaning. Deleuze therefore (with, he claims, Nietzsche) says that Christianity and Platonism, and with them the whole of metaphysics, are intrinsically nihilist.
Postmodernism
Postmodern and poststructuralist thought has questioned the very grounds on which Western cultures have based their 'truths': absolute knowledge and meaning, a 'decentralization' of authorship, the accumulation of positive knowledge, historical progress, and certain ideals and practices of humanism and the Enlightenment.
Derrida
Jacques Derrida, whose deconstruction is perhaps most commonly labeled nihilistic, did not himself make the nihilistic move that others have claimed. Derridean deconstructionists argue that this approach rather frees texts, individuals or organizations from a restrictive truth, and that deconstruction opens up the possibility of other ways of being. Gayatri Chakravorty Spivak, for example, uses deconstruction to create an ethics of opening up Western scholarship to the voice of the subaltern and to philosophies outside of the canon of western texts. Derrida himself built a philosophy based upon a 'responsibility to the other'. Deconstruction can thus be seen not as a denial of truth, but as a denial of our ability to know truth. That is to say, it makes an epistemological claim, compared to nihilism's ontological claim.
Lyotard
Lyotard argues that, rather than relying on an objective truth or method to prove their claims, philosophers legitimize their truths by reference to a story about the world that cannot be separated from the age and system the stories belong to—referred to by Lyotard as meta-narratives. He then goes on to define the postmodern condition as characterized by a rejection both of these meta-narratives and of the process of legitimation by meta-narratives. This concept of the instability of truth and meaning leads in the direction of nihilism, though Lyotard stops short of embracing the latter.
In lieu of meta-narratives we have created new language-games in order to legitimize our claims which rely on changing relationships and mutable truths, none of which is privileged over the other to speak to ultimate truth.
Baudrillard
Postmodern theorist Jean Baudrillard wrote briefly of nihilism from the postmodern viewpoint in Simulacra and Simulation. He focused mainly on interpretations of the real world over and against the simulations of which the real world is composed. The uses of meaning were an important subject in Baudrillard's discussion of nihilism.
Positions
Since the 19th century, nihilism has encompassed a range of positions within various fields of philosophy. Each of these, as the Encyclopædia Britannica states, "denied the existence of genuine moral truths or values, rejected the possibility of knowledge or communication, and asserted the ultimate meaninglessness or purposelessness of life or of the universe".
Cosmic nihilism is the position that reality or the cosmos is either wholly or significantly unintelligible and that it provides no foundation for human aims and principles. Particularly, it may regard the cosmos as distinctly hostile or indifferent to humanity. It is often related to both epistemological and existential nihilism, as well as cosmicism.
Epistemological nihilism is a form of philosophical skepticism according to which knowledge does not exist, or, if it does exist, it is unattainable for human beings. It should not be confused with epistemological fallibilism, according to which all knowledge is uncertain.
Existential nihilism is the position that life has no intrinsic meaning or value. With respect to the universe, existential nihilism posits that a single human or even the entire human species is insignificant, without purpose, and unlikely to change in the totality of existence. The meaninglessness of life is largely explored in the philosophical school of existentialism, where one can create their own subjective meaning or purpose. In popular use, "nihilism" now most commonly refers to forms of existential nihilism.
Metaphysical nihilism is the position that concrete objects and physical constructs might not exist in some possible world, or that, even if there exist possible worlds that contain some concrete objects, there is at least one that contains only abstract objects.
Extreme metaphysical nihilism, also sometimes called ontological nihilism, is the position that nothing actually exists at all. The American Heritage Medical Dictionary defines one form of nihilism as "An extreme form of skepticism that denies all existence". A similar skepticism concerning the concrete world can be found in solipsism. However, despite the fact that both views deny the certainty of objects' true existence, the nihilist would deny the existence of self, whereas the solipsist would affirm it. Both of these positions are considered forms of anti-realism.
Mereological nihilism, also called compositional nihilism, is the metaphysical position that objects with proper parts do not exist. This position applies to objects in space, and also to objects existing in time, which are posited to have no temporal parts. Rather, only basic building blocks without parts exist, and thus the world we see and experience, full of objects with parts, is a product of human misperception (i.e., if we could see clearly, we would not perceive composite objects). This interpretation of existence must be based on resolution: The resolution with which humans see and perceive the "improper parts" of the world is not an objective fact of reality, but is rather an implicit trait that can only be qualitatively explored and expressed. Therefore, there is no arguable way to surmise or measure the validity of mereological nihilism. For example, an ant can get lost on a large cylindrical object because the circumference of the object is so large with respect to the ant that the ant effectively feels as though the object has no curvature. Thus, the resolution with which the ant views the world it exists "within" is an important determining factor in how the ant experiences this "within the world" feeling.
Moral nihilism, also called ethical nihilism, is the meta-ethical position that no morality or ethics exists whatsoever; therefore, no action is ever morally preferable to any other. Moral nihilism is distinct from both moral relativism and expressivism in that it does not acknowledge socially constructed values as personal or cultural moralities. It may also differ from other moral positions within nihilism that, rather than argue there is no morality, hold that if it does exist, it is a human construction and thus artificial, wherein any and all meaning is relative for different possible outcomes. An alternative scholarly perspective is that moral nihilism is a morality in itself. Cooper writes, "In the widest sense of the word 'morality', moral nihilism is a morality".
Passive and active nihilism, the former of which is also equated to philosophical pessimism, refer to two approaches to nihilist thought; passive nihilism sees nihility as an end in itself, whereas active nihilism attempts to surpass it. For Nietzsche, passive nihilism further encapsulates the "will to nothing" and the modern condition of resignation or unawareness towards the dissolution of higher values brought about by the 19th century.
Political nihilism is the position holding no political goals whatsoever, except for the complete destruction of all existing political institutions—along with the principles, values, and social institutions that uphold them. Though often related to anarchism, it may differ in that it presents no method of social organisation after a negation of the current political structure has taken place. An analysis of political nihilism is further presented by Leo Strauss.
Therapeutic nihilism, also called medical nihilism, is the position that the effectiveness of medical intervention is dubious or without merit. Dealing with the philosophy of science as it relates to the contextualized demarcation of medical research, Jacob Stegenga applies Bayes' theorem to medical research and argues for the premise that "Even when presented with evidence for a hypothesis regarding the effectiveness of a medical intervention, we ought to have low confidence in that hypothesis."
In culture, the arts, and media
Dada
The term Dada was first used by Richard Huelsenbeck and Tristan Tzara in 1916. The movement, which lasted from approximately 1916 to 1923, arose during World War I, an event that influenced the artists. The Dada movement began in the old town of Zürich, Switzerland—known as the "Niederdorf" or "Niederdörfli"—at the Cabaret Voltaire. The Dadaists claimed that Dada was not an art movement, but an anti-art movement, sometimes using found objects in a manner similar to found poetry.
This tendency toward devaluation of art has led many to claim that Dada was an essentially nihilistic movement. Given that Dada created its own means for interpreting its products, it is difficult to classify alongside most other contemporary art expressions. Due to perceived ambiguity, it has been classified as a nihilistic modus vivendi.
Literature
The term "nihilism" was popularized in 1862 by Ivan Turgenev in his novel Fathers and Sons, whose hero, Bazarov, was a nihilist who recruited several followers to the philosophy. He found his nihilistic ways challenged upon falling in love.
Anton Chekhov portrayed nihilism when writing Three Sisters. The phrase "what does it matter" or variants of this are often spoken by several characters in response to events; the significance of some of these events suggests a subscription to nihilism by said characters as a type of coping strategy.
The philosophical ideas of the French author, the Marquis de Sade, are often noted as early examples of nihilistic principles.
Media
The frequently self-destructive and amoral tendencies of a nihilistic worldview can be seen in many of today's media, including movies and TV shows.
Patrick Bateman, in Bret Easton Ellis's 1991 novel American Psycho and its 2000 film adaptation, displays both moral and existential nihilism. Throughout the story, Bateman does not shy away from murder or torture to accomplish his goals. As he realizes the evil in his deeds, he tries to confess and take on the punishment for his crimes.
Phil Connors in the 1993 comedy film Groundhog Day develops existential nihilistic tendencies near the middle of the film. As he lives the same day a seemingly countless number of times, he slips into a depression and attempts to take his own life in a variety of ways. He also kidnaps Punxsutawney Phil, the groundhog he credits with his looping days, and drives off a cliff, killing them both.
Vincent, the main antagonist of the 2004 film Collateral, believes that life has no meaning because human nature is intrinsically evil, and that deep down, people care only about themselves.
In the 2022 film Everything Everywhere All at Once, the lead antagonist, Jobu Tupaki, comes to an existential nihilistic conclusion that the infinite chaos of the multiverse means that there is no reason to continue to exist. She manifests her nihilism by creating a black hole-like "everything bagel" in which she will destroy herself and the rest of the multiverse. Her mother Evelyn is briefly persuaded by her logic but then refutes it in favor of a more positive outlook based on the value of human relationships and choice.
In the 2023 video game Honkai: Star Rail, 'Nihility' is a playable path, presided over by the Aeon IX, walked by characters who believe that the ultimate fate of the multiverse is nothingness and that existence is therefore worthless.
See also
Citations
General and cited sources
Primary texts
Brassier, Ray (2007) Nihil Unbound: Enlightenment and Extinction, New York: Palgrave Macmillan.
Jacobi, Friedrich Heinrich, Jacobi an Fichte (1799/1816), German Text (1799/1816), Appendix with Jacobi's and Fichte's complementary Texts, critical Apparatus, Commentary, and Italian Translation, Istituto Italiano per gli Studi Filosofici, Naples 2011.
Heidegger, Martin (1982), Nietzsche, Vols. I-IV, trans. F.A. Capuzzi, San Francisco: Harper & Row.
Kierkegaard, Søren (1998/1854), The Moment and Late Writings: Kierkegaard's Writings, Vol. 23, ed. and trans. Howard V. Hong and Edna H. Hong, Princeton, N.J: Princeton University Press.
Kierkegaard, Søren (1978/1846), The Two Ages: Kierkegaard's Writings, Vol. 14, ed. and trans. Howard V. Hong and Edna H. Hong, Princeton, N.J: Princeton University Press.
Kierkegaard, Søren (1995/1850), Works of Love: Kierkegaard's Writings, Vol. 16, ed. and trans. Howard V. Hong and Edna H. Hong, Princeton, N.J: Princeton University Press.
Nietzsche, Friedrich (2005/1886), Beyond Good and Evil, trans. Helen Zimmern.
Nietzsche, Friedrich (1974/1887), The Gay Science, trans. Walter Kaufman, Vintage.
Nietzsche, Friedrich (1980), Sämtliche Werke. Kritische Studienausgabe, ed. G. Colli and M. Montinari, Walter de Gruyter.
Nietzsche, Friedrich (2008/1885), Thus Spake Zarathustra, trans. Thomas Common.
Tartaglia, James (2016), Philosophy in a Meaningless Life: A System of Nihilism, Consciousness and Reality, London: Bloomsbury Publishing.
Secondary texts
Arena, Leonardo Vittorio (1997), Del nonsense: tra Oriente e Occidente, Urbino: Quattroventi.
Arena, Leonardo Vittorio (2012), Nonsense as the Meaning, ebook.
Arena, Leonardo Vittorio (2015), On Nudity. An Introduction to Nonsense, Mimesis International.
Barnett, Christopher (2011), Kierkegaard, pietism and holiness, Ashgate Publishing.
Carr, Karen (1992), The Banalisation of Nihilism, State University of New York Press.
Cattarini, L. S. (2018), Beyond Sartre and Sterility: Surviving Existentialism (Montreal: contact argobookshop.ca)
Cunningham, Conor (2002), Genealogy of Nihilism: Philosophies of Nothing & the Difference of Theology, New York, NY: Routledge.
Dent, G., Wallace, M., & Dia Center for the Arts. (1992). "Black popular culture" (Discussions in contemporary culture; no. 8). Seattle: Bay Press.
Dod, Elmar (2013), Der unheimlichste Gast. Die Philosophie des Nihilismus. Marburg: Tectum 2013.
Dreyfus, Hubert L. (2004), Kierkegaard on the Internet: Anonymity vs. Commitment in the Present Age. Retrieved December 1, 2009.
Fraser, John (2001), "Nihilism, Modernism and Value", retrieved December 2, 2009.
Galimberti, Umberto (2008), L'ospite inquietante. Il nichilismo e i giovani, Milano: Feltrinelli.
Gillespie, Michael Allen (1996), Nihilism Before Nietzsche, Chicago, IL: University of Chicago Press.
Giovanni, George di (2008), "Friedrich Heinrich Jacobi", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). Retrieved December 1, 2009.
Harper, Douglas, "Nihilism", in: Online Etymology Dictionary, retrieved December 2, 2009.
Harries, Karsten (2010), Between nihilism and faith: a commentary on Either/or, Walter de Gruyter Press.
Hibbs, Thomas S. (2000), Shows About Nothing: Nihilism in Popular Culture from The Exorcist to Seinfeld, Dallas, TX: Spence Publishing Company.
Kopić, Mario (2001), S Nietzscheom o Europi, Zagreb: Jesenski i Turk.
Korab-Karpowicz, W. J. (2005), "Martin Heidegger (1889–1976)", in: Internet Encyclopedia of Philosophy, retrieved December 2, 2009.
Kuhn, Elisabeth (1992), Friedrich Nietzsches Philosophie des europäischen Nihilismus, Walter de Gruyter.
Irti, Natalino (2004), Nichilismo giuridico, Laterza, Roma-Bari.
Löwith, Karl (1995), Martin Heidegger and European Nihilism, New York, NY: Columbia UP.
Marmysz, John (2003), Laughing at Nothing: Humor as a Response to Nihilism, Albany, NY: SUNY Press.
Müller-Lauter, Wolfgang (2000), Heidegger und Nietzsche. Nietzsche-Interpretationen III, Berlin-New York.
Parvez Manzoor, S. (2003), "Modernity and Nihilism. Secular History and Loss of Meaning", retrieved December 2, 2009.
Rose, Eugene Fr. Seraphim (1995), Nihilism, The Root of the Revolution of the Modern Age, Forestville, CA: Fr. Seraphim Rose Foundation.
Rosen, Stanley (2000), Nihilism: A Philosophical Essay, South Bend, Indiana: St. Augustine's Press (2nd Edition).
Severino, Emanuele (1982), Essenza del nichilismo, Milano: Adelphi.
Slocombe, Will (2006), Nihilism and the Sublime Postmodern: The (Hi)Story of a Difficult Relationship, New York, NY: Routledge.
Tigani, Francesco (2010), Rappresentare Medea. Dal mito al nichilismo, Roma: Aracne.
Tigani, Francesco (2014), Lo spettro del nulla e il corpo del nichilismo, in La nave di Teseo. Saggi sull'Essere, il mito e il potere, Napoli: Guida.
Villet, Charles (2009), Towards Ethical Nihilism: The Possibility of Nietzschean Hope, Saarbrücken: Verlag Dr. Müller.
Williams, Peter S. (2005), I Wish I Could Believe in Meaning: A Response to Nihilism, Damaris Publishing.
External links
Nihil - center for nihilism and nihilist studies
Nihilist Abyss
Friedrich Nietzsche, Thus Spake Zarathustra, translated by Thomas Common
"Nihilism" in the Internet Encyclopedia of Philosophy
Fathers and Sons by Ivan Turgenev
"Moral Skepticism", section "Skeptical Hypotheses" in the Stanford Encyclopedia of Philosophy
"In the Dust of This Planet", Radiolab podcast episode on nihilism and popular culture
"Nihilism", In Our Time, BBC Radio 4 discussion with Rob Hopkins, Raymond Tallis and Catherine Belsey (Nov. 16, 2000)
Philosophy of life
Political ideologies
Applied ethics
Applied ethics is the practical aspect of moral considerations. It is ethics with respect to real-world actions and their moral considerations in private and public life, the professions, health, technology, law, and leadership. For example, bioethics is concerned with identifying the best approach to moral issues in the life sciences, such as euthanasia, the allocation of scarce health resources, or the use of human embryos in research. Environmental ethics is concerned with ecological issues such as the responsibility of government and corporations to clean up pollution. Business ethics includes the duties of whistleblowers to the public and to their employers.
History
Applied ethics has expanded the study of ethics beyond the realms of academic philosophical discourse. The field of applied ethics, as it appears today, emerged from debate surrounding rapid medical and technological advances in the early 1970s and is now established as a subdiscipline of moral philosophy. However, applied ethics is, by its very nature, a multi-professional subject because it requires specialist understanding of the potential ethical issues in fields like medicine, business or information technology. Nowadays, ethical codes of conduct exist in almost every profession.
An applied ethics approach to the examination of moral dilemmas can take many different forms but one of the most influential and most widely utilised approaches in bioethics and health care ethics is the four-principle approach developed by Tom Beauchamp and James Childress. The four-principle approach, commonly termed principlism, entails consideration and application of four prima facie ethical principles: autonomy, non-maleficence, beneficence, and justice.
Underpinning theory
Applied ethics is distinguished from normative ethics, which concerns standards for right and wrong behavior, and from meta-ethics, which concerns the nature of ethical properties, statements, attitudes, and judgments.
Whilst these three areas of ethics appear to be distinct, they are also interrelated. The use of an applied ethics approach often draws upon these normative ethical theories:
Consequentialist ethics, which hold that the rightness of acts depends only on their consequences. The paradigmatic consequentialist theory is utilitarianism, which classically holds that whether an act is morally right depends on whether it maximizes net aggregated psychological wellbeing. This theory's main developments came from Jeremy Bentham and John Stuart Mill who distinguished between act and rule utilitarianism. Notable later developments were made by Henry Sidgwick who introduced the significance of motive or intent, and R. M. Hare who introduced the significance of preference in utilitarian decision-making. Other forms of consequentialism include prioritarianism.
Deontological ethics, which hold that acts have an inherent rightness or wrongness regardless of their context or consequences. This approach is epitomized by Immanuel Kant's notion of the categorical imperative, which was the centre of Kant's ethical theory based on duty. Another key deontological theory is natural law, which was heavily developed by Thomas Aquinas and is an important part of the Catholic Church's teaching on morals. Threshold deontology holds that rules ought to govern up to a point despite adverse consequences; but when the consequences become so dire that they cross a stipulated threshold, consequentialism takes over.
Virtue ethics, derived from Aristotle's and Confucius' notions, which asserts that the right action will be that chosen by a suitably 'virtuous' agent.
Normative ethical theories can clash when trying to resolve real-world ethical dilemmas. One approach attempting to overcome the divide between consequentialism and deontology is case-based reasoning, also known as casuistry. Casuistry does not begin with theory, rather it starts with the immediate facts of a real and concrete case. While casuistry makes use of ethical theory, it does not view ethical theory as the most important feature of moral reasoning. Casuists, like Albert Jonsen and Stephen Toulmin (The Abuse of Casuistry, 1988), challenge the traditional paradigm of applied ethics. Instead of starting from theory and applying theory to a particular case, casuists start with the particular case itself and then ask what morally significant features (including both theory and practical considerations) ought to be considered for that particular case. In their observations of medical ethics committees, Jonsen and Toulmin note that a consensus on particularly problematic moral cases often emerges when participants focus on the facts of the case, rather than on ideology or theory. Thus, a Rabbi, a Catholic priest, and an agnostic might agree that, in this particular case, the best approach is to withhold extraordinary medical care, while disagreeing on the reasons that support their individual positions. By focusing on cases and not on theory, those engaged in moral debate increase the possibility of agreement.
Applied ethics was later distinguished from the nascent applied epistemology, which is also under the umbrella of applied philosophy. While the former was concerned with the practical application of moral considerations, the latter focuses on the application of epistemology in solving practical problems.
See also
References
Further reading
External links
Ethics
Eudaimonia
Eudaimonia, sometimes anglicized as Eudaemonia, Eudemonia or Eudimonia, is a Greek word literally translating to the state or condition of good spirit, and which is commonly translated as happiness or welfare.
In the works of Aristotle, eudaimonia was the term for the highest human good in older Greek tradition. It is the aim of practical philosophy (prudence), including ethics and political philosophy, to consider and experience what this state really is and how it can be achieved. It is thus a central concept in Aristotelian ethics and subsequent Hellenistic philosophy, along with the terms aretē (most often translated as virtue or excellence) and phronesis ('practical or ethical wisdom').
Discussion of the links between ēthikē aretē (virtue of character) and eudaimonia (happiness) is one of the central concerns of ancient ethics, and a subject of disagreement. As a result, there are many varieties of eudaimonism.
Definition and etymology
In terms of its etymology, eudaimonia is an abstract noun derived from the words eû (good, well) and daímōn (spirit or deity).
Semantically speaking, the word δαίμων derives from the same root as the Ancient Greek verb δαίομαι (, "to divide"), allowing the concept of eudaimonia to be thought of as an "activity linked with dividing or dispensing, in a good way".
Definitions, a dictionary of Greek philosophical terms attributed to Plato himself but believed by modern scholars to have been written by his immediate followers in the Academy, provides the following definition of the word eudaimonia: "The good composed of all goods; an ability which suffices for living well; perfection in respect of virtue; resources sufficient for a living creature."
In his Nicomachean Ethics (1095a15–22), Aristotle says that everyone agrees that eudaimonia is the highest good for humans, but that there is substantial disagreement on what sort of life counts as doing and living well; i.e. eudaimon:

Verbally there is a very general agreement; for both the general run of men and people of superior refinement say that it is [eudaimonia], and identify living well and faring well with being happy; but with regard to what [eudaimonia] is they differ, and the many do not give the same account as the wise. For the former think it is some plain and obvious thing like pleasure, wealth or honour... [1095a17]
So, as Aristotle points out, saying that a eudaimonic life is a life that is objectively desirable and involves living well is not saying very much. Everyone wants to be eudaimonic; and everyone agrees that being eudaimonic is related to faring well and to an individual's well-being. The really difficult question is to specify just what sort of activities enable one to live well. Aristotle presents various popular conceptions of the best life for human beings. The candidates that he mentions are (1) a life of pleasure, (2) a life of political activity, and (3) a philosophical life.
Eudaimonia and areté
One important move in Greek philosophy to answer the question of how to achieve eudaimonia is to bring in another important concept in ancient philosophy, aretē ('virtue'). Aristotle says that the eudaimonic life is one of "virtuous activity in accordance with reason" [1097b22–1098a20]; even Epicurus, who argues that the eudaimonic life is the life of pleasure, maintains that the life of pleasure coincides with the life of virtue. So, the ancient ethical theorists tend to agree that virtue is closely bound up with happiness (areté is bound up with eudaimonia). However, they disagree on the way in which this is so. A major difference between Aristotle and the Stoics, for instance, is that the Stoics believed moral virtue was in and of itself sufficient for happiness (eudaimonia). For the Stoics, one does not need external goods, like physical beauty, in order to have virtue and therefore happiness.
One problem with the English translation of areté as virtue is that we are inclined to understand virtue in a moral sense, which is not always what the ancients had in mind. For Aristotle, areté pertains to all sorts of qualities we would not regard as relevant to ethics, for example, physical beauty. So it is important to bear in mind that the sense of virtue operative in ancient ethics is not exclusively moral and includes more than states such as wisdom, courage, and compassion. The sense of virtue which areté connotes would include saying something like "speed is a virtue in a horse," or "height is a virtue in a basketball player." Doing anything well requires virtue, and each characteristic activity (such as carpentry, flute playing, etc.) has its own set of virtues. The alternative translation excellence (a desirable quality) might be helpful in conveying this general meaning of the term. The moral virtues are simply a subset of the general sense in which a human being is capable of functioning well or excellently.
Eudaimonia and happiness
Eudaimonia implies a positive and divine state of being that humanity is able to strive toward and possibly reach. A literal view of eudaimonia means achieving a state of being similar to a benevolent deity, or being protected and looked after by a benevolent deity. As this would be considered the most positive state to be in, the word is often translated as happiness although incorporating the divine nature of the word extends the meaning to also include the concepts of being fortunate, or blessed. Despite this etymology, however, discussions of eudaimonia in ancient Greek ethics are often conducted independently of any supernatural significance.
In his Nicomachean Ethics (1095a15–22) Aristotle says that eudaimonia means 'doing and living well'. It is significant that synonyms for eudaimonia are living well and doing well. In the standard English translation, this would be to say that, "happiness is doing well and living well." The word happiness does not entirely capture the meaning of the Greek word. One important difference is that happiness often connotes being or tending to be in a certain pleasant state of mind. For example, when one says that someone is "a very happy person", one usually means that they seem subjectively contented with the way things are going in their life. They mean to imply that they feel good about the way things are going for them. In contrast, Aristotle suggests that eudaimonia is a more encompassing notion than feeling happy since events that do not contribute to one's experience of feeling happy may affect one's eudaimonia.
Eudaimonia depends on all the things that would make us happy if we knew of their existence, but quite independently of whether we do know about them. Ascribing eudaimonia to a person, then, may include ascribing such things as being virtuous, being loved and having good friends. But these are all objective judgments about someone's life: they concern whether a person is really being virtuous, really being loved, and really having fine friends. This implies that a person who has evil sons and daughters will not be judged to be eudaimonic even if he or she does not know that they are evil and feels pleased and contented with the way they have turned out (happy). Conversely, being loved by your children would not count towards your happiness if you did not know that they loved you (and perhaps thought that they did not), but it would count towards your eudaimonia. So, eudaimonia corresponds to the idea of having an objectively good or desirable life, to some extent independently of whether one knows that certain things exist or not. It includes conscious experiences of well-being, success, and failure, but also a whole lot more. (See Aristotle's discussion: Nicomachean Ethics, book 1.10–1.11.)
Because of this discrepancy between the meanings of eudaimonia and happiness, some alternative translations have been proposed. W.D. Ross suggests 'well-being' and John Cooper proposes flourishing. These translations may avoid some of the misleading associations carried by "happiness" although each tends to raise some problems of its own. In some modern texts therefore, the other alternative is to leave the term in an English form of the original Greek, as eudaimonia.
Classical views on eudaimonia and aretē
Socrates
What is known of Socrates' philosophy is almost entirely derived from Plato's writings. Scholars typically divide Plato's works into three periods: the early, middle, and late periods. They tend to agree also that Plato's earliest works quite faithfully represent the teachings of Socrates and that Plato's own views, which go beyond those of Socrates, appear for the first time in the middle works such as the Phaedo and the Republic.
As with all ancient ethical thinkers, Socrates thought that all human beings wanted eudaimonia more than anything else (see Plato, Apology 30b, Euthydemus 280d–282d, Meno 87d–89a). However, Socrates adopted a quite radical form of eudaimonism (see above): he seems to have thought that virtue is both necessary and sufficient for eudaimonia. Socrates is convinced that virtues such as self-control, courage, justice, piety, wisdom and related qualities of mind and soul are absolutely crucial if a person is to lead a good and happy (eudaimon) life. Virtues guarantee a happy life eudaimonia. For example, in the Meno, with respect to wisdom, he says: "everything the soul endeavours or endures under the guidance of wisdom ends in happiness" (Meno 88c).
In the Apology, Socrates clearly presents his disagreement with those who think that the eudaimon life is the life of honour or pleasure, when he chastises the Athenians for caring more for riches and honour than the state of their souls:

Good Sir, you are an Athenian, a citizen of the greatest city with the greatest reputation for both wisdom and power; are you not ashamed of your eagerness to possess as much wealth, reputation, and honors as possible, while you do not care for nor give thought to wisdom or truth or the best possible state of your soul? (29e) ... [I]t does not seem like human nature for me to have neglected all my own affairs and to have tolerated this neglect for so many years while I was always concerned with you, approaching each one of you like a father or an elder brother to persuade you to care for virtue. (31a–b; italics added)

It emerges a bit further on that this concern for one's soul, that one's soul might be in the best possible state, amounts to acquiring moral virtue. So Socrates' pointing out that the Athenians should care for their souls means that they should care for their virtue, rather than pursuing honour or riches. Virtues are states of the soul. When a soul has been properly cared for and perfected, it possesses the virtues. Moreover, according to Socrates, this state of the soul, moral virtue, is the most important good. The health of the soul is incomparably more important for eudaimonia than (e.g.) wealth and political power. Someone with a virtuous soul is better off than someone who is wealthy and honoured but whose soul is corrupted by unjust actions. This view is confirmed in the Crito, where Socrates gets Crito to agree that the perfection of the soul, virtue, is the most important good:
And is life worth living for us with that part of us corrupted that unjust action harms and just action benefits? Or do we think that part of us, whatever it is, that is concerned with justice and injustice, is inferior to the body? Not at all. It is much more valuable...? Much more... (47e–48a)
Here, Socrates argues that life is not worth living if the soul is ruined by wrongdoing. In summary, Socrates seems to think that virtue is both necessary and sufficient for eudaimonia. A person who is not virtuous cannot be happy, and a person with virtue cannot fail to be happy. We shall see later on that Stoic ethics takes its cue from this Socratic insight.
Plato
Plato's great work of the middle period, the Republic, is devoted to answering a challenge made by the sophist Thrasymachus, that conventional morality, particularly the virtue of justice, actually prevents the strong man from achieving eudaimonia. Thrasymachus's views are restatements of a position which Plato discusses earlier on in his writings, in the Gorgias, through the mouthpiece of Callicles. The basic argument presented by Thrasymachus and Callicles is that justice (being just) hinders or prevents the achievement of eudaimonia because conventional morality requires that we control ourselves and hence live with un-satiated desires. This idea is vividly illustrated in book 2 of the Republic when Glaucon, taking up Thrasymachus' challenge, recounts a myth of the magical ring of Gyges. According to the myth, Gyges becomes king of Lydia when he stumbles upon a magical ring, which, when he turns it a particular way, makes him invisible, so that he can satisfy any desire he wishes without fear of punishment. When he discovers the power of the ring he kills the king, marries his wife and takes over the throne. The thrust of Glaucon's challenge is that no one would be just if he could escape the retribution he would normally encounter for fulfilling his desires at whim. But if eudaimonia is to be achieved through the satisfaction of desire, whereas being just or acting justly requires suppression of desire, then it is not in the interests of the strong man to act according to the dictates of conventional morality. (This general line of argument reoccurs much later in the philosophy of Nietzsche.) Throughout the rest of the Republic, Plato aims to refute this claim by showing that the virtue of justice is necessary for eudaimonia.
The argument of the Republic is lengthy and complex. In brief, Plato argues that virtues are states of the soul, and that the just person is someone whose soul is ordered and harmonious, with all its parts functioning properly to the person's benefit. In contrast, Plato argues that the unjust man's soul, without the virtues, is chaotic and at war with itself, so that even if he were able to satisfy most of his desires, his lack of inner harmony and unity thwart any chance he has of achieving eudaimonia. Plato's ethical theory is eudaimonistic because it maintains that eudaimonia depends on virtue. On Plato's version of the relationship, virtue is depicted as the most crucial and the dominant constituent of eudaimonia.
Aristotle
Aristotle's account is articulated in the Nicomachean Ethics and the Eudemian Ethics. In outline, for Aristotle, eudaimonia involves activity, exhibiting virtue (aretē, sometimes translated as excellence) in accordance with reason. This conception of eudaimonia derives from Aristotle's essentialist understanding of human nature, the view that reason (logos, sometimes translated as rationality) is unique to human beings and that the ideal function or work (ergon) of a human being is the fullest or most perfect exercise of reason. Basically, well-being (eudaimonia) is gained by proper development of one's highest and most human capabilities, as human beings are "the rational animal". It follows that eudaimonia for a human being is the attainment of excellence (areté) in reason.
According to Aristotle, eudaimonia actually requires activity, action, so that it is not sufficient for a person to possess a squandered ability or disposition. Eudaimonia requires not only good character but rational activity. Aristotle clearly maintains that to live in accordance with reason means achieving excellence thereby. Moreover, he claims this excellence cannot be isolated, and so competencies are also required appropriate to related functions. For example, if being a truly outstanding scientist requires impressive math skills, one might say "doing mathematics well is necessary to be a first rate scientist". From this it follows that eudaimonia, living well, consists in activities exercising the rational part of the psyche in accordance with the virtues or excellency of reason [1097b22–1098a20]. Which is to say, to be fully engaged in the intellectually stimulating and fulfilling work at which one achieves well-earned success. The rest of the Nicomachean Ethics is devoted to filling out the claim that the best life for a human being is the life of excellence in accordance with reason. Since reason, for Aristotle, is not only theoretical but practical as well, he spends quite a bit of time discussing excellence of character, which enables a person to exercise his practical reason (i.e., reason relating to action) successfully.
Aristotle's ethical theory is eudaimonist because it maintains that eudaimonia depends on virtue. However, it is Aristotle's explicit view that virtue is necessary but not sufficient for eudaimonia. While emphasizing the importance of the rational aspect of the psyche, he does not ignore the importance of other goods such as friends, wealth, and power in a life that is eudaimonic. He doubts the likelihood of being eudaimonic if one lacks certain external goods such as good birth, good children, and beauty. So, a person who is hideously ugly or has "lost children or good friends through death" (1099b5–6), or who is isolated, is unlikely to be eudaimon. In this way, "dumb luck" (chance) can preempt one's attainment of eudaimonia.
Pyrrho
Pyrrho was the founder of Pyrrhonism. A summary of his approach to eudaimonia was preserved by Eusebius, quoting Aristocles of Messene, quoting Timon of Phlius, in what is known as the "Aristocles passage".
"Whoever wants eudaimonia must consider these three questions: First, how are pragmata (ethical matters, affairs, topics) by nature? Secondly, what attitude should we adopt towards them? Thirdly, what will be the outcome for those who have this attitude?" Pyrrho's answer is that "As for pragmata they are all adiaphora (undifferentiated by a logical differentia), astathmēta (unstable, unbalanced, not measurable), and anepikrita (unjudged, unfixed, undecidable). Therefore, neither our sense-perceptions nor our doxai (views, theories, beliefs) tell us the truth or lie; so we certainly should not rely on them. Rather, we should be adoxastoi (without views), aklineis (uninclined toward this side or that), and akradantoi (unwavering in our refusal to choose), saying about every single one that it no more is than it is not or it both is and is not or it neither is nor is not."
With respect to aretē, the Pyrrhonist philosopher Sextus Empiricus said:
If one defines a system as an attachment to a number of dogmas that agree with one another and with appearances, and defines a dogma as an assent to something non-evident, we shall say that the Pyrrhonist does not have a system. But if one says that a system is a way of life that, in accordance with appearances, follows a certain rationale, where that rationale shows how it is possible to seem to live rightly ("rightly" being taken, not as referring only to aretē, but in a more ordinary sense) and tends to produce the disposition to suspend judgment, then we say that he does have a system.
Epicurus
Epicurus' ethical theory is hedonistic. His views were very influential for the founders and best proponents of utilitarianism, Jeremy Bentham and John Stuart Mill. Hedonism is the view that pleasure is the only intrinsic good and that pain is the only intrinsic bad. An object, experience or state of affairs is intrinsically valuable if it is good simply because of what it is. Intrinsic value is to be contrasted with instrumental value. An object, experience or state of affairs is instrumentally valuable if it serves as a means to what is intrinsically valuable. To see this, consider the following example. Suppose a person spends their days and nights in an office, working at not entirely pleasant activities for the purpose of receiving money. Someone asks them "why do you want the money?", and they answer: "So, I can buy an apartment overlooking the ocean, and a red sports car." This answer expresses the point that money is instrumentally valuable because its value lies in what one obtains by means of it—in this case, the money is a means to getting an apartment and a sports car, and the value of making this money is dependent on the price of these commodities.
Epicurus identifies the good life with the life of pleasure. He understands eudaimonia as a more or less continuous experience of pleasure and, also, freedom from pain and distress. But Epicurus does not advocate that one pursue any and every pleasure. Rather, he recommends a policy whereby pleasures are maximized "in the long run". In other words, Epicurus claims that some pleasures are not worth having because they lead to greater pains, and some pains are worthwhile when they lead to greater pleasures. The best strategy for attaining a maximal amount of pleasure overall is not to seek instant gratification but to work out a sensible long term policy.
Ancient Greek ethics is eudaimonist because it links virtue and eudaimonia, where eudaimonia refers to an individual's well-being. Epicurus' doctrine can be considered eudaimonist since Epicurus argues that a life of pleasure will coincide with a life of virtue. He believes that we do and ought to seek virtue because virtue brings pleasure. Epicurus' basic doctrine is that a life of virtue is the life that generates the most pleasure, and it is for this reason that we ought to be virtuous. This thesis—the eudaimon life is the pleasurable life—is not a tautology as "eudaimonia is the good life" would be: rather, it is the substantive and controversial claim that a life of pleasure and absence of pain is what eudaimonia consists in.
One important difference between Epicurus' eudaimonism and that of Plato and Aristotle is that for the latter virtue is a constituent of eudaimonia, whereas Epicurus makes virtue a means to happiness. To see this difference, consider Aristotle's theory. Aristotle maintains that eudaimonia is what everyone wants (and Epicurus would agree). He also thinks that eudaimonia is best achieved by a life of virtuous activity in accordance with reason. The virtuous person takes pleasure in doing the right thing as a result of a proper training of moral and intellectual character (See e.g., Nicomachean Ethics 1099a5). However, Aristotle does not think that virtuous activity is pursued for the sake of pleasure. Pleasure is a byproduct of virtuous action: it does not enter at all into the reasons why virtuous action is virtuous. Aristotle does not think that we literally aim for eudaimonia. Rather, eudaimonia is what we achieve (assuming that we are not particularly unfortunate in the possession of external goods) when we live according to the requirements of reason. Virtue is the largest constituent in a eudaimon life.
By contrast, Epicurus holds that virtue is the means to achieve happiness. His theory is eudaimonist in that he holds that virtue is indispensable to happiness; but virtue is not a constituent of a eudaimon life, and being virtuous is not (external goods aside) identical with being eudaimon. Rather, according to Epicurus, virtue is only instrumentally related to happiness. So whereas Aristotle would not say that one ought to aim for virtue in order to attain pleasure, Epicurus would endorse this claim.
The Stoics
Stoic philosophy begins with Zeno of Citium, and was developed by Cleanthes (331–232 BC) and Chrysippus into a formidable systematic unity. Zeno believed happiness was a "good flow of life"; Cleanthes suggested it was "living in agreement with nature", and Chrysippus believed it was "living in accordance with experience of what happens by nature." Stoic ethics is a particularly strong version of eudaimonism. According to the Stoics, virtue is necessary and sufficient for eudaimonia. (This thesis is generally regarded as stemming from the Socrates of Plato's earlier dialogues.)
We saw earlier that the conventional Greek concept of arete is not quite the same as that denoted by virtue, which has Christian connotations of charity, patience, and uprightness, since arete includes many non-moral virtues such as physical strength and beauty. However, the Stoic concept of arete is much nearer to the Christian conception of virtue, which refers to the moral virtues. However, unlike Christian understandings of virtue, righteousness or piety, the Stoic conception does not place as great an emphasis on mercy, forgiveness, self-abasement (i.e. the ritual process of declaring complete powerlessness and humility before God), charity and self-sacrificial love, though these behaviors/mentalities are not necessarily spurned by the Stoics (they are spurned by some other philosophers of Antiquity). Rather Stoicism emphasizes states such as justice, honesty, moderation, simplicity, self-discipline, resolve, fortitude, and courage (states which Christianity also encourages).
The Stoics make a radical claim that the eudaimon life is the morally virtuous life. Moral virtue is good, and moral vice is bad, and everything else, such as health, honour and riches, are merely "neutral". The Stoics therefore are committed to saying that external goods such as wealth and physical beauty are not really good at all. Moral virtue is both necessary and sufficient for eudaimonia. In this, they are akin to Cynic philosophers such as Antisthenes and Diogenes in denying the importance to eudaimonia of external goods and circumstances, such as were recognized by Aristotle, who thought that severe misfortune (such as the death of one's family and friends) could rob even the most virtuous person of eudaimonia. This Stoic doctrine re-emerges later in the history of ethical philosophy in the writings of Immanuel Kant, who argues that the possession of a "good will" is the only unconditional good. One difference is that whereas the Stoics regard external goods as neutral, as neither good nor bad, Kant's position seems to be that external goods are good, but only so far as they are a condition to achieving happiness.
Modern conceptions
"Modern Moral Philosophy"
Interest in the concept of eudaimonia and ancient ethical theory more generally had a revival in the 20th century. G. E. M. Anscombe in her article "Modern Moral Philosophy" (1958) argued that duty-based conceptions of morality are conceptually incoherent for they are based on the idea of a "law without a lawgiver". She claims a system of morality conceived along the lines of the Ten Commandments depends on someone having made these rules. Anscombe recommends a return to the eudaimonistic ethical theories of the ancients, particularly Aristotle, which ground morality in the interests and well-being of human moral agents, and can do so without appealing to any such lawgiver.
Julia Driver in the Stanford Encyclopedia of Philosophy explains:
Anscombe's article Modern Moral Philosophy stimulated the development of virtue ethics as an alternative to Utilitarianism, Kantian Ethics, and Social Contract theories. Her primary charge in the article is that, as secular approaches to moral theory, they are without foundation. They use concepts such as "morally ought", "morally obligated", "morally right", and so forth that are legalistic and require a legislator as the source of moral authority. In the past God occupied that role, but systems that dispense with God as part of the theory are lacking the proper foundation for meaningful employment of those concepts.
Modern psychology
Models of eudaimonia in psychology and positive psychology emerged from early work on self-actualization and the means of its accomplishment by researchers such as Erik Erikson, Gordon Allport, and Abraham Maslow (hierarchy of needs).
Theories include Diener's tripartite model of subjective well-being, Ryff's Six-factor Model of Psychological Well-being, Keyes's work on flourishing, and Seligman's contributions to positive psychology and his theories on authentic happiness and P.E.R.M.A. Related concepts are happiness, flourishing, quality of life, contentment, and meaningful life.
The Japanese concept of Ikigai has been described as eudaimonic well-being, as it "entails actions of devoting oneself to pursuits one enjoys and is associated with feelings of accomplishment and fulfillment."
Positive psychology on eudaimonia
The "Questionnaire for Eudaimonic Well-Being" developed in Positive Psychology lists six dimensions of eudaimonia:
self-discovery;
perceived development of one's best potentials;
a sense of purpose and meaning in life;
investment of significant effort in pursuit of excellence;
intense involvement in activities; and
enjoyment of activities as personally expressive.
See also
Ataraxia
Eudaemon (mythology)
Eudaemons
Eupraxsophy
Humanism
Social quality
Summum bonum
References
Further reading
Primary sources
Aristotle. The Nicomachean Ethics, translated by Martin Ostwald. New York: The Bobbs-Merrill Company. 1962
—— The Complete Works of Aristotle, vol. 1 and 2 (rev. ed.), edited by Jonathan Barnes (1984). Bollingen Foundation, 1995.
Cicero. "On Ends" in De Finibus Bonorum et Malorum, translated by H. Rackham, Loeb Classical Library. Cambridge: Harvard University Press. 1914. Latin text with old-fashioned and not always philosophically precise English translation.
Epicurus. "Letter to Menoeceus, Principal Doctrines, and Vatican Sayings." pp. 28–40 in Hellenistic Philosophy: Introductory Readings (2nd ed.), edited by B. Inwood and L. Gerson. Indianapolis: Hackett Publishing Co. 1998. .
Plato. Plato's Complete Works, edited by John M. Cooper, translated by D. S. Hutchinson. Indianapolis: Hackett Publishing Co. 1997.
Secondary sources
Ackrill, J. L. (1981) Aristotle the Philosopher. Oxford: Oxford University Press.
Anscombe, G. E. M. (1958) "Modern Moral Philosophy". Philosophy 33; repr. in G.E.M. Anscombe (1981), vol. 3, 26–42.
Broadie, Sarah W. (1991) Ethics with Aristotle. Oxford: Oxford University Press.
Irwin, T. H. (1995) Plato's Ethics, Oxford: Oxford University Press.
Long, A. A., and D. N. Sedley (1987) The Hellenistic Philosophers, vols. 1 and 2. Cambridge: Cambridge University Press.
McMahon, Darrin M. (2005). Happiness: A History. Atlantic Monthly Press.
—— (2004) "The History of Happiness: 400 B.C. – A.D. 1780." Daedalus (Spring 2004).
Norton, David L. (1976) Personal Destinies, Princeton University Press.
Sellars, J. (2014). Stoicism. Routledge.
Urmson, J. O. (1988) Aristotle's Ethics. Oxford: Blackwell.
Vlastos, G. (1991) Socrates: Ironist and Moral Philosopher. Ithaca, NY: Cornell University Press.
External links
Ancient Ethical Theory, Stanford Encyclopedia of Philosophy
Aristotle's Ethics, Stanford Encyclopedia of Philosophy
Aristotle: Ethics, Internet Encyclopedia of Philosophy
Concepts in ancient Greek ethics
Concepts in ancient Greek philosophy of mind
Happiness
Theories in ancient Greek philosophy
Virtue
Virtue ethics
Well-being
Skepticism
Skepticism, also spelled scepticism in British English, is a questioning attitude or doubt toward knowledge claims that are seen as mere belief or dogma. For example, if a person is skeptical about claims made by their government about an ongoing war then the person doubts that these claims are accurate. In such cases, skeptics normally recommend not disbelief but suspension of belief, i.e. maintaining a neutral attitude that neither affirms nor denies the claim. This attitude is often motivated by the impression that the available evidence is insufficient to support the claim. Formally, skepticism is a topic of interest in philosophy, particularly epistemology.
More informally, skepticism as an expression of questioning or doubt can be applied to any topic, such as politics, religion, or pseudoscience. It is often applied within restricted domains, such as morality (moral skepticism), atheism (skepticism about the existence of God), or the supernatural. Some theorists distinguish "good" or moderate skepticism, which seeks strong evidence before accepting a position, from "bad" or radical skepticism, which wants to suspend judgment indefinitely.
Philosophical skepticism is one important form of skepticism. It rejects knowledge claims that seem certain from the perspective of common sense. Radical forms of philosophical skepticism deny that "knowledge or rational belief is possible" and urge us to suspend judgment on many or all controversial matters. More moderate forms claim only that nothing can be known with certainty, or that we can know little or nothing about nonempirical matters, such as whether God exists, whether human beings have free will, or whether there is an afterlife. In ancient philosophy, skepticism was understood as a way of life associated with inner peace.
Skepticism has been responsible for many important developments in science and philosophy. It has also inspired several contemporary social movements. Religious skepticism advocates for doubt concerning basic religious principles, such as immortality, providence, and revelation. Scientific skepticism advocates for testing beliefs for reliability, by subjecting them to systematic investigation using the scientific method, to discover empirical evidence for them.
Definition and semantic field
Skepticism, also spelled scepticism (from the Greek σκέπτομαι, skeptomai, to search, to think about or look for), refers to a doubting attitude toward knowledge claims. So if a person is skeptical of their government's claims about an ongoing war, then the person has doubts that these claims are true. Or being skeptical that one's favorite hockey team will win the championship means that one is uncertain about the strength of their performance. Skepticism about a claim implies that one does not believe the claim to be true. But it does not automatically follow that one should believe that the claim is false either. Instead, skeptics usually recommend a neutral attitude: beliefs about this matter should be suspended. In this regard, skepticism about a claim can be defined as the thesis that "the only justified attitude with respect to [this claim] is suspension of judgment". It is often motivated by the impression that one cannot be certain about it. This is especially relevant when there is significant expert disagreement. Skepticism is usually restricted to a claim or a field of inquiry. So religious and moral skeptics have a doubtful attitude about religious and moral doctrines. But some forms of philosophical skepticism are wider in that they reject any form of knowledge.
Some definitions, often inspired by ancient philosophy, see skepticism not just as an attitude but as a way of life. This is based on the idea that maintaining the skeptical attitude of doubt toward most concerns in life is superior to living in dogmatic certainty, for example because such a skeptic has more happiness and peace of mind or because it is morally better. In contemporary philosophy, on the other hand, skepticism is often understood neither as an attitude nor as a way of life but as a thesis: the thesis that knowledge does not exist.
Skepticism is related to various terms. It is sometimes equated with agnosticism and relativism. However, there are slight differences in meaning. Agnosticism is often understood more narrowly as skepticism about religious questions, in particular about Christian doctrine. Relativism does not deny the existence of knowledge or truth but holds that they are relative to a person and differ from person to person, for example, because they follow different cognitive norms. The opposite of skepticism is dogmatism, which implies an attitude of certainty in the form of an unquestioning belief. A similar contrast is often drawn in relation to blind faith and credulity.
Types
Various types of skepticism have been discussed in the academic literature. Skepticism is usually restricted to knowledge claims on one particular subject, which is why its different forms can be distinguished based on the subject. For example, religious skeptics distrust religious doctrines and moral skeptics raise doubts about accepting various moral requirements and customs. Skepticism can also be applied to knowledge in general. However, this attitude is usually only found in some forms of philosophical skepticism. A closely related classification distinguishes based on the source of knowledge, such as skepticism about perception, memory, or intuition. A further distinction is based on the degree of the skeptical attitude. The strongest forms assert that there is no knowledge at all or that knowledge is impossible. Weaker forms merely state that one can never be absolutely certain.
Some theorists distinguish between a good or healthy form of moderate skepticism in contrast to a bad or unhealthy form of radical skepticism. On this view, the "good" skeptic is a critically-minded person who seeks strong evidence before accepting a position. The "bad" skeptic, on the other hand, wants to "suspend judgment indefinitely... even in the face of demonstrable truth". Another categorization focuses on the motivation for the skeptical attitude. Some skeptics have ideological motives: they want to replace inferior beliefs with better ones. Others have a more practical outlook in that they see problematic beliefs as the cause of harmful customs they wish to stop. Some skeptics have very particular goals in mind, such as bringing down a certain institution associated with the spread of claims they reject.
Philosophical skepticism is a prominent form of skepticism and can be contrasted with non-philosophical or ordinary skepticism. Ordinary skepticism involves a doubting attitude toward knowledge claims that are rejected by many. Almost everyone shows some form of ordinary skepticism, for example, by doubting the knowledge claims made by flat earthers or astrologers. Philosophical skepticism, on the other hand, is a much more radical and rare position. It includes the rejection of knowledge claims that seem certain from the perspective of common sense. Some forms of it even deny that one knows that "I have two hands" or that "the sun will come out tomorrow". It is taken seriously in philosophy nonetheless because it has proven very hard to conclusively refute philosophical skepticism.
In various fields
Skepticism has been responsible for important developments in various fields, such as science, medicine, and philosophy. In science, the skeptical attitude toward traditional opinions was a key factor in the development of the scientific method. It emphasizes the need to scrutinize knowledge claims by testing them through experimentation and precise measurement. In the field of medicine, skepticism has helped establish more advanced forms of treatment by putting into doubt traditional forms that were based on intuitive appeal rather than empirical evidence. In the history of philosophy, skepticism has often played a productive role not just for skeptics but also for non-skeptical philosophers. This is due to its critical attitude that challenges the epistemological foundations of philosophical theories. This can help to keep speculation in check and may provoke creative responses, transforming the theory in question in order to overcome the problems posed by skepticism. According to Richard H. Popkin, "the history of philosophy can be seen, in part, as a struggle with skepticism". This struggle has led many contemporary philosophers to abandon the quest for absolutely certain or indubitable first principles of philosophy, which was still prevalent in many earlier periods. Skepticism has been an important topic throughout the history of philosophy and is still widely discussed today.
Philosophy
As a philosophical school or movement, skepticism arose both in ancient Greece and India. In India the Ajñana school of philosophy espoused skepticism. It was a major early rival of Buddhism and Jainism, and possibly a major influence on Buddhism. Two of the foremost disciples of the Buddha, Sariputta and Moggallāna, were initially students of the Ajñana philosopher Sanjaya Belatthiputta. A strong element of skepticism is found in Early Buddhism, most particularly in the Aṭṭhakavagga sutra. However the total effect these philosophies had on each other is difficult to discern. Since skepticism is a philosophical attitude and a style of philosophizing rather than a position, the Ajñanins may have influenced other skeptical thinkers of India such as Nagarjuna, Jayarāśi Bhaṭṭa, and Shriharsha.
In Greece, philosophers as early as Xenophanes expressed skeptical views, as did Democritus and a number of Sophists. Gorgias, for example, reputedly argued that nothing exists, that even if there were something we could not know it, and that even if we could know it we could not communicate it. The Heraclitean philosopher Cratylus refused to discuss anything and would merely wriggle his finger, claiming that communication is impossible since meanings are constantly changing. Socrates also had skeptical tendencies, claiming to know nothing worthwhile.
There were two major schools of skepticism in the ancient Greek and Roman world. The first was Pyrrhonism, founded by Pyrrho of Elis. The second was Academic Skepticism, so-called because its two leading defenders, Arcesilaus, who initiated the philosophy, and Carneades, its most famous proponent, were heads of Plato's Academy. Pyrrhonism's aims are psychological. It urges suspension of judgment to achieve mental tranquility. The Academic Skeptics denied that knowledge is possible, but claimed that some beliefs are more reasonable or probable than others, whereas Pyrrhonian skeptics argue that equally compelling arguments can be given for or against any disputed view. Nearly all the writings of the ancient skeptics are now lost. Most of what we know about ancient skepticism is from Sextus Empiricus, a Pyrrhonian skeptic who lived in the second or third century CE. His works contain a lucid summary of stock skeptical arguments.
Ancient skepticism faded out during the late Roman Empire, particularly after Augustine attacked the skeptics in his work Against the Academics. There was little knowledge of, or interest in, ancient skepticism in Christian Europe during the Middle Ages. Interest revived during the Renaissance and Reformation, particularly after the complete writings of Sextus Empiricus were translated into Latin in 1569 and after Martin Luther's skepticism of holy orders. A number of Catholic writers, including Francisco Sanches, Michel de Montaigne (1533–1592), Pierre Gassendi (1592–1655), and Marin Mersenne (1588–1648) deployed ancient skeptical arguments to defend moderate forms of skepticism and to argue that faith, rather than reason, must be the primary guide to truth. Similar arguments were offered later (perhaps ironically) by the Protestant thinker Pierre Bayle in his influential Historical and Critical Dictionary (1697–1702).
The growing popularity of skeptical views created an intellectual crisis in seventeenth-century Europe. An influential response was offered by the French philosopher and mathematician René Descartes (1596–1650). In his classic work, Meditations on First Philosophy (1641), Descartes sought to refute skepticism, but only after he had formulated the case for skepticism as powerfully as possible. Descartes argued that no matter what radical skeptical possibilities we imagine, there are certain truths (e.g., that thinking is occurring, or that I exist) that are absolutely certain. Thus, the ancient skeptics were wrong to claim that knowledge is impossible. Descartes also attempted to refute skeptical doubts about the reliability of our senses, our memory, and other cognitive faculties. To do this, Descartes tried to prove that God exists and that God would not allow us to be systematically deceived about the nature of reality. Many contemporary philosophers question whether this second stage of Descartes's critique of skepticism is successful.
In the eighteenth century a new case for skepticism was offered by the Scottish philosopher David Hume (1711–1776). Hume was an empiricist, claiming that all genuine ideas can be traced back to original impressions of sensation or introspective consciousness. Hume argued that on empiricist grounds there are no sound reasons for belief in God, an enduring self or soul, an external world, causal necessity, objective morality, or inductive reasoning. In fact, he argued that "Philosophy would render us entirely Pyrrhonian, were not Nature too strong for it." As Hume saw it, the real basis of human belief is not reason, but custom or habit. We are hard-wired by nature to trust, say, our memories or inductive reasoning, and no skeptical arguments, however powerful, can dislodge those beliefs. In this way, Hume embraced what he called a "mitigated" skepticism, while rejecting an "excessive" Pyrrhonian skepticism that he saw as both impractical and psychologically impossible.
Hume's skepticism provoked a number of important responses. Hume's Scottish contemporary, Thomas Reid (1710–1796), challenged Hume's strict empiricism and argued that it is rational to accept "common-sense" beliefs such as the basic reliability of our senses, our reason, our memories, and inductive reasoning, even though none of these things can be proved. In Reid's view, such common-sense beliefs are foundational and require no proof in order to be rationally justified. Not long after Hume's death, the German philosopher Immanuel Kant (1724–1804) argued that human empirical experience has conditions of possibility which could not have been realized unless Hume's skeptical conclusions about causal synthetic a priori judgements were false.
Today, skepticism continues to be a topic of lively debate among philosophers. British philosopher Julian Baggini posits that reason is perceived as "an enemy of mystery and ambiguity," but, if used properly, can be an effective tool for solving many larger societal issues.
Religion
Religious skepticism generally refers to doubting particular religious beliefs or claims. For example, a religious skeptic might believe that Jesus existed (see historicity of Jesus) while questioning claims that he was the messiah or performed miracles. Historically, religious skepticism can be traced back to Xenophanes, who doubted many religious claims of his time, although he recognized that "God is one, supreme among gods and men, and not like mortals in body or in mind." He maintained that there was one greatest God: "God is one eternal being, spherical in form, comprehending all things within himself, is the absolute mind and thought, therefore is intelligent, and moves all things, but bears no resemblance to human nature either in body or mind."
Religious skepticism is not the same as atheism or agnosticism, though these often do involve skeptical attitudes toward religion and philosophical theology (for example, towards divine omnipotence). Religious people are generally skeptical about claims of other religions, at least when the two denominations conflict concerning some belief. Additionally, they may also be skeptical of the claims made by atheists.
The historian Will Durant writes that Plato was "as skeptical of atheism as of any other dogma". The Baháʼí Faith encourages skepticism that is mainly centered around self-investigation of truth.
Science
A scientific or empirical skeptic is one who questions beliefs on the basis of scientific understanding and empirical evidence.
Scientific skepticism may discard beliefs pertaining to purported phenomena not subject to reliable observation and thus not systematic or empirically testable. Most scientists, being scientific skeptics, test the reliability of certain kinds of claims by subjecting them to systematic investigation via the scientific method. As a result, a number of ostensibly scientific claims are considered to be "pseudoscience" if they are found to improperly apply or to ignore the fundamental aspects of the scientific method.
Auditing
Professional skepticism is an important concept in auditing. It requires an auditor to have a "questioning mind", to make a critical assessment of evidence, and to consider the sufficiency of the evidence.
See also
Notes
Sources
Further reading
External links
Doubt
Epistemological theories
Philosophical methodology
Philosophical schools and traditions
Psychological attitude
Scientific method
Socratic questioning
Socratic questioning (or Socratic maieutics) is an educational method named after Socrates that focuses on discovering answers by asking questions of students. According to Plato, Socrates believed that "the disciplined practice of thoughtful questioning enables the scholar/student to examine ideas and be able to determine the validity of those ideas". Plato explains how, in this method of teaching, the teacher assumes an ignorant mindset in order to compel the student to assume the highest level of knowledge. Thus, a student is expected to develop the ability to acknowledge contradictions, recreate inaccurate or unfinished ideas, and critically determine necessary thought.
Socratic questioning is a form of disciplined questioning that can be used to pursue thought in many directions and for many purposes, including: to explore complex ideas, to get to the truth of things, to open up issues and problems, to uncover assumptions, to analyze concepts, to distinguish what we know from what we do not know, to follow out logical consequences of thought or to control discussions. Socratic questioning is based on the foundation that thinking has structured logic, and allows underlying thoughts to be questioned. The key to distinguishing Socratic questioning from questioning per se is that the former is systematic, disciplined, deep and usually focuses on fundamental concepts, principles, theories, issues or problems.
Pedagogy
When teachers use Socratic questioning in teaching, their purpose may be to probe student thinking, to determine the extent of student knowledge on a given topic, issue or subject, to model Socratic questioning for students or to help students analyze a concept or line of reasoning. It is suggested that students should learn the discipline of Socratic questioning so that they begin to use it in reasoning through complex issues, in understanding and assessing the thinking of others and in following-out the implications of what they and others think. In fact, Socrates himself thought that questioning was the only defensible form of teaching.
In teaching, teachers can use Socratic questioning for at least two purposes:
To deeply probe student thinking, to help students begin to distinguish what they know or understand from what they do not know or understand (and to help them develop intellectual humility in the process).
To foster students' abilities to ask Socratic questions, to help students acquire the powerful tools of Socratic dialogue, so that they can use these tools in everyday life (in questioning themselves and others). To this end, teachers can model the questioning strategies they want students to emulate and employ. Moreover, teachers need to directly teach students how to construct and ask deep questions. Beyond that, students need practice to improve their questioning abilities.
Socratic questioning illuminates the importance of questioning in learning. This includes differentiating between systematic and fragmented thinking, while forcing individuals to understand the root of their knowledge and ideas. Educators who support the use of Socratic questioning in educational settings argue that it helps students become active and independent learners. Examples of Socratic questions used in educational settings include:
Getting students to clarify their thinking and explore the origin of their thinking
e.g., 'Why do you say that?', 'Could you explain further?'
Challenging students about assumptions
e.g., 'Is this always the case?', 'Why do you think that this assumption holds here?'
Providing evidence as a basis for arguments
e.g., 'Why do you say that?', 'Is there reason to doubt this evidence?'
Discovering alternative viewpoints and perspectives and conflicts between contentions
e.g., 'What is the counter-argument?', 'Can/did anyone see this another way?'
Exploring implications and consequences
e.g., 'But if...happened, what else would result?', 'How does...affect...?'
Questioning the question
e.g., 'Why do you think that I asked that question?', 'Why was that question important?', 'Which of your questions turned out to be the most useful?'
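The question types listed above lend themselves to a simple lookup structure; for instance, a tutoring aid might store them as a table and draw an example at random. The following Python sketch is purely illustrative: the category keys, the `SOCRATIC_QUESTIONS` table, and the `pick_question` helper are all invented for this example, not part of any established curriculum or library.

```python
import random

# Hypothetical encoding of the six question categories described above.
SOCRATIC_QUESTIONS = {
    "clarification": ["Why do you say that?", "Could you explain further?"],
    "assumptions": ["Is this always the case?",
                    "Why do you think this assumption holds here?"],
    "evidence": ["Is there reason to doubt this evidence?"],
    "viewpoints": ["What is the counter-argument?",
                   "Can anyone see this another way?"],
    "implications": ["But if that happened, what else would result?"],
    "questioning the question": ["Why do you think I asked that question?",
                                 "Why was that question important?"],
}

def pick_question(category, rng=random):
    """Return one example question from the given category."""
    return rng.choice(SOCRATIC_QUESTIONS[category])

print(pick_question("clarification"))
```

A real tutoring tool would of course generate questions tied to the student's actual statements rather than sampling canned prompts; the table only makes the taxonomy concrete.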
Socratic questioning and critical thinking
The art of Socratic questioning is intimately connected with critical thinking because the art of questioning is important to excellence of thought. Socrates argued for the necessity of probing individual knowledge, and acknowledging what one may not know or understand. Critical thinking has the goal of reflective thinking that focuses on what should be believed or done about a topic. Socratic questioning adds another level of thought to critical thinking, by focusing on extracting depth and interest, and on assessing the truth or plausibility of thought. Socrates argued that a lack of knowledge is not bad, but that students must strive to make known what they do not know through critical thinking.
Critical thinking and Socratic questioning both seek meaning and truth. Critical thinking provides the rational tools to monitor, assess, and perhaps reconstitute or re-direct our thinking and action. This is what educational reformer John Dewey described as reflective inquiry: "in which the thinker turns a subject over in the mind, giving it serious and consecutive consideration." Socratic questioning is an explicit focus on framing self-directed, disciplined questions to achieve that goal.
The technique of questioning or leading discussion is spontaneous, exploratory, and issue-specific. The Socratic educator listens to the viewpoints of the student and considers the alternative points of view. It is necessary to teach students to sift through all the information, form a connection to prior knowledge, and transform the data into new knowledge in a thoughtful way. Some qualitative research shows that the use of Socratic questioning within a traditional Yeshiva education setting helps students succeed in law school, although it remains an open question as to whether that relationship is causal or merely correlative.
It has been proposed in different studies that the "level of thinking that occurs is influenced by the level of questions asked". Thus, drawing on what students do not yet know stimulates their ability to ask more complex questions. This requires educators to create conducive learning environments that promote and value the role of critical thinking, mobilising students' ability to form complex thoughts and questions.
Psychology
Socratic questioning has also been used in psychotherapy, most notably as a cognitive restructuring technique in classical Adlerian psychotherapy, logotherapy, rational emotive behavior therapy, cognitive therapy, and logic-based therapy. The purpose is to help uncover the assumptions and evidence that underpin people's thoughts with respect to their problems. A set of Socratic questions in cognitive therapy aims to deal with automatic thoughts that distress the patient:
Revealing the issue: 'What evidence supports this idea? And what evidence is against its being true?'
Conceiving reasonable alternatives: 'What might be another explanation or viewpoint of the situation? Why else did it happen?'
Examining various potential consequences: 'What are worst, best, bearable and most realistic outcomes?'
Evaluate those consequences: 'What's the effect of thinking or believing this? What could be the effect of thinking differently and no longer holding onto this belief?'
Distancing: 'Imagine a specific friend/family member in the same situation or if they viewed the situation this way, what would I tell them?'
Careful use of Socratic questioning enables a therapist to challenge recurring or isolated instances of a person's illogical thinking while maintaining an open position that respects the internal logic to even the most seemingly illogical thoughts.
See also
Argument map
Argumentation theory
Cross-examination
Inquiry
Intellectual virtue
Interrogation
Issue map
Socratic method
References
Questioning
Learning
Problem solving methods
Educational psychology
School qualifications
Education reform
Critical thinking skills
Philosophical methodology
Legal reasoning
Coherentism
In philosophical epistemology, there are two types of coherentism: the coherence theory of truth, and the coherence theory of justification (also known as epistemic coherentism).
Coherent truth is divided between an anthropological approach, which applies only to localized networks ('true within a given sample of a population, given our understanding of the population'), and an approach that is judged on the basis of universals, such as categorical sets. The anthropological approach belongs more properly to the correspondence theory of truth, while the universal theories are a small development within analytic philosophy.
The coherentist theory of justification, which may be interpreted as relating to either theory of coherent truth, characterizes epistemic justification as a property of a belief only if that belief is a member of a coherent set. What distinguishes coherentism from other theories of justification is that the set is the primary bearer of justification.
As an epistemological theory, coherentism opposes dogmatic foundationalism and also infinitism through its insistence on definitions. It also attempts to offer a solution to the regress argument that plagues correspondence theory. In an epistemological sense, it is a theory about how belief can be proof-theoretically justified.
Coherentism is a view about the structure and system of knowledge, or else justified belief. The coherentist's thesis is normally formulated in terms of a denial of its contrary, such as dogmatic foundationalism, which lacks a proof-theoretical framework, or correspondence theory, which lacks universalism. Counterfactualism, through a vocabulary developed by David K. Lewis and his many worlds theory, although popular with philosophers, has had the effect of creating wide disbelief in universals amongst academics. Many difficulties lie in between hypothetical coherence and its effective actualization. Coherentism claims, at a minimum, that not all knowledge and justified belief rest ultimately on a foundation of noninferential knowledge or justified belief. To defend this view, coherentists may argue that conjunctions (and) are more specific, and thus in some way more defensible, than disjunctions (or).
After responding to foundationalism, coherentists normally characterize their view positively by replacing the foundationalism metaphor of a building as a model for the structure of knowledge with different metaphors, such as the metaphor that models our knowledge on a ship at sea whose seaworthiness must be ensured by repairs to any part in need of it. This metaphor fulfills the purpose of explaining the problem of incoherence, which was first raised in mathematics. Coherentists typically hold that justification is solely a function of some relationship between beliefs, none of which are privileged beliefs in the way maintained by dogmatic foundationalists. In this way universal truths are in closer reach. Different varieties of coherentism are individuated by the specific relationship between a system of knowledge and justified belief, which can be interpreted in terms of predicate logic, or ideally, proof theory.
Definition
As a theory of truth, coherentism restricts true sentences to those that cohere with some specified set of sentences. Someone's belief is true if and only if it is coherent with all or most of his or her other (true) beliefs. The terminology of coherence is then said to correlate with truth via some concept of what qualifies all truth, such as absoluteness or universalism. These further terms become the qualifiers of what is meant by a truth statement, and the truth-statements then decide what is meant by a true belief. Usually, coherence is taken to imply something stronger than mere consistency. Statements that are comprehensive and meet the requirements of Occam's razor are usually to be preferred.
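One minimal formal stand-in for part of this idea is joint satisfiability: a set of beliefs passes the weaker consistency test if some assignment of truth values to the underlying atoms makes every belief true. The sketch below illustrates only that consistency component, since, as noted above, coherence is usually taken to imply something stronger than mere consistency, and every name in it is invented for the example.

```python
from itertools import product

def jointly_satisfiable(beliefs, atoms):
    """Toy consistency check: True if some truth assignment to `atoms`
    makes every belief (a predicate over a world-dict) come out true."""
    return any(
        all(belief(dict(zip(atoms, values))) for belief in beliefs)
        for values in product([True, False], repeat=len(atoms))
    )

atoms = ["rain", "wet"]
beliefs = [
    lambda w: not w["rain"] or w["wet"],  # "if it rains, the ground is wet"
    lambda w: w["rain"],                  # "it is raining"
]
print(jointly_satisfiable(beliefs, atoms))                             # True
print(jointly_satisfiable(beliefs + [lambda w: not w["wet"]], atoms))  # False
```

Adding "the ground is not wet" to the first two beliefs makes the set unsatisfiable, which is the sense in which a new belief can fail to cohere with the rest.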
As an illustration of the principle, if people lived in a virtual reality universe, they could see birds in the trees that aren't really there. Not only are the birds not really there, but the trees aren't really there either. The people may or may not know that the bird and the tree are there, but in either case there is a coherence between the virtual world and the real one, expressed in terms of true beliefs within available experience. Coherence is a way of explicating truth values while circumventing beliefs that might be false in any way. More traditional critics from the correspondence theory of truth have said that it cannot have contents and proofs at the same time, unless the contents are infinite, or unless the contents somehow exist in the form of proof. Such a form of 'existing proof' might seem ridiculous, but coherentists tend to think it is non-problematic. It therefore falls into a group of theories that are sometimes deemed excessively generalistic, what Gábor Forrai calls 'blob realism'.
Perhaps the best-known objection to a coherence theory of truth is Bertrand Russell's argument concerning contradiction. Russell maintained that a belief and its negation will each separately cohere with one complete set of all beliefs, thus making it internally inconsistent. For example, if someone holds a belief that is false, how might we determine whether the belief refers to something real although it is false, or whether instead the right belief is true although it is not believed? Coherence must thus rely on a theory that is either non-contradictory or accepts some limited degree of incoherence, such as relativism or paradox. Additional necessary criteria for coherence may include universalism or absoluteness, suggesting that the theory remains anthropological or incoherent when it does not use the concept of infinity. A coherentist might argue that this scenario applies regardless of the theories being considered, and so, that coherentism must be the preferred truth-theoretical framework in avoiding relativism.
History
In modern philosophy, the coherence theory of truth was defended by Baruch Spinoza, Immanuel Kant, Johann Gottlieb Fichte, Karl Wilhelm Friedrich Schlegel, Georg Wilhelm Friedrich Hegel, and Harold Henry Joachim (who is credited with the definitive formulation of the theory). However, Spinoza and Kant have also been interpreted as defenders of the correspondence theory of truth.
In late modern philosophy, epistemic coherentist views were held by Schlegel and Hegel, but the definitive formulation of the coherence theory of justification was provided by F. H. Bradley in his book The Principles of Logic (1883).
In contemporary philosophy, epistemologists who have significantly contributed to epistemic coherentism include: A. C. Ewing, Brand Blanshard, C. I. Lewis, Nicholas Rescher, Laurence BonJour, Keith Lehrer, and Paul Thagard. Otto Neurath is also sometimes thought to be an epistemic coherentist.
The regress argument
Both coherence and foundationalist theories of justification attempt to answer the regress argument, a fundamental problem in epistemology that goes as follows. Given some statement P, it appears reasonable to ask for a justification for P. If that justification takes the form of another statement, P', one can again reasonably ask for a justification for P', and so forth. There are three possible outcomes to this questioning process:
the series is infinitely long, with every statement justified by some other statement.
the series forms a loop, so that each statement is ultimately involved in its own justification.
the series terminates with certain statements having to be self-justifying.
An infinite series appears to offer little help, unless a way is found to model infinite sets. This might entail additional assumptions. Otherwise, it is impossible to check that each justification is satisfactory without making broad generalizations.
Coherentism is sometimes characterized as accepting that the series forms a loop, but although this would produce a form of coherentism, this is not what is generally meant by the term. Those who do accept the loop theory sometimes argue that the body of assumptions used to prove the theory is not what is at question in considering a loop of premises. This would serve the typical purpose of circumventing the reliance on a regression, but might be considered a form of logical foundationalism. But otherwise, it must be assumed that a loop begs the question, meaning that it does not provide sufficient logic to constitute proof.
Foundationalism's response
One might conclude that there must be some statements that, for some reason, do not need justification. This view is called foundationalism. For instance, rationalists such as Descartes and Spinoza developed axiomatic systems that relied on statements that were taken to be self-evident: "I think therefore I am" is the most famous example. Similarly, empiricists take observations as providing the foundation for the series.
Foundationalism relies on the claim that it is not necessary to ask for justification of certain propositions, or that they are self-justifying. Coherentists argue that this position is overly dogmatic. In other words, it does not provide real criteria for determining what is true and what is not. The coherentist analytic project then involves justifying what is meant by adequate criteria for non-dogmatic truth. As an offshoot of this, the theory insists that it is always reasonable to ask for a justification for any statement. For example, if someone makes an observational statement, such as "it is raining", the coherentist contends that it is reasonable to ask, for example, whether this mere statement refers to anything real. What is real about the statement, it turns out, is the extended pattern of relations that we call justifications. But, unlike the relativist, the coherentist argues that these associations may be objectively real. Coherentism contends that dogmatic foundationalism does not provide the whole set of pure relations that might result in actually understanding the objective context of phenomena, because dogmatic assumptions are not proof-theoretic, and therefore remain incoherent or relativistic. Coherentists therefore argue that the only way to reach proof-theoretic truth that is not relativistic is through coherence.
Coherentism's response
Coherentism rejects the soundness of the regression argument, which assumes that the justification for a proposition follows a linear sequence: P" justifies P', which in turn justifies P. According to coherentism, justification is a holistic process. Inferential justification for the belief that P is nonlinear, meaning that P" and P' are not epistemically prior to P. Instead, the beliefs P", P', and P work together to achieve epistemic justification. Catherine Elgin has expressed the same point differently, arguing that beliefs must be "mutually consistent, cotenable, and supportive. That is, the components must be reasonable in light of one another. Since both cotenability and supportiveness are matters of degree, coherence is too." Usually the system of belief is taken to be the complete set of beliefs of the individual or group, that is, their theory of the world.
It is necessary for coherentism to explain in some detail what it means for a system to be coherent. At the least, coherence must include logical consistency. It also usually requires some degree of integration of the various components of the system. A system that contains more than one unrelated explanation of the same phenomenon is not as coherent as one that uses only one explanation, all other things being equal. Conversely, a theory that explains divergent phenomena using unrelated explanations is not as coherent as one that uses only one explanation for those divergent phenomena. These requirements are variations on Occam's razor. The same points can be made more formally using Bayesian statistics. Finally, the greater the number of phenomena explained by the system, the greater its coherence.
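The appeal to Bayesian statistics can be made concrete with a toy calculation. One well-known formal proposal, Shogenji's coherence measure, scores a set of beliefs by how much more probable they are jointly than they would be if they were probabilistically independent. The probability space and propositions below are invented purely for illustration; this is a minimal sketch, not a full account of coherence.

```python
def probability(prior, event):
    """P(event): sum the prior over the possible worlds where the event holds."""
    return sum(p for world, p in prior.items() if world in event)

def shogenji_coherence(prior, beliefs):
    """Shogenji's measure: P(B1 & ... & Bn) / (P(B1) * ... * P(Bn)).
    Values above 1 mean the beliefs raise each other's probability;
    values below 1 mean they undermine one another."""
    joint = set.intersection(*beliefs)
    denominator = 1.0
    for b in beliefs:
        denominator *= probability(prior, b)
    return probability(prior, joint) / denominator

# Four possible worlds with invented probabilities.
prior = {"w1": 0.4, "w2": 0.1, "w3": 0.1, "w4": 0.4}
raining = {"w1", "w2"}       # worlds where "it is raining" is true
streets_wet = {"w1", "w3"}   # worlds where "the streets are wet" is true

# P(rain) = P(wet) = 0.5 and P(rain & wet) = 0.4, so the measure is
# 0.4 / 0.25 = 1.6: the two beliefs hang together better than chance.
print(shogenji_coherence(prior, [raining, streets_wet]))
```

On a uniform prior over the same four worlds the measure drops to 1.0 (independence), matching the intuition above that a system whose components support one another is more coherent, all else being equal.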
Problems for coherentism
A problem coherentism has to face is the plurality objection. There is nothing within the definition of coherence that makes it impossible for two entirely different sets of beliefs to be internally coherent. Thus there might be several such sets. But if one supposes—in line with the principle of non-contradiction—that there can only be one complete set of truths, coherentism must therefore resolve internally that these systems are not contradictory, by establishing what is meant by truth. At this point, coherentism could be faulted for adopting its own variation of dogmatic foundationalism by arbitrarily selecting truth values. Coherentists must argue that their truth values are not arbitrary for provable reasons.
A second objection also emerges, the finite problem: that arbitrary, ad hoc relativism could reduce statements of relatively insignificant value to non-entities during the process of establishing universalism or absoluteness. This might result in a totally flat truth-theoretic framework, or even arbitrary truth values. Coherentists generally solve this by adopting a metaphysical condition of universalism, sometimes leading to materialism, or by arguing that relativism is trivial.
A third objection that coherentism faces is the problem of isolation. Intuitively, one might think that the justification of an empirical belief must depend on some connection between the believed proposition and the way the world is. For example, a belief that 'snow is white' must in some way connect to the fact that snow really is white in the external world. Such a connection could be found in how the agent in question has experiences of the world being this way. However, if coherence is sufficient for justification and coherence is only a property of sets of beliefs, hence ruling out any such connection through experience, then it seems that coherentism would allow for the justification of empirical beliefs in isolation from the external world. Coherentists have a variety of responses to this. One strategy is to argue that no set of beliefs held by an agent would remain coherent over time if it was isolated from the external world in this way. Another approach argues that coherentism should be modified such that empirical beliefs can only be justified if the relevant set includes beliefs and experiences, and hence no belief can be justified without involving experiences about the world. This latter position is known as non-doxastic coherentism.
Metaphysics poses another problem: the stowaway argument, which might carry epistemological implications. However, a coherentist might say that if the truth conditions of the logic hold, then there will be no problem regardless of any additional conditions that happen to be true. Thus, the stress is on making the theory valid within the set, and also verifiable.
A number of philosophers have raised concerns over the link between intuitive notions of coherence that form the foundation of epistemic forms of coherentism and some formal results in Bayesian probability. This is an issue raised by Luc Bovens and Stephen Hartmann and by Erik J. Olsson in the form of 'impossibility' theorems. These theorems aim to give a formal proof that there is no way to formalise the notion of coherence such that the coherence of a set of beliefs always increases the probability of the joint truth of the beliefs. Attempts have been made to construct a theoretical account of coherentist intuitions. Importantly, epistemologist Luca Moretti and mathematical economist Franz Dietrich have given a formal proof that in certain cases the coherence of a set of beliefs transmits incremental confirmation (i.e. if some evidence confirms a given belief, and this belief is sufficiently coherent with other beliefs, then the evidence also confirms these other beliefs).
See also
Epistemological theories
Foundherentism
Bayesian epistemology
Related ideas
Web of belief
Theories of truth
Consensus theory of truth
Correspondence theory of truth
Deflationary theory of truth
Epistemic theories of truth
Indefinability theory of truth
Pragmatic theory of truth
Redundancy theory of truth
Semantic theory of truth
References
Bibliography
Rescher, Nicholas. The Coherence Theory of Truth. Oxford University Press, 1973.
Philosophy and economics
Philosophy and economics studies topics such as public economics, behavioural economics, rationality, justice, history of economic thought, rational choice, the appraisal of economic outcomes, institutions and processes, the status of highly idealized economic models, the ontology of economic phenomena, and the possibilities of acquiring knowledge of them.
It is useful to divide the philosophy of economics into three subject matters, which can be regarded respectively as branches of action theory, ethics (or normative social and political philosophy), and philosophy of science. Economic theories of rationality, welfare, and social choice defend substantive philosophical theses, often informed by the relevant philosophical literature and of evident interest to those interested in action theory, philosophical psychology, and social and political philosophy.
Economics is of special interest to those interested in epistemology and philosophy of science both because of its detailed peculiarities and because it has many of the overt features of the natural sciences, while its object consists of social phenomena. In any empirical setting, the epistemic assumptions of financial economics (and related applied financial disciplines) are relevant, and are further discussed under the Epistemology of finance.
Scope
Definition and ontology of economics
The question usually addressed in any subfield of philosophy (the philosophy of X) is "what is X?". A philosophical approach to the question "what is economics?" is less likely to produce an answer than it is to produce a survey of the definitional and territorial difficulties and controversies. Similar considerations apply as a prologue to further discussion of methodology in a subject. Definitions of economics have varied over time from the modern origins of the subject, reflecting programmatic concerns and distinctions of expositors.
Ontological questions continue with further "what is..." questions addressed at fundamental economic phenomena, such as "what is (economic) value?" or "what is a market?". While it is possible to respond to such questions with real verbal definitions, the philosophical value of posing such questions actually aims at shifting entire perspectives as to the nature of the foundations of economics. In the rare cases that attempts at ontological shifts gain wide acceptance, their ripple effects can spread throughout the entire field of economics.
Methodology and epistemology of economics
Epistemology deals with how we know things. In the philosophy of economics this means asking questions such as: what kind of "truth claim" is made by economic theories – for example, are we claiming that the theories relate to reality or to perceptions? How can or should we prove economic theories – for example, must every economic theory be empirically verifiable? How exact are economic theories, and can they lay claim to the status of an exact science – for example, are economic predictions as reliable as predictions in the natural sciences, and why or why not? Another way of expressing this issue is to ask whether economic theories can state "laws". Philosophers of science and economists have explored these issues intensively since the work of Alexander Rosenberg and Daniel M. Hausman some three decades ago.
Rational choice, decision theory and game theory
Philosophical approaches in decision theory focus on foundational concepts in decision theory – for example, on the natures of choice or preference, rationality, risk and uncertainty, and economic agents.
Game theory is shared between a number of disciplines, but especially mathematics, economics and philosophy. Game theory is still extensively discussed within the field of the philosophy of economics. Game theory is closely related to and builds on decision theory and is likewise very strongly interdisciplinary.
Ethics and justice
The ethics of economic systems deals with the issues such as how it is right (just, fair) to keep or distribute economic goods. Economic systems as a product of collective activity allow examination of their ethical consequences for all of their participants. Ethics and economics relates ethical studies to welfare economics. It has been argued that a closer relation between welfare economics and modern ethical studies may enrich both areas, even including predictive and descriptive economics as to rationality of behaviour, given social interdependence.
Ethics and justice overlap disciplines in different ways. Approaches are regarded as more philosophical when they study the fundamentals – for example, John Rawls' A Theory of Justice (1971) and Robert Nozick's Anarchy, State and Utopia (1974). 'Justice' in economics is a subcategory of welfare economics, with models frequently representing the ethical-social requirements of a given theory. "Practical" matters include such subjects as law and cost–benefit analysis.
Utilitarianism, one of the ethical methodologies, has its origins inextricably interwoven with the emergence of modern economic thought. Today utilitarianism has spread throughout applied ethics as one of a number of approaches. Non-utilitarian approaches in applied ethics are also now used when questioning the ethics of economic systems – e.g. rights-based (deontological) approaches.
Many political ideologies have been an immediate outgrowth of reflection on the ethics of economic systems. Marx, for example, is generally regarded primarily as a philosopher, his most notable work being on the philosophy of economics. However, Marx's economic critique of capitalism did not depend on ethics, justice, or any form of morality, instead focusing on the inherent contradictions of capitalism through the lens of a process which is today called dialectical materialism.
Non-mainstream economic thinking
The philosophy of economics defines itself as including the questioning of the foundations or assumptions of economics. The foundations and assumptions of economics have been questioned from the perspective of noteworthy but typically under-represented groups. These areas are therefore to be included within the philosophy of economics.
Praxeology: a deductive theory of human action based on premises presumed to be philosophically true (following the analytic–synthetic distinction of Immanuel Kant). Developed by Ludwig von Mises within the Austrian School, it stands in self-conscious opposition to the mathematical modeling and hypothesis-testing used to validate neoclassical economics.
Cross-cultural perspectives on economics, and economic anthropology: an example is the Buddhist-inspired Bhutanese "Gross National Happiness" concept (suggested as a better development measure than GNI/GDP). Amartya Sen is a renowned advocate for the integration of cross-cultural phenomena into economic thinking.
Feminist perspectives on economics, or feminist economics.
Scholars cited in the literature
Aristotle
Kenneth Arrow
Roger E. Backhouse
Ken Binmore
Kevin Carson
Milton Friedman
Frank Hahn
Friedrich Hayek
Martin Hollis
Daniel M. Hausman
Terence Wilmot Hutchison
David Hume
John Neville Keynes
John Maynard Keynes
Tony Lawson
John Locke
Uskali Mäki
Thomas Robert Malthus
Karl Marx
John Stuart Mill
Ludwig von Mises
Pierre-Joseph Proudhon
John E. Roemer
Murray Rothbard
John Rawls
Lionel Robbins
Joan Robinson
Alexander Rosenberg
Paul Samuelson
E. F. Schumacher
Amartya Sen
Brian Skyrms
Adam Smith
Max Weber
Carl Menger
Bernard Williams
Related disciplines
The ethics of economic systems is an area of overlap between business ethics and the philosophy of economics. People who write on the ethics of economic systems are more likely to call themselves political philosophers than business ethicists or economic philosophers. There is significant overlap between theoretical issues in economics and the philosophy of economics. As economics is generally accepted to have its origins in philosophy, the history of economics overlaps with the philosophy of economics.
Degrees
Some universities offer joint degrees that combine philosophy, politics and economics. These degrees cover many of the problems that are discussed in philosophy and economics, but are more broadly construed. A small number of universities, notably the London School of Economics, the University of Edinburgh, the Erasmus University Rotterdam, Copenhagen Business School, the University of Vienna, the University of Bayreuth, and the University of Hamburg, offer master's degree programs specialized in philosophy, politics and economics.
Journals
Economics and Philosophy
Erasmus Journal for Philosophy and Economics
Journal of Economic Methodology
Philosophy and Public Affairs
Politics, Philosophy & Economics
See also
Analytic philosophy
Critique of political economy
Epistemology of finance
Philosophy of science
Schools of economic thought
History of economic thought
Teoría de Precios: Porqué está mal la Economía ("Price Theory: Why Economics Is Wrong"), a 2010 textbook
References
Further reading
Boulding, Kenneth E. (1969). "Economics as a Moral Science," American Economic Review, 59(1), pp. 1-12.
Caldwell, Bruce (1987). "positivism," The New Palgrave: A Dictionary of Economics, v.3, pp. 921–23.
Downie, R.S. (1987). "moral philosophy," The New Palgrave: A Dictionary of Economics, v. 3, pp. 551–56.
Hands, D. Wade, ed. (1993). The Philosophy and Methodology of Economics, Edward Elgar. 3 v.
Davis, John B., Alain Marciano, and Jochen Runde, eds. (2004). The Elgar Companion to Economics and Philosophy. Articles from 1925 & 1940–1991.
Hausman, Daniel M. (1992). Essays on Philosophy and Economic Methodology.
Hausman, Daniel M., ed. ([1984] 2008). The Philosophy of Economics: An Anthology, 3rd ed. Cambridge. From John Stuart Mill on.
Heilbroner, Robert L. ([1953] 1999). The Worldly Philosophers: The Lives, Times, and Ideas of the Great Economic Thinkers, 7th ed.
Hodgson, Bernard (2001). Economics as Moral Science.
Peil, Jan, and Irene van Staveren, eds. (2009). Handbook of Economics and Ethics, Edward Elgar.
Putnam, Hilary (1993). "The Collapse of the Fact/Value Dichotomy," in Martha Nussbaum and Amartya Sen, eds., The Quality of Life, pp. 143–157. Oxford. Reprinted in Putnam (2002), Part I, pp. 5–64.
Putnam, Hilary (2002). The Collapse of the Fact/Value Dichotomy and Other Essays.
Robinson, Joan (1962). Economic Philosophy.
Rubinstein, Ariel (2006). "Dilemmas of an Economic Theorist," Econometrica, 74(4), pp. 865–883.
Szenberg, Michael, ed. (1992). Eminent Economists: Their Life Philosophies, Cambridge.
Walsh, Vivian (1961). Scarcity and Evil: An Original Exploration of Moral Issues on the Frontier Between Guilt and Tragedy. Prentice-Hall.
Walsh, Vivian (1987). "philosophy and economics," The New Palgrave: A Dictionary of Economics, v. 3, pp. 861–869.
Walsh, Vivian (1996). Rationality, Allocation, and Reproduction. Cambridge.
External links
Philosophy of Economics (Daniel Little's entry in the Routledge Encyclopedia of the Philosophy of Science)
Philosophy of Economics (Stanford Encyclopedia of Philosophy), by Daniel M. Hausman, a leading contributor to the field.
Political philosophy
Political philosophy, or political theory, is the philosophical study of government, addressing questions about the nature, scope, and legitimacy of public agents and institutions and the relationships between them. Its topics include politics, justice, liberty, property, rights, law, and authority: what they are, if they are needed, what makes a government legitimate, what rights and freedoms it should protect, what form it should take, what the law is, what duties citizens owe to a legitimate government, if any, and when it may be legitimately overthrown, if ever.
Political theory also engages questions of a broader scope, tackling the political nature of phenomena and categories such as identity, culture, sexuality, race, wealth, human-nonhuman relations, ethics, religion, and more.
Political philosophy is a branch of philosophy, but it has also played a major part in political science, within which a strong focus has historically been placed on both the history of political thought and contemporary political theory (from normative political theory to various critical approaches).
Purpose
In the Oxford Handbook of Political Theory (2009), the field is described as: "[...] an interdisciplinary endeavor whose center of gravity lies at the humanities end of the happily still undisciplined discipline of political science ... For a long time, the challenge for the identity of political theory has been how to position itself productively in three sorts of location: in relation to the academic disciplines of political science, history, and philosophy; between the world of politics and the more abstract, ruminative register of theory; between canonical political theory and the newer resources (such as feminist and critical theory, discourse analysis, film and film theory, popular and political culture, mass media studies, neuroscience, environmental studies, behavioral science, and economics) on which political theorists increasingly draw."
In a 1956 American Political Science Review report, Harry Eckstein argued that political philosophy as a discipline has utility in two ways: the best of past political thought can sharpen the wits of contemporary political thinkers, much as any difficult intellectual exercise sharpens the mind and deepens the imagination; and political philosophy can serve as a thought-saving device by providing the political scientist with a rich source of concepts, models, insights, theories, and methods.
In his 2001 book A Student's Guide to Political Philosophy, Harvey Mansfield contrasts political philosophy with political science. He argues that political science "apes" the natural sciences and is a rival to political philosophy, replacing normative words like "good", "just", and "noble" with words like "utility" or "preferences". According to Mansfield, political science rebelled from political philosophy in the seventeenth century and declared itself distinct and separate in the positivist movement of the late nineteenth century. He writes:
According to Mansfield, political science and political philosophy are two distinct kinds of political philosophy, one modern and the other ancient. He stresses that the only way to understand modern political science and its ancient alternative fully is to enter the history of political philosophy and to study the tradition handed down over the centuries. Although modern political science feels no obligation to look at its roots, and might even denigrate the subject as if it could not be of any real significance, he says, "our reasoning shows that the history of political philosophy is required for understanding its substance".
History
Ancient traditions
Ancient India
Indian political philosophy in ancient times demarcated a clear distinction between (1) nation and state and (2) religion and state. The constitutions of Hindu states evolved over time and were based on political and legal treatises and prevalent social institutions. The institutions of state were broadly divided into governance, diplomacy, administration, defense, and law and order. Mantranga, the principal governing body of these states, consisted of the King, the Prime Minister, the Commander-in-Chief of the army, and the Chief Priest of the King. The Prime Minister headed the committee of ministers along with the head of the executive (Maha Amatya).
Chanakya was a 4th-century BC Indian political philosopher. The Arthashastra provides an account of the science of politics for a wise ruler, policies for foreign affairs and wars, the system of a spy state and surveillance and economic stability of the state. Chanakya quotes several authorities including Bruhaspati, Ushanas, Prachetasa Manu, Parasara, and Ambi, and described himself as a descendant of a lineage of political philosophers, with his father Chanaka being his immediate predecessor. Another influential extant Indian treatise on political philosophy is the Sukra Neeti. An example of a code of law in ancient India is the Manusmṛti or Laws of Manu.
Ancient China
Chinese political philosophy dates back to the Spring and Autumn period, specifically with Confucius in the 6th century BC. Chinese political philosophy was developed as a response to the social and political breakdown of the country characteristic of the Spring and Autumn period and the Warring States period. The major philosophies during the period, Confucianism, Legalism, Mohism, Agrarianism and Taoism, each had a political aspect to their philosophical schools. Philosophers such as Confucius, Mencius, and Mozi, focused on political unity and political stability as the basis of their political philosophies. Confucianism advocated a hierarchical, meritocratic government based on empathy, loyalty, and interpersonal relationships. Legalism advocated a highly authoritarian government. Mohism advocated a communal, decentralized government centered on frugality and asceticism. The Agrarians advocated a peasant utopian communalism and egalitarianism. Taoism advocated a proto-anarchism. Legalism was the dominant political philosophy of the Qin dynasty, but was replaced by State Confucianism in the Han dynasty. Each had religious or mythic aspects as well that played into how they viewed fairness in governance.
State Confucianism remained the dominant political philosophy of China until the country's adoption of communism in the 20th century.
Ancient Greece
Western political philosophy originates in the philosophy of ancient Greece, where political philosophy dates back to at least Plato. Ancient Greece was dominated by city-states, which experimented with various forms of political organization. Plato grouped forms of government into five categories of descending stability and morality: republic, timocracy, oligarchy, democracy, and tyranny. One of the first and most important classical works of political philosophy is Plato's Republic, which was followed by Aristotle's Nicomachean Ethics and Politics. Roman political philosophy was influenced by the Stoics and the Roman statesman Cicero.
Medieval Europe
Medieval political philosophy in Europe was heavily influenced by Christian thinking. It had much in common with Mutazilite Islamic thinking in that the Roman Catholics, though subordinating philosophy to theology, did not subject reason to revelation but, in cases of contradiction, subordinated reason to faith, as did the Asharites of Islam. The Scholastics, by combining the philosophy of Aristotle with the Christianity of St. Augustine, emphasized the potential harmony inherent in reason and revelation. Perhaps the most influential political philosopher of medieval Europe was St. Thomas Aquinas, who helped reintroduce Aristotle's works, which had been transmitted to Catholic Europe only through Muslim Spain, along with the commentaries of Averroes. Aquinas's use of them set the agenda for scholastic political philosophy, which dominated European thought for centuries, even into the Renaissance.
Some medieval political philosophers, such as Aquinas in his Summa Theologica, developed the idea that a king who is a tyrant is no king at all and could be overthrown. Others, like Nicole Oresme in his Livre de Politiques, categorically denied this right to overthrow an unjust ruler. Magna Carta, viewed by many as a cornerstone of Anglo-American political liberty, explicitly proposes the right to revolt against the ruler for justice's sake. Other documents similar to Magna Carta are found in other European countries such as Spain and Hungary.
Saint Augustine
The early Christian philosophy of Augustine of Hippo was heavily influenced by Plato. A key change brought about by Christian thought was the moderation of the Stoicism and theory of justice of the Roman world, as well as an emphasis on the role of the state in applying mercy as a moral example. Augustine also preached that one was not a member of his or her city, but was either a citizen of the City of God (Civitas Dei) or the Earthly City (Civitas Terrena). Augustine's City of God is an influential work of this period that attacked the thesis, held by many Christian Romans, that the Christian view could be realized on Earth.
St. Thomas Aquinas
Thomas Aquinas meticulously dealt with the varieties of philosophy of law. According to Aquinas, there are four kinds of law:
Eternal law ("the divine government of everything")
Divine positive law (having been "posited" by God; external to human nature)
Natural law (the right way of living discoverable by natural reason; what cannot-not be known; internal to human nature)
Human law (what we commonly call "law"—including customary law; the law of the Communitas Perfecta)
Aquinas never discusses the nature or categorization of canon law, and there is scholarly debate surrounding its place within the Thomistic jurisprudential framework. Aquinas was a highly influential thinker in the Natural Law tradition.
Islamic Political Evolution
Mutazilite vs. Asharite
The rise of Islam, based on both the Qur'an and Muhammad strongly altered the power balances and perceptions of origin of power in the Mediterranean region. Early Islamic philosophy emphasized an inexorable link between science and religion, and the process of ijtihad to find truth—in effect all philosophy was "political" as it had real implications for governance. This view was challenged by the "rationalist" Mutazilite philosophers, who held a more Hellenic view, reason above revelation, and as such are known to modern scholars as the first speculative theologians of Islam; they were supported by a secular aristocracy who sought freedom of action independent of the Caliphate. By the late ancient period, however, the "traditionalist" Asharite view of Islam had in general triumphed. According to the Asharites, reason must be subordinate to the Quran and the Sunna.
Islamic political philosophy was, indeed, rooted in the very sources of Islam—i.e., the Qur'an and the Sunnah, the words and practices of Muhammad—thus making it essentially theocratic. However, in Western thought, it is generally supposed that it was a specific area peculiar merely to the great philosophers of Islam: al-Kindi (Alkindus), al-Farabi (Abunaser), İbn Sina (Avicenna), Ibn Bajjah (Avempace), and Ibn Rushd (Averroes). The political conceptions of Islam such as kudrah (power), sultan, ummah, cemaa (obligation)—and even the "core" terms of the Qur'an, i.e., ibadah (worship), din (religion), rab (master), and ilah (deity)—are taken as the basis of an analysis. Hence, not only the Muslim political philosophers but also many other jurists and ulama posed political ideas and theories. For example, the ideas of the Khawarij in the very early years of Islamic history on Khilafa and Ummah, or those of Shia Islam on the concept of Imamah, are considered proofs of political thought. The clashes between the Ehl-i Sunna and Shia in the 7th and 8th centuries had a genuine political character. Political thought was not purely rooted in theism, however. Aristotelianism flourished as the Islamic Golden Age gave rise to a continuation of the peripatetic philosophers who implemented the ideas of Aristotle in the context of the Islamic world. Abunaser, Avicenna, and Ibn Rushd were part of this philosophical school, which claimed that human reason surpassed mere coincidence and revelation. They believed, for example, that natural phenomena occur because of certain rules (made by God), not because God interfered directly (unlike Al-Ghazali and his followers).
Other notable political philosophers of the time include Nizam al-Mulk, a Persian scholar and vizier of the Seljuq Empire who composed the Siyasatnama, or the "Book of Government" in English. In it, he details the role of the state in terms of political affairs (i.e. how to deal with political opponents without ruining the government's image), as well as its duty to protect the poor and reward the worthy. In his other work, he explains how the state should deal with other issues such as supplying jobs to immigrants like the Turkmens who were coming from the north (present day southern Russia, Kazakhstan, Turkmenistan and Uzbekistan).
Ibn Khaldun
The 14th-century Arab scholar Ibn Khaldun is considered one of the greatest political theorists. The British philosopher-anthropologist Ernest Gellner considered Ibn Khaldun's definition of government, "...an institution which prevents injustice other than such as it commits itself," the best in the history of political theory. For Ibn Khaldun, government should be restrained to a minimum, for as a necessary evil it is the constraint of men by other men.
European Renaissance
During the Renaissance secular political philosophy began to emerge after about a century of theological political thought in Europe. While the Middle Ages did see secular politics in practice under the rule of the Holy Roman Empire, the academic field was wholly scholastic and therefore Christian in nature.
Niccolò Machiavelli
One of the most influential works during this burgeoning period was Niccolò Machiavelli's The Prince, written in 1511–12 and published in 1532, after Machiavelli's death. That work, as well as The Discourses, a rigorous analysis of classical antiquity, did much to influence modern political thought in the West. A minority (including Jean-Jacques Rousseau) interpreted The Prince as a satire meant to be given to the Medici after their recapture of Florence and their subsequent expulsion of Machiavelli from Florence. Though the work was written for the di Medici family, perhaps in order to influence them to free him from exile, Machiavelli supported the Republic of Florence rather than the oligarchy of the Medici family. At any rate, Machiavelli presents a pragmatic and somewhat consequentialist view of politics, whereby good and evil are mere means used to bring about an end—i.e., the acquisition and maintenance of absolute power. Thomas Hobbes, well known for his theory of the social contract, went on to expand this view at the start of the 17th century during the English Renaissance. Although neither Machiavelli nor Hobbes believed in the divine right of kings, they both believed in the inherent selfishness of the individual. It was this belief that led them to adopt a strong central power as the only means of preventing the disintegration of the social order.
European Enlightenment
During the Enlightenment period, new theories emerged about human nature, the definition of reality, and the way reality was perceived, along with the discovery of other societies in the Americas and the changing needs of political societies (especially in the wake of the English Civil War, the American Revolution, the French Revolution, and the Haitian Revolution). These new theories led to new questions and insights by such thinkers as Thomas Hobbes, John Locke, Benjamin Constant and Jean-Jacques Rousseau.
These theorists were driven by two basic questions: one, by what right or need do people form states; and two, what is the best form a state could take. These fundamental questions involved a conceptual distinction between the concepts of "state" and "government." It was decided that "state" would refer to a set of enduring institutions through which power would be distributed and its use justified. The term "government" would refer to a specific group of people who occupied the institutions of the state and created the laws and ordinances by which the people, themselves included, would be bound. This conceptual distinction continues to operate in political science, although some political scientists, philosophers, historians and cultural anthropologists have argued that most political action in any given society occurs outside of its state, and that there are societies that are not organized into states but that must nevertheless be considered in political terms. Until the concept of natural order was introduced, the social sciences could not evolve independently of theistic thinking. Since the cultural revolution of the 17th century in England, which spread to France and the rest of Europe, society has been considered subject to natural laws akin to those of the physical world.
Political and economic relations were drastically influenced by these theories, as the concept of the guild was subordinated to the theory of free trade, and Roman Catholic dominance of theology was increasingly challenged by Protestant churches subordinate to each nation-state, which also (in a fashion the Roman Catholic Church often decried angrily) preached in the vulgar or native language of each region. Free trade, as opposed to these religious theories, is a trade policy that does not restrict imports or exports; it can also be understood as the free-market idea applied to international trade. In government, free trade is predominantly advocated by political parties holding liberal economic positions, while economically left-wing and nationalist parties generally support protectionism, the opposite of free trade. The Enlightenment also mounted an outright attack on religion, particularly Christianity. The most outspoken critic of the church in France was François Marie Arouet de Voltaire, a representative figure of the Enlightenment.
Historians have described Voltaire's description of the history of Christianity as "propagandistic". Voltaire is partially responsible for the misattribution of the expression Credo quia absurdum to the Church Fathers. In a letter to Frederick II, King of Prussia, dated 5 January 1767, he wrote about Christianity: La nôtre [religion] est sans contredit la plus ridicule, la plus absurde, et la plus sanguinaire qui ait jamais infecté le monde.
"Ours [i.e., the Christian religion] is assuredly the most ridiculous, the most absurd and the most bloody religion which has ever infected this world. Your Majesty will do the human race an eternal service by extirpating this infamous superstition, I do not say among the rabble, who are not worthy of being enlightened and who are apt for every yoke; I say among honest people, among men who think, among those who wish to think. ... My one regret in dying is that I cannot aid you in this noble enterprise, the finest and most respectable which the human mind can point out." After Voltaire, religion would never be the same again in France.
This doctrine, however, did not spread to the New World and the advanced civilizations of the Aztec, Maya, Inca, Mohican, Delaware, Huron and especially the Iroquois. The Iroquois philosophy, in particular, gave much to Christian thought of the time and in many cases actually inspired some of the institutions adopted in the United States: for example, Benjamin Franklin was a great admirer of some of the methods of the Iroquois Confederacy, and much of early American literature emphasized the political philosophy of the natives. The Iroquois, or Haudenosaunee, are a historically powerful Native American confederacy of northeastern North America. They were known during the colonial years to the French as the Iroquois League, and later as the Iroquois Confederacy, and to the English as the Five Nations, comprising the Mohawk, Onondaga, Oneida, Cayuga, and Seneca. After 1722, they accepted the Tuscarora people from the Southeast into their confederacy, as they were also Iroquoian-speaking, and became known as the Six Nations.
John Locke
John Locke in particular exemplified this new age of political theory with his work Two Treatises of Government. In it, Locke proposes a state of nature theory that directly complements his conception of how political development occurs and how it can be founded through contractual obligation. Locke set out to refute Sir Robert Filmer's paternally founded political theory in favor of a system grounded in nature. The theory of the divine right of kings became a passing fancy, exposed to the type of ridicule with which John Locke treated it. Unlike Machiavelli and Hobbes but like Aquinas, Locke would accept Aristotle's dictum that man, as a social animal, seeks to be happy in a state of social harmony. Unlike Aquinas's preponderant view on the salvation of the soul from original sin, Locke believes man's mind comes into this world as a tabula rasa. For Locke, knowledge is neither innate, revealed nor based on authority but subject to uncertainty tempered by reason, tolerance and moderation. According to Locke, an absolute ruler as proposed by Hobbes is unnecessary, for natural law is based on reason and on seeking peace and survival for man.
John Stuart Mill
John Stuart Mill's work on political philosophy begins in On Liberty, the most influential statement of his liberal principles. He begins by distinguishing old and new threats to liberty. The old threat to liberty is found in traditional societies in which there is rule by one (a monarchy) or a few (an aristocracy). Though one could be worried about restrictions on liberty by benevolent monarchs or aristocrats, the traditional worry is that when rulers are politically unaccountable to the governed they will rule in their own interests, rather than the interests of the governed. Mill's explicit theory of rights is introduced in Chapter V of Utilitarianism in the context of his sanction theory of duty, which is an indirect form of utilitarianism that identifies wrong actions as actions that it is useful to sanction. Mill then introduces justice as a proper part of duty. Justice involves duties that are perfect duties—that is, duties that are correlated with rights.
Justice implies something which it is not only right to do, and wrong not to do, but which some individual person can claim from us as a matter of right. These perfect duties will thus create liberty and collective freedom within a state.
He uses On Liberty to discuss gender equality in society. To Mill, utilitarianism was the perfect tool to justify gender equality, as in The Subjection of Women, which addresses the political, legal and social subjection of women. When a woman married, she entered a legally binding coverture with her husband; once married, her legal existence as an individual was suspended under "marital unity". While it is easy to presume that a woman would not marry under these circumstances, being unmarried had social consequences: a woman could only advance in social stature and wealth if she had a rich husband to do the groundwork. Mill uses his utilitarian ethics to assess how gender equality would be the best way to achieve "the greatest good for the greatest number":
"The principle that regulates the existing social relations between the two sexes … and is now one of the chief obstacles to human improvement…"
The 'chief obstacle' to Mill relates to women's intellectual capability. The Subjection of Women looks at this in the women of society and argues that diminishing their intellectual potential wastes the knowledge and skill of half of the population; such knowledge lost could formulate ideas that could maximize pleasure for society.
Benjamin Constant
One of the first thinkers to go by the name of "liberal", Constant looked to Britain rather than to ancient Rome for a practical model of freedom in a large, commercial society. He drew a distinction between the "Liberty of the Ancients" and the "Liberty of the Moderns". The Liberty of the Ancients was participatory republican liberty, which gave the citizens the right to directly influence politics through debates and votes in the public assembly. In order to support this degree of participation, citizenship was a burdensome moral obligation requiring a considerable investment of time and energy. Generally, this required a sub-society of slaves to do much of the productive work, leaving the citizens free to deliberate on public affairs. Ancient Liberty was also limited to relatively small and homogenous societies, in which the people could be conveniently gathered together in one place to transact public affairs.
The Liberty of the Moderns, in contrast, was based on the possession of civil liberties, the rule of law, and freedom from excessive state interference. Direct participation would be limited: a necessary consequence of the size of modern states, and also the inevitable result of having created a commercial society in which there are no slaves but almost everybody must earn a living through work. Instead, the voters would elect representatives, who would deliberate in Parliament on behalf of the people and would save citizens from the necessity of daily political involvement.
Moreover, Constant believed that, in the modern world, commerce was superior to war. He attacked Napoleon's martial appetite, on the grounds that it was illiberal and no longer suited to modern commercial social organization. Ancient Liberty tended to be warlike, whereas a state organized on the principles of Modern Liberty would be at peace with all peaceful nations.
Thomas Hobbes
The main practical conclusion of Hobbes' political theory is that state or society cannot be secure unless it is at the disposal of an absolute sovereign. From this follows the view that no individual can hold rights of property against the sovereign, and that the sovereign may therefore take the goods of its subjects without their consent.
In Leviathan, Hobbes set out his doctrine of the foundation of states and legitimate governments and of creating an objective science of morality. Much of the book is occupied with demonstrating the necessity of a strong central authority to avoid the evil of discord and civil war.
Beginning from a mechanistic understanding of human beings and their passions, Hobbes postulates what life would be like without government, a condition which he calls the state of nature. In that state, each person would have a right, or license, to everything in the world. This, Hobbes argues, would lead to a "war of all against all".
Jean-Jacques Rousseau
The Social Contract outlines the basis for a legitimate political order within a framework of classical republicanism. Published in 1762, it became one of the most influential works of political philosophy in the Western tradition. It developed some of the ideas mentioned in an earlier work, the article Discours sur l'oeconomie politique (Discourse on Political Economy), featured in Diderot's Encyclopédie. The treatise begins with the dramatic opening lines, "Man is born free, and everywhere he is in chains. Those who think themselves the masters of others are indeed greater slaves than they."
Rousseau claimed that the state of nature was a primitive condition without law or morality, which human beings left for the benefits and necessity of cooperation. As society developed, the division of labor and private property required the human race to adopt institutions of law. In the degenerate phase of society, man is prone to be in frequent competition with his fellow men while also becoming increasingly dependent on them. This double pressure threatens both his survival and his freedom.
Modern era
Marxism
Karl Marx's critique of capitalism—developed with Friedrich Engels—was, alongside liberalism and fascism, one of the defining ideological movements of the twentieth century. The Industrial Revolution produced a parallel revolution in political thought. Urbanization and capitalism greatly reshaped society. During this same period, the socialist movement began to form. In the mid-19th century, Marxism was developed, and socialism in general gained increasing popular support, mostly from the urban working class. Without breaking entirely from the past, Marx established principles that would be used by future revolutionaries of the 20th century, namely Vladimir Lenin, Mao Zedong, Ho Chi Minh, and Fidel Castro. Though Hegel's philosophy of history is similar to Immanuel Kant's, and Karl Marx's theory of revolution towards the common good is partly based on Kant's view of history, Marx declared that he was turning Hegel's dialectic, which was "standing on its head", "the right side up again". Unlike Marx, who believed in historical materialism, Hegel held an idealist view of history as the development of Spirit, set out in the Phenomenology of Spirit. The Russian Revolution of 1917—and similar, albeit less successful, revolutions in many other European countries—brought communism, in particular the political theory of Leninism but also, on a smaller level, Luxemburgism, onto the world stage.
Contemporaneously with the rise of analytic ethics in Anglo-American thought, several new lines of philosophy directed at the critique of existing societies arose between the 1950s and 1980s. Most of these took elements of Marxist economic analysis but combined them with a more cultural or ideological emphasis. Out of the Frankfurt School, thinkers like Herbert Marcuse, Theodor W. Adorno, Max Horkheimer, and Jürgen Habermas combined Marxian and Freudian perspectives. Along somewhat different lines, a number of other continental thinkers—still largely influenced by Marxism—put new emphases on structuralism and on a "return to Hegel". Within the (post-) structuralist line (though mostly not taking that label) are thinkers such as Gilles Deleuze, Michel Foucault, Claude Lefort, and Jean Baudrillard. The Situationists were more influenced by Hegel; Guy Debord, in particular, moved a Marxist analysis of commodity fetishism to the realm of consumption, and looked at the relation between consumerism and dominant ideology formation.
Christian democracy
Christian democracy is a centre-right ideology inspired by Christian social teaching. It originated as a reaction against the industrialisation and urbanisation associated with laissez-faire capitalism. Jacques Maritain has been recognized as the leading Christian-democratic philosopher. Key ideas in Christian-democratic thought include personalism, popularism, subsidiarity, and stewardship.
In post-war Europe, Christian-democratic parties dominated politics in several nations—the Christian People's Party in Belgium, CDU and CSU in Germany, Fine Gael and Fianna Fáil in Ireland, and Christian Democracy in Italy. Many post-war Europeans saw Christian democracy as a moderate alternative to the extremes of right-wing nationalism and left-wing communism. Christian-democratic parties were especially popular among European women, who voted for them in large part because of their pro-family policies.
Intersectionality
Colonialism and racism were important issues that arose during the 1950s and 1960s. The rise of feminism, LGBT social movements and the end of colonial rule and of the political exclusion of such minorities as African Americans and sexual minorities in the developed world has led to feminist, postcolonial, and multicultural thought becoming significant. This led to challenges to the social contract from the philosophers Charles W. Mills, in his book The Racial Contract, and Carole Pateman, in her book The Sexual Contract, who argued that the social contract excluded persons of colour and women respectively.
Social liberalism
In Anglo-American academic political philosophy, the publication of John Rawls's A Theory of Justice in 1971 is considered a milestone. Rawls used a thought experiment, the original position, in which representative parties choose principles of justice for the basic structure of society from behind a veil of ignorance. Rawls also offered a criticism of utilitarian approaches to questions of political justice. Robert Nozick's 1974 book Anarchy, State, and Utopia, which won a National Book Award, criticized the social liberalism of Rawls from a libertarian perspective, gaining much academic respectability.
Communitarianism
Another debate developed around the (distinct) criticisms of liberal political theory made by Michael Walzer, Michael Sandel and Charles Taylor. The liberal-communitarian debate is often considered valuable for generating a new set of philosophical problems, rather than for being a profound and illuminating clash of perspectives. These and other communitarians (such as Alasdair MacIntyre and Daniel A. Bell) argue that, contra liberalism, communities are prior to individuals and therefore should be the center of political focus. Communitarians tend to support greater local control as well as economic and social policies which encourage the growth of social capital.
Republicanism
A pair of overlapping political perspectives that arose toward the end of the 20th century are republicanism (or neo- or civic-republicanism) and the capability approach. The resurgent republican movement aims to provide an alternative definition of liberty to Isaiah Berlin's positive and negative forms of liberty, namely "liberty as non-domination." Unlike the American liberal movement, which understands liberty as "non-interference," "non-domination" entails individuals not being subject to the arbitrary will of any other person. To a republican, the mere status of being a slave, regardless of how that slave is treated, is objectionable. Prominent republicans include historian Quentin Skinner, jurist Cass Sunstein, and political philosopher Philip Pettit. The capability approach, pioneered by economists Mahbub ul Haq and Amartya Sen and further developed by legal scholar Martha Nussbaum, understands freedom along allied lines: the real-world ability to act. Both the capability approach and republicanism treat choice as something which must be resourced. In other words, it is not enough to be legally able to do something; one must also have the real option of doing it.
Influential political philosophers
Listed below are some of the most canonical or important thinkers, especially philosophers whose central focus was political philosophy or who are good representatives of a particular school of thought; the list is not intended to be exhaustive.
Thomas Aquinas: In synthesizing Christian theology and Peripatetic (Aristotelian) teaching in his Treatise on Law, Aquinas contends that God's gift of higher reason—manifest in human law by way of the divine virtues—gives way to the assembly of righteous government.
Aristotle: Wrote his Politics as an extension of his Nicomachean Ethics. Notable for the theories that humans are social animals, and that the polis (Ancient Greek city state) existed to bring about the good life appropriate to such animals. His political theory is based upon an ethics of perfectionism (as is Marx's, on some readings).
Mikhail Bakunin: After Pierre Joseph Proudhon, Bakunin became the most important political philosopher of anarchism. His specific version of anarchism is called collectivist anarchism.
Jeremy Bentham: The first thinker to analyze social justice in terms of maximization of aggregate individual benefits. Founded the philosophical/ethical school of thought known as utilitarianism.
Isaiah Berlin: Developed the distinction between positive and negative liberty.
Edmund Burke: Irish member of the British parliament, Burke is credited with the creation of conservative thought. His Reflections on the Revolution in France, the most popular of his writings, denounced the French Revolution. Burke was also a leading supporter of the American Revolution.
Chanakya: Author of the influential text Arthashastra and one of the earliest political thinkers in Asian history.
Noam Chomsky: He is widely recognized as having helped to spark the cognitive revolution in the human sciences, contributing to the development of a new cognitivistic framework for the study of language and the mind. Chomsky is a leading critic of U.S. foreign policy, neoliberalism and contemporary state capitalism, the Israeli–Palestinian conflict, and mainstream news media. His ideas have proven highly influential in the anti-capitalist and anti-imperialist movements, and align with anarcho-syndicalism and libertarian socialism; these labels are contentious, however, given Chomsky's support of Serbian nationalism and participation in Bosnian genocide denial.
Confucius: The first thinker to relate ethics to the political order.
Michel Foucault: Critiqued the modern conception of power on the basis of the prison complex and other prohibitive institutions, such as those that designate sexuality, madness and knowledge as the roots of their infrastructure, a critique that demonstrated that subjection is the power formation of subjects in any linguistic forum and that revolution cannot just be thought of as the reversal of power between classes.
Antonio Gramsci: Instigated the concept of hegemony. Argued that the state and the ruling class use culture and ideology to gain the consent of the classes they rule over.
Jürgen Habermas: Philosopher and social critic. He has pioneered such concepts as the public sphere, communicative action, and deliberative democracy. His early work was heavily influenced by the Frankfurt School.
Friedrich Hayek: He argued that central planning was inefficient because members of central bodies could not know enough to match the preferences of consumers and workers with existing conditions. Hayek further argued that central economic planning—a mainstay of socialism—would lead to a "total" state with dangerous power. He advocated free-market capitalism in which the main role of the state is to maintain the rule of law and let spontaneous order develop.
G. W. F. Hegel: Emphasized the "cunning" of history, arguing that it followed a rational trajectory, even while embodying seemingly irrational forces; influenced Marx, Kierkegaard, Nietzsche, and Oakeshott.
Thomas Hobbes: Generally considered to have first articulated how the concept of a social contract that justifies the actions of rulers (even where contrary to the individual desires of governed citizens), can be reconciled with a conception of sovereignty.
David Hume: Hume criticized the social contract theory of John Locke and others as resting on a myth of some actual agreement. Hume was a realist in recognizing the role of force in forging the existence of states, and he held that consent of the governed was merely hypothetical. He also introduced the concept of utility, later picked up on and developed by Jeremy Bentham. Hume also coined the "is/ought" problem, i.e., the idea that the fact that something is the case does not mean that is how it ought to be. This was very influential on normative politics.
Thomas Jefferson: Politician and political theorist during the American Enlightenment. Expanded on the philosophy of Thomas Paine by instrumenting republicanism in the United States. Most famous for the United States Declaration of Independence.
Immanuel Kant: Argued that participation in civil society is undertaken not for self-preservation, as per Thomas Hobbes, but as a moral duty. First modern thinker who fully analyzed structure and meaning of obligation. Argued that an international organization was needed to preserve world peace.
Peter Kropotkin: One of the classic anarchist thinkers and the most influential theorist of anarcho-communism.
John Locke: Like Hobbes, described a social contract theory based on citizens' fundamental rights in the state of nature. He departed from Hobbes in that, based on the assumption of a society in which moral values are independent of governmental authority and widely shared, he argued for a government with power limited to the protection of personal property. His arguments may have been deeply influential to the formation of the United States Constitution.
György Lukács: Hungarian Marxist theorist, aesthetician, literary historian, and critic. One of the founders of Western Marxism. In his magnum opus History and Class Consciousness, he developed the Marxist theory of class consciousness and introduced the concept of "reification".
Niccolò Machiavelli: First systematic analysis of how politics necessitates expedient and evil actions. Gave an account of statecraft from a realistic point of view instead of relying on idealism. Machiavelli also offers recommendations on how to run a well-ordered republican state, as he viewed republics as better forms of government than autocracies.
James Madison: American politician and protege of Jefferson considered to be "Father of the Constitution" and "Father of the Bill of Rights" of the United States. As a political theorist, he believed in separation of powers and proposed a comprehensive set of checks and balances that are necessary to protect the rights of an individual from the tyranny of the majority.
Herbert Marcuse: Called the father of the new left. One of the principal thinkers within the Frankfurt School, and generally important in efforts to fuse the thought of Sigmund Freud and Karl Marx. Introduced the concept of "repressive desublimation", in which social control can operate not only by direct control, but also by manipulation of desire. His work Eros and Civilization and notion of a non-repressive society was influential on the 1960s and its counter-cultural social movements.
Julius Evola: Called for a return to Pre-Renaissance values of Traditionalism and Aristocracy while discussing possible ways to survive the inevitable collapse of the modern civilization and to bring forth a new order.
Karl Marx: In large part, added the historical dimension to an understanding of society, culture and economics. Created the concept of ideology in the sense of (true or false) beliefs that shape and control social actions. Analyzed the fundamental nature of class as a mechanism of governance and social interaction. Profoundly influenced world politics with his theory of communism.
Mencius: One of the most important thinkers in the Confucian school, he is the first theorist to make a coherent argument for an obligation of rulers to the ruled.
John Stuart Mill: A utilitarian, and the person who named the system; he goes further than Bentham by laying the foundation for liberal democratic thought in general and modern, as opposed to classical, liberalism in particular. Articulated the place of individual liberty in an otherwise utilitarian framework.
Montesquieu: Analyzed protection of the people by a "balance of powers" in the divisions of a state.
Mozi: Eponymous founder of the Mohist school, advocated a form of consequentialism.
Friedrich Nietzsche: Philosopher who became a powerful influence on a broad spectrum of 20th-century political currents in anarchism, fascism, libertarianism, and conservatism. His interpreters have debated the content of his political philosophy.
Robert Nozick: Criticized Rawls, and argued for libertarianism, by appeal to a hypothetical history of the state and of property.
Thomas Paine: Enlightenment writer who defended liberal democracy, the American Revolution, and the French Revolution in Common Sense and The Rights of Man.
Plato: Wrote the lengthy dialogue The Republic, in which he laid out his political philosophy: citizens should be divided into three categories, one of which is the rulers, who, according to Plato, should be philosophers, an idea based on his Theory of Forms.
Pierre-Joseph Proudhon: Commonly considered the father of modern anarchism, specifically mutualism.
Ayn Rand: Founder of Objectivism and prime mover of the Objectivist and Libertarian movements in mid-twentieth-century America. Advocated a complete, laissez-faire capitalism. Rand held that the proper role of government was exclusively the protection of individual rights without economic interference. The government was to be separated from economics the same way and for the same reasons it was separated from religion. Any governmental action not directed at the defense of individual rights would constitute the initiation of force (or threat of force), and therefore a violation not only of rights but also of the legitimate function of government.
John Rawls: Revitalized the study of normative political philosophy in Anglo-American universities with his 1971 book A Theory of Justice, which uses a version of social contract theory to answer fundamental questions about justice and to criticise utilitarianism.
Murray Rothbard: The central theorist of anarcho-capitalism and an Austrian School economist.
Jean-Jacques Rousseau: Analyzed the social contract as an expression of the general will, and controversially argued in favor of absolute democracy, in which the people at large would act as sovereign.
Carl Schmitt: German political theorist, tied to the Nazis, who developed the concepts of the Friend/Enemy Distinction and the State of exception. Though his most influential books were written in the 1920s, he continued to write prolifically until his death (in academic quasi-exile) in 1985. He heavily influenced 20th-century political philosophy both within the Frankfurt School and among others, not all of whom are philosophers, such as Jacques Derrida, Hannah Arendt, and Giorgio Agamben.
Adam Smith: Often said to have founded modern economics; explained emergence of economic benefits from the self-interested behavior ("the invisible hand") of artisans and traders. While praising its efficiency, Smith also expressed concern about the effects of industrial labor (e.g., repetitive activity) on workers. His work on moral sentiments sought to explain social bonds which enhance economic activity.
Max Stirner: Important thinker within anarchism and the main representative of the anarchist current known as individualist anarchism. He was also the founder of ethical egoism, which endorses anarchy.
Leo Strauss: Famously rejected modernity, mostly on the grounds of what he perceived to be modern political philosophy's excessive self-sufficiency of reason and flawed philosophical grounds for moral and political normativity. He argued instead that we should return to pre-modern thinkers for answers to contemporary issues. His philosophy was influential in the formation of neoconservatism, and a number of his students later served in the Bush administration.
Henry David Thoreau: Influential American thinker on such diverse later political positions and topics as pacifism, anarchism, environmentalism, and civil disobedience (notably in his essay Civil Disobedience), who influenced later political activists such as Leo Tolstoy, Mahatma Gandhi, and Martin Luther King Jr. A hard-liner on the individual citizen's right to seek justice over the state's, he was also an outspoken advocate and apologist for John Brown following his abolitionist raid on Harper's Ferry, writing two pieces on him: A Plea for Captain John Brown, pleading for mercy on his behalf, and The Last Days of John Brown, describing a life that had been lived fully.
Alexis de Tocqueville: A French political scientist and diplomat, known for his works Democracy in America and The Old Regime and the Revolution.
Voltaire: French Enlightenment writer, poet, and philosopher famous for his advocacy of civil liberties, including freedom of religion and free trade.
See also
History of political thought
Philosophy of law
Political ethics
Political ideologies
Political journalism
Political theology
References
Further reading
Academic journals dedicated to political philosophy include: Political Theory, Philosophy and Public Affairs, Contemporary Political Theory, Constellations, and The Journal of Political Philosophy
External links
Video lectures (require Adobe Flash): Introduction to Political Philosophy delivered by Steven B Smith of Yale University and provided by Academic Earth.
philosophy
Philosophy of social science
Social philosophy
Standpoint theory
Standpoint theory, also known as standpoint epistemology, is a foundational framework in feminist social theory that examines how individuals' unique perspectives, shaped by their social and political experiences, influence their understanding of the world. Standpoint theory proposes that authority is rooted in individuals' personal knowledge and perspectives and the power that such authority exerts.
First originating in feminist philosophy, this theory posits that marginalized groups, situated as "outsiders within," offer valuable insights that challenge dominant perspectives and contribute to a more comprehensive understanding of societal dynamics. Standpoint theory's central concept is that an individual's perspectives are shaped by their social and political experiences. The amalgamation of a person's experiences forms a standpoint—a point of view—through which that individual sees and understands the world. In response to critiques that early standpoint theory treated social perspectives as monolithic or essentialized, social theorists understand standpoints as multifaceted rather than unvarying or absolute. For example, while Hispanic women may generally share some perspectives, particularly with regard to ethnicity and gender, they are not defined solely by these viewpoints; despite some common features, there is no essentially Hispanic female identity.
Standpoint theorists emphasize the utility of a naturalistic, or everyday experiential, concept of knowing (i.e., epistemology). One's standpoint (whether reflexively considered or not) shapes which concepts are intelligible, which claims are heard and understood by whom, which features of the world are perceptually salient, which reasons are understood to be relevant and forceful, and which conclusions credible.
Standpoint theory supports what feminist theorist Sandra Harding calls strong objectivity, or the notion that the perspectives of marginalized and/or oppressed individuals can help to create more objective accounts of the world. Through the outsider-within phenomenon, these individuals are placed in a unique position to point to patterns of behavior that those immersed in the dominant group culture are unable to recognize. Standpoint theory gives voice to the marginalized groups by allowing them to challenge the status quo as the outsider within the status quo representing the dominant position of privilege.
The predominant culture in which all groups exist is not experienced in the same way by all persons or groups. The views of those who belong to groups with more social power are validated more than those in marginalized groups. Those in marginalized groups must learn to be bicultural, or to "pass" in the dominant culture to survive, even though that perspective is not their own.
History
First-wave standpoint theory
First-wave standpoint theory emerged in the 1970s and 1980s, spearheaded by feminist philosophers like Sandra Harding. In Harding's 1986 book The Science Question in Feminism, she introduced the term "standpoint" to distinguish it from a generic perspective, emphasizing the requirement of political engagement. It aimed to challenge conventional notions of objectivity and neutrality in scientific inquiry by foregrounding the political engagement and lived experiences of marginalized groups, particularly women. Harding argues that the political engagement of feminists and their active focus on the lives of women allows them to have an epistemically privileged "standpoint". Harding also maintained that it is the marginalized groups that ultimately provide the clearest view on the true opportunities and obstacles faced in society.
Feminist standpoint theory's initial focus was in challenging the idea of scientific neutrality and objectivity from a presupposed generalized knower. This wave of standpoint theory underscored how gendered identities influence individuals' epistemic resources and capacities, impacting their access to knowledge. By centering the experiences of women, first-wave standpoint theorists sought to dismantle patriarchal structures in knowledge production and highlight the epistemic privilege inherent in marginalized perspectives.
Some uses of standpoint theory have been based in Hegelian and Marxist theory, such as Hegel's study of the different standpoints of slaves and masters in 1807. Hegel, a German Idealist, claimed that the master-slave relationship is about people's belonging positions, and the groups affect how people receive knowledge and power. Hegel's influence can be seen in some later feminist studies. For example, Nancy Hartsock examined standpoint theory by using relations between men and women. She published "The Feminist Standpoint: Developing Ground for a Specifically Feminist Historical Materialism" in 1983. Hartsock used Hegel's master–slave dialectic and Marx's theory of class and capitalism as an inspiration to look into matters of sex and gender.
Second-wave standpoint theory
Second-wave standpoint theory evolved to encompass a broader range of social positions, including, race, social class, culture, and economic status. Standpoint theory seeks to develop a particular feminist epistemology, that values the experiences of women and minorities as a source for knowledge.
Prominent standpoint theorists such as Dorothy Smith, Nancy Hartsock, Donna Haraway, Sandra Harding, Alison Wylie, Lynette Hunter and Patricia Hill Collins expanded the theoretical framework, emphasizing the importance of intersectionality. Second-wave standpoint theorists and activists in the United States developed the related concept of intersectionality to examine oppressions caused by the interactions between social factors such as gender, race, sexuality, and culture. Intersectionality became a key concept, explaining how intersecting oppressions contribute to complex power dynamics. For example, intersectionality can explain how social factors contribute to divisions of labor in the workforce. Though intersectionality was developed to consider social and philosophical issues, it has been applied in a range of academic areas like higher education, identity politics, and geography.
Third-wave standpoint theory
Contemporary standpoint theory continues to evolve in response to shifting political, social, and economic landscapes. In the era of third-wave feminism, characterized by inclusivity and activism, standpoint theory emphasizes the importance of community and collective action, highlighting the voices and experiences of diverse groups, including Black women, LGBTQ+ individuals, and people with disabilities. Examples of these shifting landscapes include Kamala Harris becoming the first female Vice President of the United States and the first of color, the global pandemic, and the overturning of Roe v. Wade. These developments have produced a resurgence of feminist activism and a further integration of intersecting identities, such as the unique perspective of Black women on abortion rights.
Standpoint theorist Patricia Hill Collins highlights the resonance of standpoint theory with Black feminist groups: it can be used as a framework for understanding Black feminist thought and the oppression of Black women, or what feminist theorist Catherine E. Harnois calls the "Black women's standpoint".
Key concepts
Generally, standpoint theory gives insight into specific circumstances only available to the members of a certain collective standpoint. According to Michael Ryan, "the idea of a collective standpoint does not imply an essential overarching characteristic but rather a sense of belonging to a group bounded by a shared experience." Kristina Rolin criticizes common misunderstandings of standpoint theory, including "the assumption of essentialism that all women share the same socially grounded perspective in virtue of being women, the assumption of automatic epistemic privilege is that epistemic advantage accrues to the subordinate automatically, just in virtue of their occupying a particular social position." She suggests that, on the contrary, neither assumption is part of standpoint theory. According to standpoint theory:
A standpoint is a place from which human beings view the world.
A standpoint influences how the people adopting it socially construct the world.
A standpoint is a mental position from which things are viewed.
A standpoint is a position from which objects or principles are viewed and according to which they are compared and judged.
The inequalities of different social groups create differences in their standpoints.
All standpoints are partial; so (for example) standpoint feminism coexists with other standpoints.
Key terms
Social location: Viewpoints and perspectives are ultimately created through the groups that we subscribe to (created by connections through race, gender, etc.).
Epistemology: The theory of knowledge
Intersectionality: The characteristics of an individual's life, such as race and gender, that come together to create all aspects of one's identity.
Matrix of domination: Societal systems put in place that support the dominant group's power.
Local knowledge: Knowledge that is rooted in an individual's beliefs, experiences, along with time and place.
Applications
Since standpoint theory focuses on marginalized populations, it is often applied within fields that focus on these populations. Standpoint has been referenced as a concept that should be acknowledged and understood in the social work field, especially when approaching and assisting clients. Social workers seek to understand the concept of positionality within dynamic systems to encourage empathy. Many marginalized populations rely on the welfare system to survive, while those who structure that system have typically never needed to use its services. Standpoint theory has accordingly been presented as a method for improving the welfare system by recognizing suggestions made by those within it. In Africa, standpoint theory has catalyzed a social movement in which women are introduced to the radio in order to promote awareness of their experiences and hardships and to help them heal and find closure. Another African example is slavery, which differed greatly depending on whether one was the slave or the master: wherever power relationships exist, there can never be a single perspective, no viewpoint can ever be complete, and no one perspective is exhaustive.
Asante and Davis's (1989) study of interracial encounters in the workplace found that, because of differing cultural perspectives, organizational interactions with others who hold different beliefs, assumptions, and meanings often lead to miscommunication. Brenda Allen stated in her research that "Organizational members' experiences, attitudes, and behaviors in the workplace are often influenced by race-ethnicity."
Paul Adler and John Jermier suggest that management scholars should be aware of their standpoints. They write that those studying management should "consciously choose [their] standpoints and take responsibility for the impact (or lack of impact) of [their] scholarship on the world."
Jermier argued that all parts of a research study – identifying the problem, theorizing research questions, gathering and analyzing data, drawing conclusions, and the knowledge produced – are there to some extent because of the researcher's standpoint. This caused him to question what standpoint to adopt in the management of scientists. To avoid falling into limitations of the status quo and certain standpoints, he said that "the view from below has greater potential to generate more complete and more objective knowledge claims." He continues to say that "if our desire is to heal the world, we will learn more about how the root mechanisms of the world work and about how things can be changed by adopting the standpoints of those people and other parts of nature that most deeply suffer its wounds."
Feminist standpoint theory
Feminist standpoint theorists make three principal claims: (1) Knowledge is socially situated. (2) Marginalized groups are socially situated in ways that make it more possible for them to be aware of things and ask questions than it is for the non-marginalized. (3) Research, particularly that focused on power relations, should begin with the lives of the marginalized.
Specifically, feminist standpoint theory is guided by four main theses: strong objectivity, situated knowledge, epistemic advantage, and power relations.
Feminist standpoint theorists such as Dorothy Smith, Patricia Hill Collins, Nancy Hartsock, and Sandra Harding claimed that certain socio-political positions occupied by women (and by extension other groups who lack social and economic privilege) can become sites of epistemic privilege and thus productive starting points for inquiry into questions not only about those who are socially and politically marginalized, but also about those who, by dint of social and political privilege, occupy the positions of oppressors. This claim was specifically generated by Sandra Harding: "Starting off research from women's lives will generate less partial and distorted accounts not only of women's lives but also of men's lives and of the whole social order." This practice is also quite evident when women enter professions that are considered to be male oriented. Londa Schiebinger states, "While women now study at prestigious universities at about the same rate as men, they are rarely invited to join the faculty at top universities
... The sociologist Harriet Zuckerman has observed that 'the more prestigious the institution, the longer women wait to be promoted.' Men, generally speaking, face no such trade-off."
Standpoint feminists have been concerned with these dualisms for two related reasons. First, dualisms usually imply a hierarchical relationship between the terms, elevating one and devaluing the other. Also, related to this issue is the concern that these dualisms often become gendered in our culture. In this process, men are associated with one extreme and women with the other. In the case of reason and emotion, women are identified with emotion. Because our culture values emotion less than reason, women suffer from this association. Feminist critics are usually concerned with the fact that dualisms force false dichotomies (partition of a whole) onto women and men, failing to see that life is less either/or than both/and, as relational dialectics theory holds.
Indigenous standpoint theory
Indigenous standpoint theory is a theoretical approach to how Indigenous people navigate the difficulties of their experiences within spaces that contest their epistemology. The utility of this approach stems from the diverse backgrounds of marginalized groups across societies and cultures whose unique experiences have been rejected and suppressed within majoritarian intellectual knowledge production. The analysis of these experiences, however, is not an endless accumulation of stories of lived experience, and it does not in turn produce limitless subjective narratives that obstruct objective knowledge. Martin Nakata is the foremost proponent of indigenous standpoint theory.
Indigenous standpoint theory, like feminist theory, expects the "knower" to address their social status of privilege relative to those they are researching. When introducing ourselves as "knowers" into the setting, the intention is not to realign the focus, but rather to include the social relations within what we as "knowers" know. This is a matter of respect, as the researcher is expected to declare who they are and on what basis they write. This "self-awareness is fundamental to the research process because it should result in a researcher role that is respectful and not disruptive, aggressive or controlling".
An Indigenous "knower" does not possess a predisposed "readymade critical stance" on the world, but rather questions that must be answered before objective knowledge is obtained. This engagement enables the creation of a critical Indigenous standpoint. The standpoint does not in itself determine truth; instead, it produces a range of potential arguments with further possible answers. The arguments established still require a rational and reasonable basis and must answer for the logic and assumptions on which they rest. Thus, an Indigenous individual cannot assert that a claim is true simply because they are part of the Indigenous community; the theory does not allow knowers to authorise themselves as truthful solely on the basis of their experience. Indigenous standpoint theory is facilitated by three principles, defined by Martin Nakata.
Nakata's first principle states: "It would, therefore, begin from the premise that my social position is discursively constituted within and constitutive of complex set of social relations as expressed through social organization of my every day". This denotes that one's social position is established within a complex set of social relations, and that acknowledging social, political, economic, and cultural factors reveals how they impact and influence who you are and structure your everyday life.
Nakata's second principle states: "This experience as a push-pull between Indigenous and non-Indigenous positions; that is, the familiar confusion with constantly being asked at any one moment to both agree and disagree with any proposition on the basis of a constrained choice between a whitefella or blackfella perspective". This recognizes the position that Indigenous people hold at the cultural interface, where they are constantly pressed to adopt one fixed stance. Instead, recognition of Indigenous agency should be constituted on what they know from this position. Simply stated, it questions why Indigenous people should have to choose between positions instead of sharing what they know from both.
Nakata's third and last principle states: "the idea that the constant 'tensions' that this tug-of-war creates are physically experienced, and both inform as well as limit what can be said and what is to be left unsaid in every day." Nakata is here describing how the physical worlds of Indigenous and non-Indigenous people differ in everyday contexts, and how these differences can inform or limit what can be said: things that would be acceptable among other Indigenous people might be unacceptable in Western colonial society.
Nakata states that these three principles allow him to forge a critical standpoint from the cultural interface and enable him to create better arguments in relation to his position within epistemologies and with other groups of "knowers". However, one cannot overturn a dominant position merely on the basis of one's background when the arguments offered are simplistic, misrepresented, or unsupported by evidence.
Thus, Indigenous standpoint theory can be defined as a "method of inquiry, a process for making more intelligible 'the corpus of objectified knowledge about us' as it emerges and organizes understanding of ... lived realities".
Criticisms
Critics argue that standpoint theory, despite challenging essentialism, itself relies on essentialism, as it focuses on the dualism of subjectivity and objectivity. In regard to feminist standpoint theory: though it dispels many false generalizations of women, it is argued that the focus on social groups and social classes of women is still inherently essentialist. Generalizations across the entire female gender can be broken into smaller, more specific groups pertaining to women's different social classes and cultures, but these are still generalized as distinct groups, and thus marginalization still occurs. West and Turner state that Catherine O'Leary (1997) argued that although standpoint theory has helped reclaim women's experiences as suitable research topics, it contains a problematic emphasis on the universality of this experience, at the expense of differences among women's experiences.
Another main criticism of Harding and Wood's standpoint theory is the credibility of strong objectivity vs. subjectivity. Standpoint theorists argue that standpoints are relative and cannot be evaluated by any absolute criteria but make the assumption that the oppressed are less biased or more impartial than the privileged. This leaves open the possibility of an overbalance of power, in which the oppressed group intentionally or unintentionally becomes the oppressor. Intentional overbalance of power, or revenge, can manifest as justification for extremism and militarism, which can sometimes be seen in more extreme forms of feminism.
While standpoint theory began with a critical Marxist view of social-class oppression, it developed in the 1970s and 1980s along with changes in feminist philosophy. Other marginalized or muted groups now need to be included in the theory, with a new emphasis placed on their perspectives. When Harding and Wood created standpoint theory, they did not account for how different cultures can exist within the same social group. "Early standpoint theorists sought to understand the way in which the gendered identity of knowers affected their epistemic resources and capacities". These muted or marginalized groups have a more realistic approach to standpoint theory, as they have different experiences than those in power; even within muted groups, differences defined by different cultures can yield an altered standpoint. This view grounds a central principle of standpoint theory: the inversion thesis. Academic Joshua St. Pierre defines the inversion thesis as giving "epistemic authority to those marginalized by systems of oppression insofar as these people are often better knowers than those who benefit from oppression. Put simply: social dispossession produces epistemic privilege."
Wylie has perhaps provided the most succinct articulation of second-wave standpoint theory. For her, a standpoint does not mark out a clearly defined territory such as "women" within which members have automatic privilege, but is rather a posture of epistemic engagement. Responding to the claim that the situated knowledge thesis reifies essentialism, Wylie argues that it is "an open (empirical) question whether such structures obtain in a given context, what form they take, and how they are internalized or embodied by individuals". Identities are complex and cannot be reduced to simple binaries. Likewise, she argues that the criticism of automatic privilege falters insofar as a standpoint is never given but is achieved (St. Pierre). This can be seen as an instance of moving the goalposts.
See also
Co-cultural communication theory
Critical race theory
Cultural studies
Groupthink
Muted group theory
Perspectivism
Positionality statement
Quill Kukla
Spiral of silence
Standpoint feminism
References
Further reading
Feminist theory
Identity politics
Point of view
Social constructionism
Aphorism
An aphorism (from Greek ἀφορισμός: aphorismos, denoting 'delimitation', 'distinction', and 'definition') is a concise, terse, laconic, or memorable expression of a general truth or principle. Aphorisms are often handed down by tradition from generation to generation.
The concept is generally distinct from those of an adage, brocard, chiasmus, epigram, maxim (legal or philosophical), principle, proverb, and saying; although some of these concepts may be construed as types of aphorism.
Often aphorisms are distinguished from other short sayings by the need for interpretation to make sense of them. In A Theory of the Aphorism, Andrew Hui defined an aphorism as "a short saying that requires interpretation".
History
The word was first used in the Aphorisms of Hippocrates, a long series of propositions concerning the symptoms and diagnosis of disease and the art of healing and medicine. The often-cited first sentence of this work is "ὁ βίος βραχύς, ἡ δὲ τέχνη μακρή" ("life is short, art is long"), usually reversed in order (Ars longa, vita brevis).
This aphorism was later applied or adapted to physical science and then morphed into multifarious aphorisms of philosophy, morality, and literature. Currently, an aphorism is generally understood to be a concise and eloquent statement of truth.
Aphorisms are distinct from axioms: aphorisms generally originate from experience and custom, whereas axioms are self-evident truths and therefore require no additional proof. Aphorisms have been especially used in subjects to which no methodical or scientific treatment was originally applied, such as agriculture, medicine, jurisprudence, and politics.
Literature
Aphoristic collections, sometimes known as wisdom literature, have a prominent place in the canons of several ancient societies, such as the Sutra literature of India, the Biblical Ecclesiastes, Islamic hadiths, the golden verses of Pythagoras, Hesiod's Works and Days, the Delphic maxims, and Epictetus' Handbook. Aphoristic collections also make up an important part of the work of some modern authors. A 1559 oil-on-oak-panel painting, Netherlandish Proverbs (also called The Blue Cloak or The Topsy Turvy World) by Pieter Bruegel the Elder, artfully depicts a land populated with literal renditions of Flemish aphorisms (proverbs) of the day.
The first noted published collection of aphorisms is Adagia by Erasmus. Other important early aphorists were Baltasar Gracián, François de La Rochefoucauld, and Blaise Pascal.
Two influential collections of aphorisms published in the twentieth century were Unkempt Thoughts by Stanisław Jerzy Lec (in Polish) and Itch of Wisdom by Mikhail Turovsky (in Russian and English).
Society
Many societies have traditional sages or culture heroes to whom aphorisms are commonly attributed, such as the Seven Sages of Greece, Chanakya, Confucius, or King Solomon.
Misquoted or misadvised aphorisms are frequently used as a source of humour; for instance, wordplays of aphorisms appear in the works of P. G. Wodehouse, Terry Pratchett, and Douglas Adams. Aphorisms being misquoted by sports players, coaches, and commentators form the basis of Private Eye's Colemanballs section.
Philosophy
Professor of Humanities Andrew Hui, author of A Theory of the Aphorism, offered the following definition of an aphorism: "a short saying that requires interpretation". Hui showed that some of the earliest philosophical texts from traditions around the world used an aphoristic style. Some of the earliest texts in the western philosophical canon feature short statements requiring interpretation, as seen in the Pre-Socratics like Heraclitus and Parmenides. In early Hindu literature, the Vedas were composed of many aphorisms. Likewise, in early Chinese philosophy, Taoist texts like the Tao Te Ching and the Confucian Analects relied on an aphoristic style. Francis Bacon, Blaise Pascal, Desiderius Erasmus, and Friedrich Nietzsche rank among the most notable philosophers who employed aphorisms in modern times.
Andrew Hui argued that aphorisms played an important role in the history of philosophy, influencing the favored mediums of philosophical traditions. He argued, for example, that the Platonic Dialogues served as a response to the difficult-to-interpret fragments and phrases for which the Pre-Socratic philosophers were famous. Hui proposes that aphorisms often arrive before, after, or in response to more systematic argumentative philosophy. For example, aphorisms may come before a systematic philosophy, because the systematic philosophy consists of the attempt to interpret and explain the aphorisms, as he argues is the case with Confucianism. Alternately, aphorisms may be written against systematic philosophy, as a form of challenge or irreverence, as seen in Nietzsche's work. Lastly, aphorisms may come after systematic philosophy, as was the case with Francis Bacon, who sought to bring an end to old ways of thinking.
Aphorists
Georges Bataille
George E. P. Box
Jean Baudrillard
Ambrose Bierce (The Devil's Dictionary)
Nicolás Gómez Dávila (Escolios a un texto implícito)
Theodor W. Adorno (Minima Moralia: Reflections from Damaged Life)
F. H. Bradley
Malcolm de Chazal
Emil Cioran
Arkady Davidowitz
Desiderius Erasmus
Gustave Flaubert (Dictionary of Received Ideas)
Benjamin Franklin
Andrzej Maksymilian Fredro
Robert A. Heinlein (The Notebooks of Lazarus Long)
Edmond Jabès
Tomáš Janovic
Joseph Joubert
Franz Kafka
Karl Kraus
Stanisław Jerzy Lec
Georg Christoph Lichtenberg
Andrzej Majewski
Juan Manuel (the second, third and fourth parts of his famous work El Conde Lucanor)
Friedrich Nietzsche
Mark Miremont
Oiva Paloheimo
Dorothy Parker
Patanjali
Petar II Petrović-Njegoš
Faina Ranevskaya
François de La Rochefoucauld
George Santayana
Arthur Schopenhauer
Seneca the Younger
George Bernard Shaw
Mikhail Turovsky
Lev Shestov
Nassim Nicholas Taleb (The Bed of Procrustes)
Lao Tze
Voltaire
Wasif Ali Wasif
Oscar Wilde
Alexander Woollcott
Burchard of Worms
Cheng Yen (Jing Si Aphorism)
Sun Tzu
See also
Adage
Adagia by Desiderius Erasmus Roterodamus
Brocard
Chiasmus
Cliché
Epigram
Epitaph
French moralists
Gospel of Thomas
Legal maxim
Mahavakya
Maxim
Platitude
Proverb
Pseudo-Phocylides
Sacred Scripture:
Book of Proverbs
Ecclesiastes
Hidden Words
Wisdom of Sirach
Saying
Sūtra
The Triads of Ireland, and the Welsh Triads
References
Further reading
Gopnik, Adam, "Brevity, Soul, Wit: The art of the aphorism" (includes discussion of Andrew Hui, A Theory of the Aphorism: From Confucius to Twitter, Princeton, 2019), The New Yorker, 22 July 2019, pp. 67–69. "The aphorism [...] is [...] always an epitome, and seeks an essence. The ability to elide the extraneous is what makes the aphorism bite, but the possibility of inferring backward to a missing text is what makes the aphorism poetic." (p. 69)
External links
Commentary on Hippocrates' Aphorisms
Value theory | Value theory is the systematic study of values. Also called axiology, it examines the nature, sources, and types of values. As a branch of philosophy, it has interdisciplinary applications in fields such as economics, sociology, anthropology, and psychology.
Value is the worth of something, usually understood as a degree that covers both positive and negative magnitudes corresponding to the terms good and bad. Values influence many human endeavors related to emotion, decision-making, and action. Value theorists distinguish between intrinsic and instrumental value. An entity has intrinsic value if it is good in itself, independent of external factors. An entity has instrumental value if it is useful as a means leading to other good things. Some classifications focus on the type of benefit, including economic, moral, political, aesthetic, and religious values. Other categorizations, based on the meaning and function of evaluative terms, discuss attributive, predicative, personal, impersonal, and agent-relative values.
Value realists state that values have mind-independent existence as objective features of reality. This view is rejected by anti-realists, some of whom argue that values are subjective human creations, whereas others claim that value statements are meaningless. Several sources of value have been proposed, such as hedonism, which says that only pleasure has intrinsic value, and desire theories, which identify desires as the ultimate source of value. Perfectionism, another prominent theory, emphasizes the cultivation of characteristic human abilities. Value pluralism holds that there are diverse sources of intrinsic value, raising the issue of whether values belonging to different types are comparable. Value theorists employ various methods of inquiry, ranging from reliance on intuitions and thought experiments to the description of first-person experience and the analysis of language.
Ethics is a closely related field focusing primarily on normative concepts about which behavior is right, whereas value theory explores evaluative concepts about what is good. In economics, theories of value are frameworks to assess and explain the economic value of commodities. Sociology and anthropology examine values as aspects of societies and cultures, reflecting their dominant preferences and beliefs. Psychologists tend to understand values as abstract motivational goals that shape an individual's personality. The roots of value theory lie in the ancient period in the form of reflections on the highest good that humans should pursue.
Definition
Value theory, also known as axiology and theory of values, is the systematic study of values. As the branch of philosophy examining which things are good and what it means for something to be good, it distinguishes different types of values and explores how they can be measured and compared. It also studies whether values are a fundamental aspect of reality and how they affect phenomena such as emotion, desire, decision, and action. Its topic is relevant to many human endeavors because values are guiding principles that underlie the political, economic, scientific, and personal spheres. Value theory analyzes and evaluates phenomena such as well-being, utility, beauty, human life, knowledge, wisdom, freedom, love, and justice.
The precise definition of value theory is disputed and some theorists rely on alternative characterizations. In a broad sense, value theory is a catch-all label that encompasses all philosophical disciplines studying evaluative or normative topics. According to this view, value theory is one of the main branches of philosophy and includes ethics, aesthetics, social philosophy, political philosophy, and philosophy of religion. A similar broad characterization sees value theory as a multidisciplinary area of inquiry that covers research from fields like sociology, anthropology, psychology, and economics in addition to philosophy. In a narrow sense, value theory is a subdiscipline of ethics that is particularly relevant to the school of consequentialism since it determines how to assess the value of consequences.
The word axiology has its origin in the ancient Greek terms ἄξιος (axios, meaning 'worthy') and λόγος (logos, meaning 'reason' or 'study'). Even though the roots of value theory reach back to the ancient period, this area of thought was only conceived as a distinct discipline in the late 19th and early 20th centuries, when the term axiology was coined. The terms value theory and axiology are usually used as synonyms but some philosophers distinguish between them. According to one characterization, axiology is a subfield of value theory that limits itself to theories about what things are valuable and how valuable they are. The term timology is an older and less common synonym.
Value
Value is the worth, usefulness, or merit of something. Many evaluative terms are employed to talk about value, including good, best, great, and excellent as well as their negative counterparts, like bad and terrible. Some value terms, like good and bad, are pure evaluations in that they only express the value of something without any additional descriptive content. They are known as thin evaluative concepts. Thick evaluative concepts, like courageous and cruel, provide more information by expressing other qualities besides the evaluation, such as character traits. Values are often understood as degrees that cover positive and negative magnitudes corresponding to good and bad. The term value is sometimes restricted to positive degrees to contrast with the term disvalue for the negative degrees. The terms better and worse are used to compare degrees, but it is controversial whether this is possible in all cases. Evaluation is the assessment or measurement of value, often employed to compare the benefits of different options to find the most advantageous choice.
Evaluative terms are sometimes distinguished from normative or deontic terms. Normative terms, like right, wrong, and obligation, prescribe actions or other states by expressing what ought to be done or what is required. Evaluative terms have a wider scope because they are not limited to what people can control or are responsible for. For example, involuntary events like digestion and earthquakes can have a positive or negative value even if they are not right or wrong in a strict sense. Despite the distinction, evaluative and normative concepts are closely related. For example, the value of the consequences of an action may affect whether this action is right or wrong.
Value theorists distinguish various types or categories of values. The different classifications overlap and are based on considerations like the source, beneficiary, and function of the value.
Intrinsic and instrumental
A thing has intrinsic or final value if it is good in itself or good for its own sake. This means that it is good independent of external factors or outcomes. A thing has extrinsic or instrumental value if it is useful or leads to other good things. In other words, it is a means to bring about a desired end. For example, tools like microwaves or money have instrumental value thanks to the useful functions they perform. In some cases, the thing produced this way itself has instrumental value, like when using money to buy a microwave. This can result in a chain of instrumentally valuable things in which each link gets its value by causing the following link. Intrinsically valuable things stand at the endpoint of these chains and ground the value of all the links that come before them.
One suggestion to distinguish between intrinsic and instrumental value relies on a thought experiment that imagines the valuable thing in isolation from everything else. In such a situation, purely instrumentally valuable things lose their value since they serve no purpose while purely intrinsically valuable things remain valuable. According to a common view, pleasure is one of the sources of intrinsic value. Other suggested sources include desire satisfaction, virtue, life, health, beauty, freedom, and knowledge.
Intrinsic and instrumental value are not exclusive categories. As a result, a thing can have both intrinsic and instrumental value if it is both good in itself while also leading to other good things. In a similar sense, a thing can have different instrumental values at the same time, both positive and negative ones. This is the case if some of its consequences are good while others are bad. The total instrumental value of a thing is the value balance of all its consequences.
Because instrumental value depends on other values, it is an open question whether it should be understood as a value in a strict sense. For example, the overall value of a chain of causes leading to an intrinsically valuable thing remains the same if instrumentally valuable links are added or removed without affecting the intrinsically valuable thing. The observation that the overall value does not change is sometimes used as an argument that the things added or removed do not have value.
Traditionally, value theorists have used the terms intrinsic value and final value interchangeably, just like the terms extrinsic value and instrumental value. This practice has been questioned in the 20th century based on the idea that they are similar but not identical concepts. According to this view, a thing has intrinsic value if the source of its value is an intrinsic property, meaning that the value does not depend on how the thing is related to other objects. Extrinsic value, by contrast, depends on external relations. This view sees instrumental value as one type of extrinsic value based on causal relations. At the same time, it allows that there are other types of non-instrumental extrinsic value. Final value is understood as what is valued for its own sake, independent of whether intrinsic or extrinsic properties are responsible.
Absolute and relative
Another distinction relies on the contrast between absolute and relative value. Absolute value, also called value simpliciter, is a form of unconditional value. A thing has relative value if its value is limited to certain considerations or viewpoints.
One form of relative value is restricted to the type of an entity, expressed in sentences like "That is a good knife" or "Jack is a good thief". This form is known as attributive goodness since the word "good" modifies the meaning of another term. To be attributively good as a certain type means to possess certain qualities characteristic of that type. For example, a good knife is sharp and a good thief has the skill of stealing without getting caught. Attributive goodness contrasts with predicative goodness. The sentence "Pleasure is good" is an example since the word good is used as a predicate to talk about the unqualified value of pleasure. Attributive and predicative goodness can accompany each other, but this is not always the case. For instance, being a good thief is not necessarily a good thing.
Another type of relative value restricts goodness to a specific person. Known as personal value, it expresses what benefits a particular person, promotes their welfare, or is in their interest. For example, a poem written by a child may have personal value for the parents even if the poem lacks value for others. Impersonal value, by contrast, is good in general without restriction to any specific person or viewpoint. Some philosophers, like G. E. Moore, reject the existence of personal values, holding that all values are impersonal. Others have proposed theories about the relation between personal and impersonal value. The agglomerative theory says that impersonal value is nothing but the sum of all personal values. Another view understands impersonal value as a specific type of personal value taken from the perspective of the universe as a whole.
Agent-relative value is sometimes contrasted with personal value as another person-specific limitation of the evaluative outlook. Agent-relative values affect moral considerations about what a person is responsible for or guilty of. For example, if Mei promises to pick Pedro up from the airport then an agent-relative value obligates Mei to drive to the airport. This obligation is in place even if it does not benefit Mei, in which case there is an agent-relative value without a personal value. In consequentialism, agent-relative values are often discussed in relation to ethical dilemmas. One dilemma revolves around the question of whether an individual should murder an innocent person if this prevents the murder of two innocent people by a different perpetrator. The agent-neutral perspective tends to affirm this idea since one murder is preferable to two. The agent-relative perspective tends to reject this conclusion, arguing that the initial murder should be avoided since it negatively impacts the agent-relative value of the individual.
Traditionally, most value theorists see absolute value as the main topic of value theory and focus their attention on this type. Nonetheless, some philosophers, like Peter Geach and Philippa Foot, have argued that the concept of absolute value by itself is meaningless and should be understood as one form of relative value.
Other distinctions
Other classifications of values have been proposed without a widely accepted main classification. Some focus on the types of entities that have value. They include distinct categories for entities like things, the environment, individuals, groups, and society. Another subdivision pays attention to the type of benefit involved and encompasses material, economic, moral, social, political, aesthetic, and religious values. Classifications by the beneficiary of the value distinguish between self- and other-oriented values.
A historically influential approach identifies three spheres of value: truth, goodness, and beauty. For example, the neo-Kantian philosopher Wilhelm Windelband characterizes them as the highest goals of consciousness, with thought aiming at truth, will aiming at goodness, and emotion aiming at beauty. A similar view, proposed by the Chinese philosopher Zhang Dainian, says that the value of truth belongs to knowledge, the value of goodness belongs to behavior, and the value of beauty belongs to art. This three-fold distinction also plays a central role in the philosophies of Franz Brentano and Jürgen Habermas. Other suggested types of values include objective, subjective, potential, actual, contingent, necessary, inherent, and constitutive values.
Schools of thought
Realism and anti-realism
Value realism is the view that values have mind-independent existence. This means that objective facts determine what has value, irrespective of subjective beliefs and preferences. According to this view, the evaluative statement "That act is bad" is as objectively true or false as the empirical statement "That act causes distress".
Realists often analyze values as properties of valuable things. For example, stating that kindness is good asserts that kindness possesses the property of goodness. Value realists disagree about what type of property is involved. Naturalists say that value is a natural property. Natural properties can be known through empirical observation and are studied by the natural sciences. This means that value is similar to other natural properties, like size and shape. Non-naturalists reject this view but agree that values are real. They say that values differ significantly from empirical properties and belong to another realm of reality. According to one view, they are known through rational or emotional intuition rather than empirical observation.
Another disagreement among realists is about whether the entity carrying the value is a concrete individual or a state of affairs. For instance, the name "Bill" refers to an individual while the sentence "Bill is pleased" refers to a state of affairs. States of affairs are complex entities that combine other entities, like the individual "Bill" and the property "is pleased". Some value theorists hold that the value is a property directly of Bill while others contend that it is a property of the fact that Bill is pleased. This distinction affects various disputes in value theory. In some cases, a value is intrinsic according to one view and extrinsic according to the other.
Value realism contrasts with anti-realism, which comes in various forms. In its strongest version, anti-realism rejects the existence of values in any form, claiming that value statements are meaningless. Between realism and this strong form of anti-realism, there are various intermediary views. Some anti-realists accept that value claims have meaning but deny that they have a truth value, a position known as non-cognitivism. For example, emotivists say that value claims express emotional attitudes, similar to how exclamations like "Yay!" or "Boo!" express emotions rather than stating facts.
Cognitivists contend that value statements have a truth value. Error theorists defend anti-realism based on this view by stating that all value statements are false because there are no values. Another view accepts the existence of values but denies that they are mind-independent. According to this view, the mental states of individuals determine whether an object has value, for instance, because individuals desire it. A similar view is defended by existentialists like Jean-Paul Sartre, who argued that values are human creations that endow the world with meaning. Subjectivist theories say that values are relative to each subject, whereas more objectivist outlooks hold that values depend on mind in general rather than on the individual mind. A different position accepts that values are mind-independent but holds that they are reducible to other facts, meaning that they are not a fundamental part of reality. One form of reductionism maintains that a thing is good if it is fitting to favor this thing, regardless of whether people actually favor it. The strongest form of realism says that value is a fundamental part of reality and cannot be reduced to other aspects.
Sources of value
Various theories about the sources of value have been proposed. They aim to clarify what kinds of things are intrinsically good. The historically influential theory of hedonism states that how people feel is the only source of value. More specifically, it says that pleasure is the only intrinsic good and pain is the only intrinsic evil. According to this view, everything else only has instrumental value to the extent that it leads to pleasure or pain, including knowledge, health, and justice. Hedonists usually understand the term pleasure in a broad sense that covers all kinds of enjoyable experiences, including bodily pleasures of food and sex as well as more intellectual or abstract pleasures, like the joy of reading a book or being happy about a friend's promotion. Pleasurable experiences come in degrees, and hedonists usually associate their intensity and duration with the magnitude of value they have.
Many hedonists identify pleasure and pain as symmetric opposites, meaning that the value of pleasure balances out the disvalue of pain if they have the same intensity. However, some hedonists reject this symmetry and give more weight to avoiding pain than to experiencing pleasure. Although it is widely accepted that pleasure is valuable, the hedonist claim that it is the only source of value is controversial.
Desire theories offer a slightly different account, stating that desire satisfaction is the only source of value. This theory overlaps with hedonism because many people desire pleasure and because desire satisfaction is often accompanied by pleasure. Nonetheless, there are important differences: people desire a variety of other things as well, like knowledge, achievement, and respect; additionally, desire satisfaction may not always result in pleasure. Some desire theorists hold that value is a property of desire satisfaction itself, while others say that it is a property of the objects that satisfy a desire. One debate in desire theory concerns whether any desire is a source of value. For example, if a person has a false belief that money makes them happy, it is questionable whether the satisfaction of their desire for money is a source of value. To address this consideration, some desire theorists say that a desire can only provide value if a fully informed and rational person would have it. This view excludes faulty desires.
Perfectionism identifies the realization of human nature and the cultivation of characteristic human abilities as the source of intrinsic goodness. It covers capacities and character traits belonging to the bodily, emotional, volitional, cognitive, social, artistic, and religious fields. Perfectionists disagree about which human excellences are the most important. Many are pluralistic in recognizing a diverse array of human excellences, such as knowledge, creativity, health, beauty, free agency, and moral virtues like benevolence and courage. According to one suggestion, there are two main fields of human goods: theoretical abilities responsible for understanding the world and practical abilities responsible for interacting with it. Some perfectionists provide an ideal characterization of human nature, holding that human excellences are those aspects that promote the realization of this goal. This view is exemplified in Aristotle's focus on rationality as the nature and ideal state of human beings. Non-humanistic versions extend perfectionism to the natural world in general, arguing that excellence as a source of intrinsic value is not limited to the human realm.
Monism and pluralism
Monist theories of value assert that there is only a single source of intrinsic value. They agree that various things have value but maintain that all fundamentally good things belong to the same type. For example, hedonists hold that nothing but pleasure has intrinsic value, while desire theorists argue that desire satisfaction is the only source of fundamental goodness. Pluralists reject this view, contending that a simple single-value system is too crude to capture the complexity of the sphere of values. They say that diverse sources of value exist independently of one another, each contributing to the overall value of the world.
One motivation for value pluralism is the observation that people value diverse types of things, including happiness, friendship, success, and knowledge. This diversity becomes particularly prominent when people face difficult decisions between competing values, such as choosing between friendship and career success. Since monists accept only one source of intrinsic value, they explain this observation by holding that other items in this diversity have only instrumental value or, in some cases, no value at all.
Pluralists have proposed various accounts of how their view affects practical decisions. Rational decisions often rely on value comparisons to determine which course of action should be pursued. Some pluralists discuss a hierarchy of values reflecting the relative importance and weight of different value types to help people promote higher values when faced with difficult choices. For example, philosopher Max Scheler ranks values based on how enduring and fulfilling they are into the levels of pleasure, utility, vitality, culture, and holiness. He asserts that people should not promote lower values, like pleasure, if this comes at the expense of higher values.
Radical pluralists reject this approach, putting more emphasis on diversity by holding that different types of values are not comparable with each other. This means that each value type is unique, making it impossible to determine which one is superior. Some value theorists use radical pluralism to argue that value conflicts are inevitable, that the gain of one value cannot always compensate for the loss of another, and that some ethical dilemmas are irresolvable. For example, philosopher Isaiah Berlin applied this idea to the values of liberty and equality, arguing that a gain in one cannot make up for a loss in the other. Similarly, philosopher Joseph Raz said that it is often impossible to compare the values of career paths, like when choosing between becoming a lawyer or a clarinetist. The terms incomparability and incommensurability are often used as synonyms. However, philosophers like Ruth Chang distinguish them. According to this view, incommensurability means that there is no common measure to quantify values of different types. Incommensurable values may or may not be comparable. If they are, it is possible to say that one value is better than another, but it is not possible to quantify how much better it is.
Others
Several controversies surround the question of how the intrinsic value of a whole is determined by the intrinsic values of its parts. According to the additivity principle, the intrinsic value of a whole is simply the sum of the intrinsic values of its parts. For example, if a virtuous person becomes happy then the intrinsic value of the happiness is simply added to the intrinsic value of the virtue, thereby increasing the overall value.
Various counterexamples to the additivity principle have been proposed, suggesting that the relation between parts and wholes is more complex. For example, Immanuel Kant argued that if a vicious person becomes happy, this happiness, though good in itself, does not increase the overall value. On the contrary, it makes things worse, according to Kant, since viciousness should not be rewarded with happiness. This situation is known as an organic unity, a whole whose intrinsic value differs from the sum of the intrinsic values of its parts. Another perspective, called holism about value, asserts that the intrinsic value of a thing depends on its context. Holists can argue that happiness has positive intrinsic value in the context of virtue and negative intrinsic value in the context of vice. Atomists reject this view, saying that intrinsic value is context-independent.
Theories of value aggregation provide concrete principles for calculating the overall value of an outcome based on how positively or negatively each individual is affected by it. For example, if a government implements a new policy that affects some people positively and others negatively, theories of value aggregation can be used to determine whether the overall value of the policy is positive or negative. Axiological utilitarianism accepts the additivity principle, saying that the total value is simply the sum of all individual values. Axiological egalitarians are not only interested in the sum total of value but also in how the values are distributed. They argue that an outcome with a balanced advantage distribution is better than an outcome where some benefit a lot while others benefit little, even if the two outcomes have the same sum total. Axiological prioritarians are particularly concerned with the benefits of individuals who are worse off. They say that providing advantages to people in need has more value than providing the same advantages to others.
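The contrast between these three aggregation rules can be made concrete with a small numerical sketch. The code below is illustrative only: the specific formulas, especially the inequality penalty and the concave prioritarian weighting, are common textbook-style choices rather than principles stated in this article.

```python
# Illustrative sketch of three value-aggregation rules.
# The exact formulas here are assumptions for demonstration purposes.
import math

def utilitarian(benefits):
    """Additivity: overall value is the plain sum of individual benefits."""
    return sum(benefits)

def egalitarian(benefits, inequality_weight=0.5):
    """Penalize unequal distributions: subtract a fraction of the
    average pairwise gap between individuals from the total."""
    n = len(benefits)
    avg_gap = sum(abs(a - b) for a in benefits for b in benefits) / (n * n)
    return sum(benefits) - inequality_weight * avg_gap * n

def prioritarian(benefits):
    """Give extra weight to the worse off by summing a concave
    transform (here: square root) of each individual benefit."""
    return sum(math.sqrt(b) for b in benefits)

equal = [4, 4]    # balanced outcome
unequal = [1, 7]  # same sum total, skewed distribution

# Utilitarianism is indifferent between outcomes with the same sum...
assert utilitarian(equal) == utilitarian(unequal) == 8
# ...while egalitarian and prioritarian rules favor the balanced one.
assert egalitarian(equal) > egalitarian(unequal)
assert prioritarian(equal) > prioritarian(unequal)
```

The two outcomes have identical sum totals, so only the egalitarian and prioritarian rules distinguish them, which mirrors the policy example above.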
Formal axiology is a theory of value initially developed by philosopher Robert S. Hartman. This approach treats axiology as a formal science, akin to logic and mathematics. It uses axioms to give an abstract definition of value, understanding it not as a property of things but as a property of concepts. Values measure the extent to which an entity fulfills its concept. For example, a good car has all the desirable qualities of cars, like a reliable engine and effective brakes, whereas a bad car lacks many. Formal axiology distinguishes between three fundamental value types: intrinsic values apply to people; extrinsic values apply to things, actions, and social roles; systemic values apply to conceptual constructs. Formal axiology examines how these value types form a hierarchy and how they can be measured.
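The core idea that value measures how well an entity fulfills its concept can be sketched in a few lines. This is a toy illustration, not Hartman's actual formal system: the property lists and the simple fraction-based measure are assumptions chosen to echo the car example above.

```python
# Toy sketch of formal axiology's central idea: the value of an entity
# as the degree to which it fulfills its concept. The concept's
# property set and the fractional measure are illustrative assumptions.
CAR_CONCEPT = {"reliable engine", "effective brakes", "working lights"}

def fulfillment(entity_properties, concept=CAR_CONCEPT):
    """Return the fraction of the concept's desirable qualities
    that the entity actually possesses."""
    return len(entity_properties & concept) / len(concept)

good_car = {"reliable engine", "effective brakes", "working lights"}
bad_car = {"working lights"}

assert fulfillment(good_car) == 1.0       # fulfills its concept fully
assert fulfillment(bad_car) < fulfillment(good_car)
```

On this picture a "good car" is simply one whose properties exhaust the car concept, while a "bad car" leaves most of the concept unfulfilled.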
Methods
Value theorists employ various methods to conduct their inquiry, justify theories, and measure values. Intuitionists rely on intuitions to assess evaluative claims. In this context, an intuition is an immediate apprehension or understanding of a self-evident claim, meaning that its truth can be assessed without inferring it from another observation. Value theorists often rely on thought experiments to gain this type of understanding. Thought experiments are imagined scenarios that exemplify philosophical problems. Philosophers use counterfactual reasoning to evaluate the possible consequences and gain insight into the underlying problem. For example, philosopher Robert Nozick imagines an experience machine that can virtually simulate an ideal life. Based on his observation that people would not want to spend the rest of their lives in this pleasurable simulation, Nozick argues against the hedonist claim that pleasure is the only source of intrinsic value. According to him, the thought experiment shows that the value of an authentic connection to reality is not reducible to pleasure.
Phenomenologists provide a detailed first-person description of the experience of values. They closely examine emotional experiences, ranging from desire, interest, and preference to feelings in the form of love and hate. However, they do not limit their inquiry to these phenomena, asserting that values permeate experience at large. A key aspect of the phenomenological method is to suspend preconceived ideas and judgments to understand the essence of experiences as they present themselves to consciousness.
The analysis of concepts and ordinary language is another method of inquiry. By examining terms and sentences used to talk about values, value theorists aim to clarify their meanings, uncover crucial distinctions, and formulate arguments for and against axiological theories. For example, a prominent dispute between naturalists and non-naturalists hinges on the conceptual analysis of the term good, in particular, whether its meaning can be analyzed through natural terms, like pleasure.
In the social sciences, value theorists face the challenge of measuring the evaluative outlook of individuals and groups. Specifically, they aim to determine personal value hierarchies, for example, whether a subject gives more weight to truth than to moral goodness or beauty. They distinguish between direct and indirect measurement methods. Direct methods involve asking people straightforward questions about what things they value and which value priorities they have. This approach assumes that people are aware of their evaluative outlook and able to articulate it accurately. Indirect methods do not share this assumption, asserting instead that values guide behavior and choices on an unconscious level. Consequently, they observe how people decide and act, seeking to infer the underlying value attitudes responsible for picking one course of action rather than another.
Various catalogs or scales of values have been proposed to measure value priorities. The Rokeach Value Survey considers a total of 36 values divided into two groups: instrumental values, like honesty and capability, which serve as means to promote terminal values, such as freedom and family security. It asks participants to rank them based on their impact on the participants' lives, aiming to understand the relative importance assigned to each of them. The Schwartz theory of basic human values is a modification of the Rokeach Value Survey that seeks to provide a more cross-cultural and universal assessment. It arranges the values in a circular manner to reflect that neighboring values are compatible with each other, such as tradition and security, while values on opposing sides may conflict with each other, such as tradition and self-direction.
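The ranking procedure described above can be sketched in a few lines of code. This is a minimal illustration, not the actual survey instrument: the value names are a small hypothetical subset (the real Rokeach Value Survey uses 18 instrumental and 18 terminal values), and aggregating participants' rankings by median rank is one common but assumed choice.

```python
from statistics import median

# Hypothetical subset of Rokeach-style instrumental values; each participant
# ranks them from 1 (most important) downward.
rankings = {
    "honesty":    [1, 2, 1],
    "capability": [3, 1, 2],
    "ambition":   [2, 3, 3],
}

def aggregate(rankings):
    """Order values by their median rank across participants (lower = higher priority)."""
    return sorted(rankings, key=lambda v: median(rankings[v]))

print(aggregate(rankings))  # ['honesty', 'capability', 'ambition']
```

The output is a group-level value hierarchy of the kind such surveys aim to measure: here honesty outranks capability, which outranks ambition.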
In various fields
Ethics
Ethics and value theory are overlapping fields of inquiry. Ethics studies moral phenomena, focusing on how people should act or which behaviors are morally right. Value theory investigates the nature, sources, and types of values in general. Some philosophers understand value theory as a subdiscipline of ethics. This is based on the idea that what people should do is affected by value considerations but not necessarily limited to them. Another view sees ethics as a subdiscipline of value theory. This outlook follows the idea that ethics is concerned with moral values affecting what people can control, whereas value theory examines a broader horizon of values, including those beyond anyone's control. Some perspectives contrast ethics and value theory, asserting that the normative concepts examined by ethics are distinct from the evaluative concepts examined by value theory. Axiological ethics is a subfield of ethics examining the nature and role of values from a moral perspective, with particular interest in determining which ends are worth pursuing.
The ethical theory of consequentialism combines the perspectives of ethics and value theory, asserting that the rightness of an action depends on the value of its consequences. Consequentialists compare possible courses of action, saying that people should follow the one leading to the best overall consequences. The overall consequences of an action are the totality of its effects, or how it impacts the world by starting a causal chain of events that would not have occurred otherwise. Distinct versions of consequentialism rely on different theories of the sources of value. Classical utilitarianism, a prominent form of consequentialism, says that moral actions produce the greatest amount of pleasure for the greatest number of people. It combines a consequentialist outlook on right action with a hedonist outlook on pleasure as the only source of intrinsic value.
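The consequentialist comparison of actions can be expressed as a simple maximization. The sketch below is a toy model under classical utilitarian assumptions: the action names and the numeric values of their consequences are invented, and "value" is flattened to a single number summed across everyone affected.

```python
# Each action maps to the assumed values of its effects on those affected.
actions = {
    "donate": [5, 3, -1],
    "ignore": [0, 0, 0],
    "invest": [2, 2, 2],
}

def best_action(actions):
    """Pick the action whose overall consequences have the highest total value."""
    return max(actions, key=lambda a: sum(actions[a]))

print(best_action(actions))  # 'donate' (total 7, versus 6 and 0)
```

The design choice of summing effects is itself a substantive axiological commitment; other consequentialisms would weight or aggregate the consequences differently.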
Economics
Economics is a social science studying how goods and services are produced, distributed, and consumed, both from the perspective of individual agents and societal systems. Economists view evaluations as a driving force underlying economic activity. They use the notion of economic value and related evaluative concepts to understand decision-making processes, resource allocation, and the impact of policies. The economic value or benefit of a commodity is the advantage it provides to an economic agent, often measured in terms of the money people are willing to pay for it.
Economic theories of value are frameworks to explain how economic value arises and which factors influence it. Prominent frameworks include the classical labor theory of value and the neo-classical marginal theory of value. The labor theory, initially developed by the economists Adam Smith and David Ricardo, distinguishes between use value—the utility or satisfaction a commodity provides—and exchange value—the proportion at which one commodity can be exchanged with another. It focuses on exchange value, which it says is determined by the amount of labor required to produce the commodity. In its simplest form, it directly correlates exchange value to labor time. For example, if the time needed to hunt a deer is twice the time needed to hunt a beaver, then one deer is worth two beavers. The philosopher Karl Marx extended the labor theory of value in various ways. He introduced the concept of surplus value, which goes beyond the time and resources invested to explain how capitalists can profit from the labor of their employees.
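In its simplest form, the labor theory's deer-and-beaver arithmetic is just a ratio of labor times. The hour figures below are assumed for illustration; only their 2:1 proportion matters.

```python
# Assumed labor times for Smith's hunting example (hours per unit).
labor_hours = {"deer": 4.0, "beaver": 2.0}

def exchange_ratio(a, b, labor_hours):
    """Units of commodity b that one unit of commodity a exchanges for,
    under the simple labor theory: value proportional to labor time."""
    return labor_hours[a] / labor_hours[b]

print(exchange_ratio("deer", "beaver", labor_hours))  # 2.0: one deer for two beavers
```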
The marginal theory of value focuses on consumption rather than production. It says that the utility of a commodity is the source of its value. Specifically, it is interested in marginal utility, the additional satisfaction gained from consuming one more unit of the commodity. Marginal utility often diminishes if many units have already been consumed, leading to a decrease in the exchange value of commodities that are abundantly available. Both the labor theory and the marginal theory were later challenged by the Sraffian theory of value.
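Diminishing marginal utility can be illustrated with a toy utility schedule. The figures are invented: what matters is only that each additional unit contributes less than the one before, so total utility grows but flattens.

```python
# Assumed marginal utilities of the 1st, 2nd, 3rd, and 4th unit consumed.
marginal_utility = [10, 6, 3, 1]

def total_utility(units):
    """Total satisfaction from consuming the first `units` units."""
    return sum(marginal_utility[:units])

# Each successive unit adds less satisfaction than the previous one.
assert all(a > b for a, b in zip(marginal_utility, marginal_utility[1:]))

print([total_utility(n) for n in range(1, 5)])  # [10, 16, 19, 20]
```

The flattening totals are the mechanism behind the claim in the text that abundantly available commodities tend to have lower exchange value.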
Sociology
Sociology studies social behavior, relationships, institutions, and society at large. In their analyses and explanations of these phenomena, some sociologists use the concept of values to understand issues like social cohesion and conflict, the norms and practices people follow, and collective action. They usually understand values as subjective attitudes possessed by individuals and shared in social groups. According to this view, values are beliefs or priorities about goals worth pursuing that guide people to act in certain ways. This subjective conception of values as aspects of individuals and social groups contrasts with the objective conceptions of values more prominent in economics, which understands values as aspects of commodities.
Shared values can help unite people in the pursuit of a common cause, fostering social cohesion. Value differences, by contrast, may divide people into antagonistic groups that promote conflicting projects. Some sociologists employ value research to predict how people will behave. Given the observation that someone values the environment, they may conclude that this person is more likely to recycle or support pro-environmental legislation. One approach to this type of research uses value scales, such as the Rokeach Value Survey and the Schwartz theory of basic human values, to measure the value outlook of individuals and groups.
Anthropology
Anthropology also studies human behavior and societies but does not limit itself to contemporary social structures, extending its focus to humanity both past and present. Similar to sociologists, many anthropologists understand values as social representations of goals worth pursuing. For them, values are embedded in mental structures associated with culture and ideology about what is desirable. A slightly different approach in anthropology focuses on the practical side of values, holding that values are constantly created through human activity.
Anthropological value theorists use values to compare cultures. They can be employed to examine similarities as universal concerns present in every society. For example, anthropologist Clyde Kluckhohn and sociologist Fred Strodtbeck proposed a set of value orientations found in every culture. Values can also be used to analyze differences between cultures and value changes within a culture. Anthropologist Louis Dumont followed this idea, suggesting that the cultural meaning systems in distinct societies differ in their value priorities. He argued that values are ordered hierarchically around a set of paramount values that trump all other values.
The contrast between individualism and collectivism is an influential topic in cross-cultural value research. Individualism promotes values associated with the autonomy of individuals, such as self-directedness, independence, and personal goals. Collectivism gives priority to group-related values, like cooperation, conformity, and foregoing personal advantages for the sake of collective benefits. As a rough simplification, it is often suggested that individualism is more prominent in Western cultures, whereas collectivism is more commonly observed in Eastern cultures.
Psychology
As the study of mental phenomena and behavior, psychology contrasts with sociology and anthropology by focusing more on the perspective of individuals than the broader social and cultural contexts. Psychologists tend to understand values as abstract motivational goals or general principles about what matters. From this perspective, values differ from specific plans and intentions since they are stable evaluative tendencies not bound to concrete situations.
Various psychological theories of values establish a close link between an individual's evaluative outlook and their personality. An early theory, formulated by psychologists Philip E. Vernon and Gordon Allport, understands personality as a collection of aspects unified by a coherent value system. It distinguishes between six personality types corresponding to the value spheres of theory, economy, aesthetics, society, politics, and religion. For example, people with theoretical personalities place special importance on the value of knowledge and discovery of truth. Influenced by Vernon and Allport, psychologist Milton Rokeach conceptualized values as enduring beliefs about what goals and conduct are preferable. He divided values into the categories of instrumental and terminal values. He thought that a central aspect of personality lies in how people prioritize the values within each category. Psychologist Shalom Schwartz refined this approach by linking values to emotion and motivation. He explored how value rankings affect decisions in which the values of different options conflict.
History
The origin of value theory lies in the ancient period, with early reflections on the good life and the ends worth pursuing. Socrates identified the highest good as the right combination of knowledge, pleasure, and virtue, holding that active inquiry is associated with pleasure while knowledge of the good leads to virtuous action. Plato conceived the good as a universal and changeless idea. It is the highest form in his theory of forms, acting as the source of all other forms and the foundation of reality and knowledge. Aristotle (384–322 BCE) saw eudaimonia as the highest good and ultimate goal of human life. He understood eudaimonia as a form of happiness or flourishing achieved through the exercise of virtues in accordance with reason, leading to the full realization of human potential. Epicurus proposed a nuanced egoistic hedonism, stating that personal pleasure is the greatest good while recommending moderation to avoid the negative effects of excessive desires and anxiety about the future. According to the Stoics, a virtuous life following nature and reason is the highest good. They thought that self-mastery and rationality lead to a pleasant equanimity independent of external circumstances. Influenced by Plato, Plotinus held that the Good is the ultimate principle of reality from which everything emanates. For him, evil is not a distinct opposing principle but merely a deficiency or absence of being resulting from a missing connection to the Good.
In ancient Indian philosophy, the idea that people are trapped in a cycle of rebirths arose around 600 BCE. Many traditions adopted it, arguing that liberation from this cycle is the highest good. Hindu philosophy distinguishes the four fundamental values of duty, economic wealth, sensory pleasure, and liberation. Many Hindu schools of thought prioritize the value of liberation. A similar outlook is found in ancient Buddhist philosophy, starting between the sixth and the fifth centuries BCE, where the cessation of suffering through the attainment of Nirvana is considered the ultimate goal. In ancient China, Confucius explored the role of self-cultivation in leading a virtuous life, viewing general benevolence towards humanity as the supreme virtue. In comparing the highest virtue to water, Laozi (6th century BCE) emphasized the importance of living in harmony with the natural order of the universe.
Religious teachings influenced value theory in the medieval period. Early Christian thinkers, such as St. Augustine of Hippo (354–430 CE), adapted the theories of Plato and Plotinus into a religious framework. They identified God as the ultimate source of existence and goodness, seeing evil as a mere lack or privation of good. Drawing on Aristotelianism, Christian philosopher Thomas Aquinas (1224–1274 CE) said that communion with the divine, achieved through a beatific vision of God, is the highest end of humans. In Arabic–Persian philosophy, al-Farabi asserted that the supreme form of human perfection is an intellectual happiness, reachable in the afterlife by developing the intellect to its fullest potential. Avicenna (980–1037 CE) also regarded the intellect as the highest human faculty. He thought that a contemplative life prepares humans for the greatest good, which is only attained in the afterlife when humans are free from bodily distractions. In Indian philosophy, Adi Shankara taught that liberation, the highest human end, is reached by realizing that the self is the same as ultimate reality encompassing all of existence. In Chinese thought, the early neo-Confucian philosopher Han Yu (768–824) identified the sage as an ideal role model who, through self-cultivation, achieves personal integrity expressed in harmony between theory and action in daily life.
Thomas Hobbes (1588–1679) understood values as subjective phenomena that depend on a person's interests. He examined how the interests of individuals can be aggregated to guide political decisions. David Hume (1711–1776) agreed with Hobbes's subjectivism, exploring how values differ from objective facts. Immanuel Kant (1724–1804) asserted that the highest good is happiness in proportion to moral virtue. He emphasized the primacy of virtue by respecting the moral law and the inherent value of people, adding that moral virtue is ideally, but not always, accompanied by personal happiness. Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873) formulated classical utilitarianism, combining a hedonist theory about value with a consequentialist theory about right action. Hermann Lotze (1817–1881) developed a philosophy of values, holding that values make the world meaningful as an ordered whole centered around goodness. Influenced by Lotze, the neo-Kantian philosopher Wilhelm Windelband (1848–1915) understood philosophy as a theory of values, claiming that universal values determine the principles that all subjects should follow, including the norms of knowledge and action. Friedrich Nietzsche (1844–1900) held that values are human creations. He criticized traditional values in general and Christian values in particular, calling for a revaluation of all values centered on life-affirmation, power, and excellence.
Pragmatist philosopher John Dewey (1859–1952) formulated an axiological naturalism. He distinguished values from value judgments, adding that the skill of correct value assessment must be learned through experience. G. E. Moore (1873–1958) developed and refined various axiological concepts, such as organic unities and the contrast between intrinsic and extrinsic value. He defended non-naturalism about the nature of values and intuitionism about the knowledge of values. W. D. Ross (1877–1971) accepted and further elaborated on Moore's intuitionism, using it to formulate an axiological pluralism. R. B. Perry (1876–1957) and D. W. Prall (1886–1940) articulated systematic theories of value based on the idea that values originate in affective states such as interest and liking. Robert S. Hartman (1910–1973) developed formal axiology, saying that values measure the level to which a thing embodies its ideal concept. A. J. Ayer (1910–1989) proposed anti-realism about values, arguing that value statements merely express the speaker's approval or disapproval. A different type of anti-realism, formulated by J. L. Mackie (1917–1981), asserts that all value assertions are false since no values exist. G. H. von Wright (1916–2003) provided a conceptual analysis of the term good by distinguishing different meanings or varieties of goodness, such as the technical goodness of a good driver and the hedonic goodness of a good meal.
In continental philosophy, Franz Brentano (1838–1917) formulated an early version of the fitting-attitude theory of value, saying that a thing is good if it is fitting to have a positive attitude towards it, such as love. In the 1890s, his students Alexius Meinong (1853–1920) and Christian von Ehrenfels (1859–1932) conceived the idea of a general theory of values. Edmund Husserl (1859–1938), another of Brentano's students, developed phenomenology and applied this approach to the study of values. Following Husserl's approach, Max Scheler (1874–1928) and Nicolai Hartmann (1882–1950) each proposed a comprehensive system of axiological ethics. Asserting that values have objective reality, they explored how different value types form a value hierarchy and examined the problems of value conflicts and right decisions from this hierarchical perspective. Martin Heidegger (1889–1976) criticized value theory, claiming that it rests on a mistaken metaphysical perspective by understanding values as aspects of things. Existentialist philosopher Jean-Paul Sartre (1905–1980) said that values do not exist by themselves but are actively created, emphasizing the role of human freedom, responsibility, and authenticity in the process.
References
Notes
Citations
Sources
Axiology
Philosophy of psychology

Philosophy of psychology is concerned with the history and foundations of psychology. It deals with both epistemological and ontological issues and shares interests with other fields, including philosophy of mind and theoretical psychology. Philosophical and theoretical psychology are intimately tied and are therefore sometimes used interchangeably or used together. However, philosophy of psychology relies more on debates general to philosophy and on philosophical methods, whereas theoretical psychology draws on multiple areas.
Epistemology
Some of the issues studied by the philosophy of psychology are epistemological concerns about the methodology of psychological investigation. For example:
What constitutes a psychological explanation?
What is the most appropriate methodology for psychology: mentalism, behaviorism, or a compromise?
Are self-reports a reliable data-gathering method?
What conclusions can be drawn from null hypothesis tests?
Can first-person experiences (emotions, desires, beliefs, etc.) be measured objectively?
Ontology
Philosophers of psychology also concern themselves with ontological issues, like:
Can psychology be theoretically reduced to neuroscience?
What are psychological phenomena?
What is the relationship between subjectivity and objectivity in psychology?
Relations to other fields
Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, cognitive psychology, and artificial intelligence, for example questioning whether psychological phenomena can be explained using the methods of neuroscience, evolutionary theory, and computational modeling, respectively. Although these are all closely related fields, some concerns still arise about the appropriateness of importing their methods into psychology. Some such concerns are whether psychology, as the study of individuals as information processing systems (see Donald Broadbent), is autonomous from what happens in the brain (even if psychologists largely agree that the brain in some sense causes behavior (see supervenience)); whether the mind is "hard-wired" enough for evolutionary investigations to be fruitful; and whether computational models can do anything more than offer possible implementations of cognitive theories that tell us nothing about the mind (Fodor & Pylyshyn 1988).
Related to the philosophy of psychology are philosophical and epistemological inquiries about clinical psychiatry and psychopathology. Philosophy of psychiatry is mainly concerned with the role of values in psychiatry: derived from philosophical value theory and phenomenology, values-based practice is aimed at improving and humanizing clinical decision-making in the highly complex environment of mental health care. Philosophy of psychopathology is mainly involved in the epistemological reflection about the implicit philosophical foundations of psychiatric classification and evidence-based psychiatry. Its aim is to unveil the constructive activity underlying the description of mental phenomena.
Main areas
Different schools and systems of psychology represent approaches to psychological problems, which are often based on different philosophies of consciousness.
Functional psychology
Functionalism treats the psyche as derived from the activity of external stimuli, deprived of essential autonomy and denying free will, a stance that later influenced behaviourism. One of the founders of functionalism was William James, who was also close to pragmatism, which puts human action before questions and doubts about the nature of the world and of man himself.
Psychoanalysis
Freud's doctrine, called metapsychology, aimed to give the human self greater freedom from instinctive and irrational desires through dialogue with a psychologist and analysis of the unconscious. The psychoanalytic movement later split: one part treated psychoanalysis as a practice of working with archetypes (analytical psychology), another criticised the social limitations of the unconscious (Freudo-Marxism), and later Lacan's structural psychoanalysis interpreted the unconscious as a language.
Phenomenological psychology
Edmund Husserl rejected the physicalism of most of the psychological teachings of his time and began to understand consciousness as the only reality accessible to reliable cognition. His disciple Heidegger added to this the assertion of the fundamental finitude of man and the threat of a loss of authenticity in the technical world, and thus laid the foundation for existential psychology.
Structuralism
The recognised creator of psychology as a science, Wilhelm Wundt described the primordial structures of the psyche that determine perception and behaviour, but faced the problem of the impossibility of direct access to these structures and the vagueness of their description. Half a century later his ideas, combined with Saussure's semiotics, strongly influenced structuralism in the general humanities, as well as the post-structuralism and post-modernism that emerged from it, where structures were treated as linguistic invariants.
References
Further reading
J. Stacy Adams. 1976. Advances in Experimental Social Psychology. Academic Press. ISBN 9780120152094.
Leonard Berkowitz. 1972. Social Psychology. Scott Foresman & Co.
Ned Block. 1980. Readings in Philosophy of Psychology, Volume 1. Harvard University Press. ISBN 9780674748767.
Stuart C. Brown, Royal Institute of Philosophy. 1974. Macmillan. Original from the University of Michigan.
Joseph Margolis. 1984. Philosophy of Psychology. Prentice-Hall Foundations of Philosophy Series. Prentice-Hall. ISBN 9780136643265.
Ken Richardson. 1988. Understanding Psychology. Open University Press. ISBN 9780335098422.
George Botterill, Peter Carruthers. 1999. The Philosophy of Psychology. Cambridge University Press. ISBN 9780521559157.
Craig Steven Titus. 2009. Philosophical Psychology: Psychology, Emotions, and Freedom. CUA Press. ISBN 9780977310364.
Jose Bermudez. 2005. Philosophy of Psychology: A Contemporary Introduction. Routledge.
Terence Horgan, John Tienson. 1996. Connectionism and the Philosophy of Psychology. MIT Press. ISBN 9780262082488.
External links
Part 7 of MindPapers: Philosophy of Cognitive Science (contains over 1,500 articles, many with online copies)
Theology

Theology is the study of religious belief from a religious perspective, with a focus on the nature of divinity. It is taught as an academic discipline, typically in universities and seminaries. It occupies itself with the unique content of analyzing the supernatural, but also deals with religious epistemology and asks and seeks to answer questions of revelation. Revelation pertains to the acceptance of God, gods, or deities, as not only transcendent or above the natural world, but also willing and able to interact with the natural world and to reveal themselves to humankind.
Theologians use various forms of analysis and argument (experiential, philosophical, ethnographic, historical, and others) to help understand, explain, test, critique, defend or promote any myriad of religious topics. As in philosophy of ethics and case law, arguments often assume the existence of previously resolved questions, and develop by making analogies from them to draw new inferences in new situations.
The study of theology may help a theologian more deeply understand their own religious tradition, another religious tradition, or it may enable them to explore the nature of divinity without reference to any specific tradition. Theology may be used to propagate, reform, or justify a religious tradition; or it may be used to compare, challenge (e.g. biblical criticism), or oppose (e.g. irreligion) a religious tradition or worldview. Theology might also help a theologian address some present situation or need through a religious tradition, or to explore possible ways of interpreting the world.
Etymology
The term "theology" derives from the Greek theologia (θεολογία), a combination of theos (Θεός, 'god') and logia (λογία, 'utterances, sayings, oracles')—the latter word relating to Greek logos (λόγος, 'word, discourse, account, reasoning'). The term would pass on to Latin as , then French as , eventually becoming the English theology.
Through several variants (e.g., theologie, teologye), the English theology had evolved into its current form by 1362. The sense that the word has in English depends in large part on the sense that the Latin and Greek equivalents had acquired in patristic and medieval Christian usage although the English term has now spread beyond Christian contexts.
Classical philosophy
Greek theologia (θεολογία) was used with the meaning 'discourse on God' around 380 BC by Plato in The Republic. Aristotle divided theoretical philosophy into mathematike, physike, and theologike, with the latter corresponding roughly to metaphysics, which, for Aristotle, included discourse on the nature of the divine.
Drawing on Greek Stoic sources, the Latin writer Varro distinguished three forms of such discourse:
mythical, concerning the myths of the Greek gods;
rational, philosophical analysis of the gods and of cosmology; and
civil, concerning the rites and duties of public religious observance.
Later usage
Some Latin Christian authors, such as Tertullian and Augustine, followed Varro's threefold usage. However, Augustine also defined theologia as "reasoning or discussion concerning the Deity".
The Latin author Boethius, writing in the early 6th century, used theologia to denote a subdivision of philosophy as a subject of academic study, dealing with the motionless, incorporeal reality; as opposed to physica, which deals with corporeal, moving realities. Boethius' definition influenced medieval Latin usage.
In patristic Greek Christian sources, theologia could refer narrowly to devout and/or inspired knowledge of and teaching about the essential nature of God.
In scholastic Latin sources, the term came to denote the rational study of the doctrines of the Christian religion, or (more precisely) the academic discipline that investigated the coherence and implications of the language and claims of the Bible and of the theological tradition (the latter often as represented in Peter Lombard's Sentences, a book of extracts from the Church Fathers).
In the Renaissance, especially with Florentine Platonist apologists of Dante's poetics, the distinction between 'poetic theology' (theologia poetica) and 'revealed' or Biblical theology serves as stepping stone for a revival of philosophy as independent of theological authority.
It is in the last sense, theology as an academic discipline involving rational study of Christian teaching, that the term passed into English in the 14th century, although it could also be used in the narrower sense found in Boethius and the Greek patristic authors, to mean rational study of the essential nature of God, a discourse now sometimes called theology proper.
From the 17th century onwards, the term theology began to be used to refer to the study of religious ideas and teachings that are not specifically Christian or correlated with Christianity (e.g., in the term natural theology, which denoted theology based on reasoning from natural facts independent of specifically Christian revelation) or that are specific to another religion (such as below).
Theology can also be used in a derived sense to mean "a system of theoretical principles; an (impractical or rigid) ideology".
In religion
The term theology has been deemed by some as only appropriate to the study of religions that worship a supposed deity (a theos), i.e. more widely than monotheism; and presuppose a belief in the ability to speak and reason about this deity (in logia). They suggest the term is less appropriate in religious contexts that are organized differently (i.e., religions without a single deity, or that deny that such subjects can be studied logically). Hierology has been proposed, by such people as Eugène Goblet d'Alviella (1908), as an alternative, more generic term.
Abrahamic religions
Christianity
As defined by Thomas Aquinas, theology is constituted by a triple aspect: what is taught by God, teaches of God, and leads to God. This indicates the three distinct areas of God as theophanic revelation, the systematic study of the nature of divine and, more generally, of religious belief, and the spiritual path. Christian theology as the study of Christian belief and practice concentrates primarily upon the texts of the Old Testament and the New Testament as well as on Christian tradition. Christian theologians use biblical exegesis, rational analysis and argument. Theology might be undertaken to help the theologian better understand Christian tenets, to make comparisons between Christianity and other traditions, to defend Christianity against objections and criticism, to facilitate reforms in the Christian church, to assist in the propagation of Christianity, to draw on the resources of the Christian tradition to address some present situation or need, or for a variety of other reasons.
Islam
Islamic theological discussion that parallels Christian theological discussion is called Kalam; the Islamic analogue of Christian theological discussion would more properly be the investigation and elaboration of Sharia or Fiqh.
Some universities in Germany have established departments of Islamic theology.
Judaism
In Jewish theology, the historical absence of political authority has meant that most theological reflection has happened within the context of the Jewish community and synagogue, including through rabbinical discussion of Jewish law and Midrash (rabbinic biblical commentaries). Jewish theology is also linked to ethics, as is the case with theology in other religions, and therefore has implications for how one behaves.
Indian religions
Buddhism
Some academic inquiries within Buddhism, dedicated to the investigation of a Buddhist understanding of the world, prefer the designation Buddhist philosophy to the term Buddhist theology, since Buddhism lacks the same conception of a theos or a Creator God. Jose Ignacio Cabezon, who argues that the use of theology is in fact appropriate, can only do so, he says, because "I take theology not to be restricted to discourse on God.... I take 'theology' not to be restricted to its etymological meaning. In that latter sense, Buddhism is of course atheological, rejecting as it does the notion of God."
Whatever the case, there are various Buddhist theories and discussions on the nature of Buddhahood and the ultimate reality / highest form of divinity, which has been termed "buddhology" by some scholars like Louis de La Vallée-Poussin. This is a different usage of the term than when it is taken to mean the academic study of Buddhism, and here would refer to the study of the nature of what a Buddha is. In Mahayana Buddhism, a central concept in its buddhology is the doctrine of the three Buddha bodies (Sanskrit: Trikāya). This doctrine is shared by all Mahayana Buddhist traditions.
Hinduism
Within Hindu philosophy, there are numerous traditions of philosophical speculation on the nature of the universe, of God (termed Brahman, Paramatma, Ishvara, and/or Bhagavan in some schools of Hindu thought) and of the ātman (soul). The Sanskrit word for the various schools of Hindu philosophy is darśana ('view, viewpoint'). The most influential of these in modern Hindu religion is Vedanta and its various sub-schools, each of which presents a different theory of Ishvara (the Supreme Lord, God).
Vaishnava theology has been a subject of study for many devotees, philosophers and scholars in India for centuries. A large part of its study lies in classifying and organizing the manifestations of thousands of gods and their aspects. In recent decades the study of Hinduism has also been taken up by a number of academic institutions in Europe, such as the Oxford Centre for Hindu Studies and Bhaktivedanta College.
There are also other traditions of Hindu theology, including the various theologies of Shaivism (which include dualistic and non-dualistic strands) as well as the theologies of the Goddess centered Shakta traditions which posit a feminine deity as the ultimate.
Other religions
Shinto
In Japan, the term theology has been ascribed to Shinto since the Edo period, beginning with the publication of Mano Tokitsuna's work. In modern times, other terms are used to denote studies in Shinto belief, as well as Buddhist belief.
Modern Paganism
English academic Graham Harvey has commented that Pagans "rarely indulge in theology". Nevertheless, theology has been applied in some sectors across contemporary Pagan communities, including Wicca, Heathenry, Druidry and Kemetism. As these religions have given precedence to orthopraxy, theological views often vary among adherents. The term is used by Christine Kraemer in her book Seeking The Mystery: An Introduction to Pagan Theologies and by Michael York in Pagan Theology: Paganism as a World Religion.
Topics
Richard Hooker defines theology as "the science of things divine". The term can, however, be used for a variety of disciplines or fields of study. Theology considers whether the divine exists in some form, such as in physical, supernatural, mental, or social realities, and what evidence for and about it may be found via personal spiritual experiences or historical records of such experiences as documented by others. The study of these assumptions is not part of theology proper, but is found in the philosophy of religion, and increasingly through the psychology of religion and neurotheology. Theology's aim, then, is to record, structure and understand these experiences and concepts; and to use them to derive normative prescriptions for how to live our lives.
History of academic discipline
The history of the study of theology in institutions of higher education is as old as the history of such institutions themselves. For instance:
Taxila was an early centre of Vedic learning, possibly from the 6th century BC or earlier (Scharfe, Hartmut. 2002. Education in Ancient India. Leiden: Brill);
the Platonic Academy founded in Athens in the 4th century BC seems to have included theological themes in its subject matter;
the Chinese Taixue delivered Confucian teaching from the 2nd century BC;
the School of Nisibis was a centre of Christian learning from the 4th century AD;
Nalanda in India was a site of Buddhist higher learning from at least the 5th or 6th century AD; and
the Moroccan University of Al-Karaouine was a centre of Islamic learning from the 10th century, as was Al-Azhar University in Cairo.
The earliest universities were developed under the aegis of the Latin Church by papal bull as studia generalia and perhaps from cathedral schools. It is possible, however, that the development of cathedral schools into universities was quite rare, with the University of Paris being an exception. Later they were also founded by kings (University of Naples Federico II, Charles University in Prague, Jagiellonian University in Kraków) or by municipal administrations (University of Cologne, University of Erfurt).
In the early medieval period, most new universities were founded from pre-existing schools, usually when these schools were deemed to have become primarily sites of higher education. Many historians state that universities and cathedral schools were a continuation of the interest in learning promoted by monasteries. Christian theological learning was, therefore, a component in these institutions, as was the study of church or canon law: universities played an important role in training people for ecclesiastical offices, in helping the church pursue the clarification and defence of its teaching, and in supporting the legal rights of the church over against secular rulers. At such universities, theological study was initially closely tied to the life of faith and of the church: it fed, and was fed by, practices of preaching, prayer and celebration of the Mass.
During the High Middle Ages, theology was the ultimate subject at universities, being named "The Queen of the Sciences". It served as the capstone to the Trivium and Quadrivium that young men were expected to study. This meant that the other subjects (including philosophy) existed primarily to help with theological thought.
In this context, medieval theology in the Christian West could subsume fields of study which would later become more self-sufficient, such as metaphysics (Aristotle's "first philosophy") or ontology (the science of being). As one reference work on ontology notes of Aquinas: "In the 13th century, appropriating Aristotle's threefold division of the speculative sciences (physics, mathematics, and what Aquinas variously calls 'first philosophy' or 'metaphysics' or 'theology'), Aquinas argues that primary being and being in general are the subject of the same science (eadem enim est scientia primi entis et entis communis) inasmuch as primary being(s) are principles of the others (nam prima entia sunt principia aliorum; cf. Aquinas' In Boeth. de Trin. 5.1, In 10 meta. 6 and 11, and the Proemium to the latter)." And of Aristotle: "In the sixth book [of the Metaphysics] (1026a16–32), Aristotle refers to a first philosophy that is concerned with being as being, but in contrast to physics and mathematics, precisely as the speculative science of what is separate from matter and motion. First philosophy in this context is labeled 'theology' inasmuch as the divine would only be present in something of this nature, i.e., some immutable being (ousia akinetos)."
Christian theology's preeminent place in the university started to come under challenge during the European Enlightenment, especially in Germany. Other subjects gained in independence and prestige, and questions were raised about the place of a discipline that seemed to involve a commitment to the authority of particular religious traditions in institutions that were increasingly understood to be devoted to independent reason.
Since the early 19th century, various approaches have emerged in the West to theology as an academic discipline. Much of the debate concerning theology's place in the university or within a general higher education curriculum centres on whether theology's methods are appropriately theoretical and (broadly speaking) scientific or, on the other hand, whether theology requires a pre-commitment of faith by its practitioners, and whether such a commitment conflicts with academic freedom. (Frei, Hans W. 1992. Types of Christian Theology, edited by W. C. Placher and G. Hunsinger. New Haven, CT: Yale University Press; McClendon, James W. 2000. "Theology and the University." Ch. 10 in Systematic Theology 3: Witness. Nashville, TN: Abingdon.)
Ministerial training
In some contexts, theology has been held to belong in institutions of higher education primarily as a form of professional training for Christian ministry. This was the basis on which Friedrich Schleiermacher, a liberal theologian, argued for the inclusion of theology in the new University of Berlin in 1810.
For instance, in Germany, theological faculties at state universities are typically tied to particular denominations, Protestant or Roman Catholic, and those faculties will offer denominationally-bound (konfessionsgebunden) degrees, and have denominationally bound public posts amongst their faculty; as well as contributing "to the development and growth of Christian knowledge" they "provide the academic training for the future clergy and teachers of religious instruction at German schools."
In the United States, several prominent colleges and universities were started in order to train Christian ministers. Harvard, Georgetown, Boston University, Yale, Duke University, and Princeton all had the theological training of clergy as a primary purpose at their foundation.
Seminaries and bible colleges have continued this alliance between the academic study of theology and training for Christian ministry. There are, for instance, numerous prominent examples in the United States, including Phoenix Seminary, Catholic Theological Union in Chicago, The Graduate Theological Union in Berkeley, Criswell College in Dallas, The Southern Baptist Theological Seminary in Louisville, Trinity Evangelical Divinity School in Deerfield, Illinois, Dallas Theological Seminary, North Texas Collegiate Institute in Farmers Branch, Texas, and the Assemblies of God Theological Seminary in Springfield, Missouri. The only Judeo-Christian seminary for theology is the 'Idaho Messianic Bible Seminary' which is part of the Jewish University of Colorado in Denver.
As an academic discipline in its own right
In some contexts, scholars pursue theology as an academic discipline without formal affiliation to any particular church (though members of staff may well have affiliations to churches), and without focussing on ministerial training. This applies, for instance, to the Department of Theological Studies at Concordia University in Canada, and to many university departments in the United Kingdom, including the Faculty of Divinity at the University of Cambridge, the Department of Theology and Religion at the University of Exeter, and the Department of Theology and Religious Studies at the University of Leeds. Traditional academic prizes, such as the University of Aberdeen's Lumsden and Sachs Fellowship, tend to acknowledge performance in theology (or divinity as it is known at Aberdeen) and in religious studies.
Religious studies
In some contemporary contexts, a distinction is made between theology, which is seen as involving some level of commitment to the claims of the religious tradition being studied, and religious studies, which by contrast is normally seen as requiring that the question of the truth or falsehood of the religious traditions studied be kept outside its field. Religious studies involves the study of historical or contemporary practices or of those traditions' ideas using intellectual tools and frameworks that are not themselves specifically tied to any religious tradition and that are normally understood to be neutral or secular. In contexts where 'religious studies' in this sense is the focus, the primary forms of study are likely to include:
Anthropology of religion
Comparative religion
History of religions
Philosophy of religion
Psychology of religion
Sociology of religion
Sometimes, theology and religious studies are seen as being in tension; at other times, they are held to coexist without serious tension; and occasionally the existence of any clear boundary between them is denied.
Criticism
Pre-20th century
Whether or not reasoned discussion about the divine is possible has long been a point of contention. As early as the fifth century BC, Protagoras, who is reputed to have been exiled from Athens because of his agnosticism about the existence of the gods, said that "Concerning the gods I cannot know either that they exist or that they do not exist, or what form they might have, for there is much to prevent one's knowing: the obscurity of the subject and the shortness of man's life." (Poster, Carol. "Protagoras (fl. 5th C. BCE)." Internet Encyclopedia of Philosophy. Retrieved 6 October 2008.)
Since at least the eighteenth century, various authors have criticized the suitability of theology as an academic discipline. In 1772, Baron d'Holbach labeled theology "a continual insult to human reason" in Le Bon sens. Lord Bolingbroke, an English politician and political philosopher, wrote in Section IV of his Essays on Human Knowledge, "Theology is in fault not religion. Theology is a science that may justly be compared to the Box of Pandora. Many good things lie uppermost in it; but many evil lie under them, and scatter plagues and desolation throughout the world."
Thomas Paine, a Deistic American political theorist and pamphleteer, wrote in his three-part work The Age of Reason (1794, 1795, 1807): "The study of theology, as it stands in Christian churches, is the study of nothing; it is founded on nothing; it rests on no principles; it proceeds by no authorities; it has no data; it can demonstrate nothing; and it admits of no conclusion. Not anything can be studied as a science, without our being in possession of the principles upon which it is founded; and as this is the case with Christian theology, it is therefore the study of nothing."

The German atheist philosopher Ludwig Feuerbach sought to dissolve theology in his work Principles of the Philosophy of the Future: "The task of the modern era was the realization and humanization of God – the transformation and dissolution of theology into anthropology." This mirrored his earlier work The Essence of Christianity (1841), for which he was banned from teaching in Germany, in which he had said that theology was a "web of contradictions and delusions".
The American satirist Mark Twain remarked in his essay "The Lowest Animal", originally written around 1896 but not published until after Twain's death in 1910, that: "[Man] is the only animal that loves his neighbor as himself and cuts his throat if his theology isn't straight. He has made a graveyard of the globe in trying his honest best to smooth his brother's path to happiness and heaven.... The higher animals have no religion. And we are told that they are going to be left out in the Hereafter. I wonder why? It seems questionable taste."
20th and 21st centuries
A. J. Ayer, a British former logical-positivist, sought to show in his essay "Critique of Ethics and Theology" that all statements about the divine are nonsensical and any divine-attribute is unprovable. He wrote: "It is now generally admitted, at any rate by philosophers, that the existence of a being having the attributes which define the god of any non-animistic religion cannot be demonstratively proved.... [A]ll utterances about the nature of God are nonsensical."
Jewish atheist philosopher Walter Kaufmann, in his essay "Against Theology", sought to differentiate theology from religion in general: "Theology, of course, is not religion; and a great deal of religion is emphatically anti-theological.... An attack on theology, therefore, should not be taken as necessarily involving an attack on religion. Religion can be, and often has been, untheological or even anti-theological." However, Kaufmann found that "Christianity is inescapably a theological religion."
English atheist Charles Bradlaugh believed theology prevented human beings from achieving liberty, although he also noted that many theologians of his time held that, because modern scientific research sometimes contradicts sacred scriptures, the research must therefore be wrong. Robert G. Ingersoll, an American agnostic lawyer, stated that, when theologians had power, the majority of people lived in hovels, while a privileged few had palaces and cathedrals. In Ingersoll's opinion, it was science that improved people's lives, not theology. Ingersoll further maintained that trained theologians reason no better than a person who assumes the devil must exist because pictures resemble the devil so exactly.
The British evolutionary biologist Richard Dawkins has been an outspoken critic of theology. In an article published in The Independent in 1993, he severely criticizes theology as entirely useless, declaring that it has completely and repeatedly failed to answer any questions about the nature of reality or the human condition. He states, "I have never heard any of them [i.e. theologians] ever say anything of the smallest use, anything that was not either platitudinously obvious or downright false." He then states that, if all theology were completely eradicated from the earth, no one would notice or even care. He concludes: "The achievements of theologians don't do anything, don't affect anything, don't achieve anything, don't even mean anything. What makes you think that 'theology' is a subject at all?"
See also
Thealogy
References
External links
"Theology" on Encyclopædia Britannica
Chattopadhyay, Subhasis. "Reflections on Hindu Theology." Prabuddha Bharata or Awakened India 120(12): 664–672 (2014). Edited by Swami Narasimhananda.
Philosophy and literature | Philosophy and literature involves the literary treatment of philosophers and philosophical themes (the literature of philosophy), and the philosophical treatment of issues raised by literature (the philosophy of literature).
The philosophy of literature, a subset of aesthetics, examines the nature of art and the significance of verbal arts, often overlooked in traditional aesthetic discussions. It raises philosophical questions about narrative, empathy, and ethics through fictional characters. Philosophers like Plato critiqued literature's ethical influence, while modern thinkers explore language's role in bridging minds and the truth in fiction, differentiating between the reality of characters and their narratives.
The philosophy of literature
Strictly speaking, the philosophy of literature is a branch of aesthetics, the branch of philosophy that deals with the question, "What is art?" Much of aesthetic philosophy, however, has traditionally focused on the plastic arts or music, at the expense of the verbal arts. Much traditional discussion of aesthetic philosophy seeks to establish criteria of artistic quality that are indifferent to the subject matter being depicted. Since all literary works, almost by definition, contain notional content, aesthetic theories that rely on purely formal qualities tend to overlook literature.
The very existence of narrative raises philosophical issues. In narrative, a creator can embody, and readers be led to imagine, fictional characters, and even fantastic creatures or technologies. The ability of the human mind to imagine, and even to experience empathy with, these fictional characters is itself revealing about the nature of the human mind. Some fiction can be thought of as a sort of a thought experiment in ethics: it describes fictional characters, their motives, their actions, and the consequences of their actions. It is in this light that some philosophers have chosen various narrative forms to teach their philosophy (see below).
Literature and language
Plato, for instance, believed that literary culture had a strong impact on the ethical outlook of its consumers. In The Republic, Plato displays a strong hostility to the contents of the culture of his period, and proposes a strong censorship of popular literature in his utopia.
More recently, however, philosophers of various stripes have taken different and less hostile approaches to literature. Since the work of the British Empiricists and Immanuel Kant in the late eighteenth century, Western philosophy has long been preoccupied with a fundamental question of epistemology: the relationship between ideas in the human mind and the external world, if such a world exists. In more recent years, these epistemological concerns have shifted toward an extended discussion of words and meaning, exploring the possibility of language bridging the gap between minds. This cluster of issues concerning the meaning of language and "writings" is sometimes referred to as the linguistic turn.
As such, techniques and tools developed for literary criticism and literary theory rose to greater prominence in Western philosophy of the late twentieth century. Philosophers across many traditions paid more attention to literature than their predecessors did. Some sought to examine the question of whether it was in fact truly possible to communicate using words, whether it was possible for an author's intended meaning to be communicated to a reader. Others treated literary works as examples of contemporary culture and sought to reveal unconscious attitudes they felt were present in these works, for purposes of social criticism.
The truth of fiction
Literary works also pose issues concerning truth and the philosophy of language. In educated opinion, at least, it is commonly reputed as true that Sherlock Holmes lived in London. (see David Lewis 'Truth in Fiction', American Philosophical Quarterly, Vol. 15. No. 1, January 1978) It is also considered true that Samuel Pepys lived in London. Yet Sherlock Holmes never lived anywhere at all; he is a fictional character. Samuel Pepys, contrarily, is judged to have been a real person. Contemporary interests in Holmes and Pepys share strong similarities; the only reason why anyone knows either of their names is because of an abiding interest in reading about their alleged deeds and words. These two statements would appear to belong to two different orders of truth. Further problems arise concerning the truth value of statements about fictional worlds and characters that can be implied but are nowhere explicitly stated by the sources for our knowledge about them, such as Sherlock Holmes had only one head or Sherlock Holmes never traveled to the moon.
The literature of philosophy
Philosophical poems
Several poets have written poems on philosophical themes, and some important philosophers have expressed their philosophy in verse. The cosmogony of Hesiod and the De Rerum Natura of Lucretius are important philosophical poems. The genre of epic poetry was also used to teach philosophy. Vyasa narrated the ancient Indian epic Mahabharata in order to teach Indian philosophy and Hindu philosophy. Homer also presented some philosophical teachings in his Odyssey.
Many of the Eastern philosophers worked out their thought in a poetical fashion. Some of the important names include:
Vyasa
Laozi
Jalal ad-Din Muhammad Rumi
Omar Khayyám
Al-Ma'arri
Nizami Ganjavi
Sheikh Saadi
Hafiz Shirazi
Muhammad Iqbal
Matsuo Bashō
Farid ud-Din Attar
Salah Abdel Sabour
Mahmoud Darwish
Notable Western philosophical poets include:
John Ashbery
Georges Bataille
Giannina Braschi
G. K. Chesterton
Robert Creeley
Samuel Taylor Coleridge
T. S. Eliot
Homer
Søren Kierkegaard
Lucretius
John Milton
Marianne Moore
Pablo Neruda
Friedrich Nietzsche
Mary Oliver
Fernando Pessoa
Rainer Maria Rilke
Percy Bysshe Shelley
St. John of the Cross
Leslie Marmon Silko
Hildegard von Bingen
William Carlos Williams
C. K. Williams
James Wright
Philosophical fiction
Some philosophers have undertaken to write philosophy in the form of fiction, including novels and short stories (see separate article on philosophical fiction). This is apparent early on in the literature of philosophy, where philosophers such as Plato wrote dialogues in which fictional or fictionalized characters discuss philosophical subjects; Socrates frequently appears as a protagonist in Plato's dialogues, and the dialogues are one of the prime sources of knowledge about Socrates' teaching, though at this remove it is sometimes hard to distinguish Socrates' actual positions from Plato's own. Numerous early Christian writers, including Augustine, Boethius, and Peter Abelard produced dialogues; several early modern philosophers, such as George Berkeley and David Hume, wrote occasionally in this genre.
Some philosophers have turned to storytelling to convey their teachings. The 12th century Islamic philosopher Ibn Tufayl wrote a fictional Arabic narrative Hayy ibn Yaqdhan as a response to al-Ghazali's The Incoherence of the Philosophers; the 13th century Islamic theologian-philosopher Ibn al-Nafis later wrote a fictional narrative Theologus Autodidactus as a response to Ibn Tufayl's work. The German philosopher Friedrich Nietzsche often articulated his ideas in literary modes, most notably in Thus Spoke Zarathustra, a re-imagined account of the teachings of Zoroaster. Marquis de Sade and Ayn Rand wrote novels in which characters served as mouthpieces for philosophical positions, and acted by them in the plot. George Santayana was also a philosopher who wrote novels and poetry; the relationship between Santayana's characters and his beliefs is more complex. The existentialists include among their numbers important French authors who used fiction to convey their philosophical views; these include Jean-Paul Sartre's novel Nausea and play No Exit, and Albert Camus's The Stranger. Maurice Blanchot's entire fictional production, whose titles include The Step Not Beyond, The Madness of the Day, and The Writing of Disaster, among others, constitutes an indispensable corpus for the treatment of the relationship between philosophy and literature. So does Jacques Derrida's The Post Card: From Socrates to Freud and Beyond.
Several philosophers have had an important influence on literature. Arthur Schopenhauer, largely as a result of his system of aesthetics, is perhaps the most influential recent philosopher in the history of literature; Thomas Hardy's later novels frequently allude to Schopenhauerian themes, particularly in Jude the Obscure. Schopenhauer also had an important influence on Joseph Conrad. Schopenhauer also had a less specific but more widely diffused influence on the Symbolist movement in European literature. Lionel Johnson also refers to Schopenhauer's aesthetics in his essay The Cultured Faun. Jacques Derrida's entire oeuvre has been hugely influential for so-called continental philosophy and the understanding of the role of literature in modernity.
Other works of fiction considered to have philosophical content include:
Joseph Conrad, Heart of Darkness
Fyodor Dostoevsky, Brothers Karamazov and Crime and Punishment
Jostein Gaarder, Sophie's World
Hermann Hesse, The Glass Bead Game
James Joyce, Ulysses
Franz Kafka, The Metamorphosis
Milan Kundera, The Unbearable Lightness of Being
Thomas Mann, The Magic Mountain
Iris Murdoch, The Sea, the Sea
Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance
Marcel Proust, In Search of Lost Time
Dante Alighieri, Divine Comedy
William Shakespeare, Macbeth and Hamlet
Leo Tolstoy, The Death of Ivan Ilyich and War and Peace
Sergio Troncoso, The Nature of Truth
Marguerite Yourcenar, Memoirs of Hadrian
Philosophical writing as literature
Several philosophers are read for the literary merits of their works apart from their philosophical content. The philosophy in the Meditations of the Roman emperor Marcus Aurelius is unoriginal Stoicism, but the Meditations are still read for their literary merit and for the insight they give into the workings of the emperor's mind.
Arthur Schopenhauer's philosophy is noted for the quality and readability of its prose, as are some of the works of the British Empiricists, such as Locke and Hume. Søren Kierkegaard's style is frequently regarded as poetic artistry as well as philosophical, especially in Fear and Trembling and Either/Or. Friedrich Nietzsche's works such as Thus Spoke Zarathustra frequently resemble prose poetry and contain imagery and allusion instead of argument.
Philosophy in literature
Philosophers in literature
Socrates appears in a highly fictionalized guise, as a comic figure and the object of mockery, in The Clouds by Aristophanes. In the play, Socrates appears hanging from a basket, where he delivers oracles such as:
I'd never come up with a single thing
about celestial phenomena,
if I did not suspend my mind up high,
to mix my subtle thoughts with what's like them—the air.
If I turned my mind to lofty things,
but stayed there on the ground, I'd never make
the least discovery. For the earth, you see,
draws moist thoughts down by force into itself—
the same process takes place with watercress.
Early Taoist philosopher Zhuang Zhou expressed his ideas primarily through short literary anecdotes and fables such as "Dream of the Butterfly". The other major philosophers of the time appear as characters within these stories, allowing Zhuangzi to playfully explore their ideas and contrast them with his own, as he does with Laozi, Liezi, Hui Shi, and many others. Most prominently in his work is the presence of Confucius and his prominent disciples, who are sometimes used to undermine popular understandings of Confucian philosophy or to reinforce Zhuangzi's own understanding of how one lives by the Dao.
Jorge Luis Borges is perhaps the twentieth century's preeminent author of philosophical fiction. He wrote a short story in which the philosopher Averroes is the chief protagonist, Averroes's Search. Many plot points in his stories paraphrase the thought of philosophers, including George Berkeley, Arthur Schopenhauer, and Bertrand Russell; he also attributes various opinions to figures including George Dalgarno.
A key plot point in Umberto Eco's novel The Name of the Rose turns on the discovery of a mysterious book that turns out to contain a lost manuscript by Aristotle. Eco's later novel Foucault's Pendulum became the forerunner of a run of thrillers or detective fiction that toss around learned allusions and the names of historical thinkers; more recent examples include Dan Brown's The Da Vinci Code and The Rule of Four by Ian Caldwell and Dustin Thomason.
Also, Philip K. Dick, who has often been compared to Borges, raises a significant number of philosophical issues in his novels, everything from the problem of solipsism to many questions of perception and reality.
Fictional philosophers
Jorge Luis Borges introduces many philosophical themes, and several fictional philosophers, in his short stories. A fictional philosophical movement is a part of the premise of his story Tlön, Uqbar, Orbis Tertius, and the unnamed narrator of his story The Library of Babel could also be called a fictional philosopher. A fictional theologian is the subject of his story Three Versions of Judas.
Fictional philosophers occasionally occur throughout the works of Robert A. Heinlein and Ray Bradbury. Heinlein's Stranger in a Strange Land contains long passages that could be considered successors to the fictionalized philosophical dialogues of the ancient world, set within the plot.
See also
The arts and politics
Literary translation
Translation criticism
Science fiction as thought experiment
References
Sources
The Oxford Companion to Philosophy, Ted Honderich, ed. (Oxford University Press, 1995).
Borges, Jorge Luis, Collected Fictions, translated by Andrew Hurley, 1998.
Magee, Bryan, The Philosophy of Schopenhauer (Oxford University Press, revised edition, 1977).
External links
Philosophy and Literature at Paideia Archive
Philosophy and Literature at Stanford, directed by R. Lanier Anderson and Joshua Landy
Duke's Center for Philosophy, Arts, and Literature, directed by Toril Moi
Andrew Miller, The Truth Value of Statements Containing Names of Literary Characters as Subjects (2002 thesis)
Literature
Philosophy of literature | 0.797574 | 0.979857 | 0.781508 |
Constructivism (philosophy of science) | Constructivism is a view in the philosophy of science that maintains that scientific knowledge is constructed by the scientific community, which seeks to measure and construct models of the natural world. According to constructivists, natural science consists of mental constructs that aim to explain sensory experiences and measurements, and that there is no single valid methodology in science but rather a diversity of useful methods. They also hold that the world is independent of human minds, but knowledge of the world is always a human and social construction. Constructivism opposes the philosophy of objectivism, embracing the belief that human beings can come to know the truth about the natural world not mediated by scientific approximations with different degrees of validity and accuracy.
Constructivism and sciences
Social constructivism in sociology
One version of social constructivism contends that categories of knowledge and reality are actively created by social relationships and interactions. These interactions also alter the way in which scientific episteme is organized.
Social activity presupposes human interaction, and in the case of social construction, utilizing semiotic resources (meaning-making and signifying) with reference to social structures and institutions. Several traditions use the term Social Constructivism: psychology (after Lev Vygotsky), sociology (after Peter Berger and Thomas Luckmann, themselves influenced by Alfred Schütz), sociology of knowledge (David Bloor), sociology of mathematics (Sal Restivo), philosophy of mathematics (Paul Ernest). Ludwig Wittgenstein's later philosophy can be seen as a foundation for social constructivism, with its key theoretical concepts of language games embedded in forms of life.
Constructivism in philosophy of science
Thomas Kuhn argued that changes in scientists' views of reality not only contain subjective elements but result from group dynamics, "revolutions" in scientific practice, and changes in "paradigms". As an example, Kuhn suggested that the Sun-centric Copernican "revolution" replaced the Earth-centric views of Ptolemy not because of empirical failures but because of a new "paradigm" that exerted control over what scientists felt to be the more fruitful way to pursue their goals.
The view of reality as accessible only through models was called model-dependent realism by Stephen Hawking and Leonard Mlodinow. While not rejecting an independent reality, model-dependent realism says that we can know only an approximation of it provided by the intermediary of models.
These models evolve over time as guided by scientific inspiration and experiments.
In the field of the social sciences, constructivism as an epistemology urges that researchers reflect upon the paradigms that may be underpinning their research, and in the light of this that they become more open to considering other ways of interpreting any results of the research. Furthermore, the focus is on presenting results as negotiable constructs rather than as models that aim to "represent" social realities more or less accurately. Norma Romm, in her book Accountability in Social Research (2001), argues that social researchers can earn trust from participants and wider audiences insofar as they adopt this orientation and invite inputs from others regarding their inquiry practices and the results thereof.
Constructivism and psychology
In psychology, constructivism refers to many schools of thought that, though extraordinarily different in their techniques (applied in fields such as education and psychotherapy), are all connected by a common critique of previous standard objectivist approaches. Constructivist psychology schools share assumptions about the active constructive nature of human knowledge. In particular, the critique is aimed at the "associationist" postulate of empiricism, "by which the mind is conceived as a passive system that gathers its contents from its environment and, through the act of knowing, produces a copy of the order of reality."
In contrast, "constructivism is an epistemological premise grounded on the assertion that, in the act of knowing, it is the human mind that actively gives meaning and order to that reality to which it is responding".
The constructivist psychologies theorize about and investigate how human beings create systems for meaningfully understanding their worlds and experiences.
Constructivism and education
Joe L. Kincheloe has published numerous social and educational books on critical constructivism (2001, 2005, 2008), a version of constructivist epistemology that places emphasis on the exaggerated influence of political and cultural power in the construction of knowledge, consciousness, and views of reality. In the contemporary mediated electronic era, Kincheloe argues, dominant modes of power have never exerted such influence on human affairs. Coming from a critical pedagogical perspective, Kincheloe argues that understanding a critical constructivist epistemology is central to becoming an educated person and to the institution of just social change.
Kincheloe's characteristics of critical constructivism:
Knowledge is socially constructed: World and information co-construct one another
Consciousness is a social construction
Political struggles: Power plays an exaggerated role in the production of knowledge and consciousness
The necessity of understanding consciousness—even though it does not lend itself to traditional reductionistic modes of measurability
The importance of uniting logic and emotion in the process of knowledge and producing knowledge
The inseparability of the knower and the known
The centrality of the perspectives of oppressed peoples—the value of the insights of those who have suffered as the result of existing social arrangements
The existence of multiple realities: Making sense of a world far more complex than we originally imagined
Becoming humble knowledge workers: Understanding our location in the tangled web of reality
Standpoint epistemology: Locating ourselves in the web of reality, we are better equipped to produce our own knowledge
Constructing practical knowledge for critical social action
Complexity: Overcoming reductionism
Knowledge is always entrenched in a larger process
The centrality of interpretation: Critical hermeneutics
The new frontier of classroom knowledge: Personal experiences intersecting with pluriversal information
Constructing new ways of being human: Critical ontology
Constructivist approaches
Critical constructivism
A series of articles published in the journal Critical Inquiry (1991) served as a manifesto for the movement of critical constructivism in various disciplines, including the natural sciences. Not only truth and reality, but also "evidence", "document", "experience", "fact", "proof", and other central categories of empirical research (in physics, biology, statistics, history, law, etc.) reveal their contingent character as a social and ideological construction. Thus, a "realist" or "rationalist" interpretation is subjected to criticism. Kincheloe's political and pedagogical notion (above) has emerged as a central articulation of the concept.
Cultural constructivism
Cultural constructivism asserts that knowledge and reality are a product of their cultural context, meaning that two independent cultures will likely form different observational methodologies.
Genetic epistemology
James Mark Baldwin invented this expression, which was later popularized by Jean Piaget. From 1955 to 1980, Piaget was Director of the International Centre for Genetic Epistemology in Geneva.
Radical constructivism
Ernst von Glasersfeld was a prominent proponent of radical constructivism. This claims that knowledge is not a commodity that is transported from one mind into another. Rather, it is up to the individual to "link up" specific interpretations of experiences and ideas with their own reference of what is possible and viable. That is, the process of constructing knowledge, of understanding, is dependent on the individual's subjective interpretation of their active experience, not what "actually" occurs. Understanding and acting are seen by radical constructivists not as dualistic processes but "circularly conjoined".
Radical constructivism is closely related to second-order cybernetics.
Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains.
Relational constructivism
Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads and maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e., self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world.
In spite of the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions applying to human perceptual processes. Björn Kraus puts this point in a nutshell.
Social Constructivism
Criticisms
Numerous criticisms have been levelled at constructivism. The most common is that it either explicitly advocates or implicitly reduces to relativism.
Another criticism is that constructivism holds the concepts of two different social formations to be entirely different and incommensurable. If so, it is impossible to make comparative judgments about statements made according to each worldview, because the criteria of judgment must themselves be based on some worldview or other. This calls into question how communication between worldviews about the truth or falsity of any given statement could be established.
The Wittgensteinian philosopher Gavin Kitching argues that constructivists usually implicitly presuppose a deterministic view of language, which severely constrains the minds and use of words by members of societies: they are not just "constructed" by language on this view but are literally "determined" by it. Kitching notes the contradiction here: somehow, the advocate of constructivism is not similarly constrained. While other individuals are controlled by the dominant concepts of society, the advocate of constructivism can transcend these concepts and see through them.
See also
Autopoiesis
Consensus reality
Constructivism in international relations
Cultural pluralism
Epistemological pluralism
Tinkerbell effect
Map–territory relation
Meaning making
Metacognition
Ontological pluralism
Personal construct psychology
Perspectivism
Pragmatism
References
Further reading
Devitt, M. 1997. Realism and Truth, Princeton University Press.
Gillett, E. 1998. "Relativism and the Social-constructivist Paradigm", Philosophy, Psychiatry, & Psychology, Vol.5, No.1, pp. 37–48
Ernst von Glasersfeld 1987. The construction of knowledge, Contributions to conceptual semantics.
Ernst von Glasersfeld 1995. Radical constructivism: A way of knowing and learning.
Joe L. Kincheloe 2001. Getting beyond the Facts: Teaching Social Studies/Social Science in the Twenty-First Century, NY: Peter Lang.
Joe L. Kincheloe 2005. Critical Constructivism Primer, NY: Peter Lang.
Joe L. Kincheloe 2008. Knowledge and Critical Pedagogy, Dordrecht, The Netherlands: Springer.
Kitching, G. 2008. The Trouble with Theory: The Educational Costs of Postmodernism, Penn State University Press.
Björn Kraus 2014. Introducing a model for analyzing the possibilities of power, help and control. In: Social Work and Society. International Online Journal. Retrieved 3 April 2019. (http://www.socwork.net/sws/article/view/393)
Björn Kraus 2015. The Life We Live and the Life We Experience: Introducing the Epistemological Difference between "Lifeworld" (Lebenswelt) and "Life Conditions" (Lebenslage). In: Social Work and Society. International Online Journal. Retrieved 27 August 2018. (http://www.socwork.net/sws/article/view/438)
Björn Kraus 2019. Relational constructivism and relational social work. In: Webb, Stephen A. (ed.), The Routledge Handbook of Critical Social Work. Routledge International Handbooks. London and New York: Taylor & Francis Ltd.
Friedrich Kratochwil: Constructivism: what it is (not) and how it matters, in Donatella della Porta & Michael Keating (eds.) 2008, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective, Cambridge University Press, 80–98.
Mariyani-Squire, E. 1999. "Social Constructivism: A flawed Debate over Conceptual Foundations", Capitalism, Nature, Socialism, vol.10, no.4, pp. 97–125
Matthews, M.R. (ed.) 1998. Constructivism in Science Education: A Philosophical Examination, Kluwer Academic Publishers.
Edgar Morin 1986, La Méthode, Tome 3, La Connaissance de la connaissance.
Nola, R. 1997. "Constructivism in Science and in Science Education: A Philosophical Critique", Science & Education, Vol.6, no.1-2, pp. 55–83.
Jean Piaget (ed.) 1967. Logique et connaissance scientifique, Encyclopédie de la Pléiade, vol. 22. Editions Gallimard.
Herbert A. Simon 1969. The Sciences of the Artificial (3rd Edition MIT Press 1996).
Slezak, P. 2000. "A Critique of Radical Social Constructivism", in D.C. Philips, (ed.) 2000, Constructivism in Education: Opinions and Second Opinions on Controversial Issues, The University of Chicago Press.
Suchting, W.A. 1992. "Constructivism Deconstructed", Science & Education, vol.1, no.3, pp. 223–254
Paul Watzlawick 1984. The Invented Reality: How Do We Know What We Believe We Know? (Contributions to Constructivism), W W. Norton.
Tom Rockmore 2008. On Constructivist Epistemology.
Romm, N.R.A. 2001. Accountability in Social Research, Dordrecht, The Netherlands: Springer. https://www.springer.com/social+sciences/book/978-0-306-46564-2
External links
Journal of Constructivist Psychology
Radical Constructivism
Constructivist Foundations
Epistemological theories
Epistemology of science
Metatheory of science
Philosophical analogies
Social constructionism
Social epistemology
Systems theory
Theories of truth
Constructivism | 0.794906 | 0.98307 | 0.781448 |
Jurisprudence | Jurisprudence, also known as theory of law or philosophy of law, is the examination in a general perspective of what law is and what it ought to be. It investigates issues such as the definition of law; legal validity; legal norms and values; as well as the relationship between law and other fields of study, including economics, ethics, history, sociology, and political philosophy.
Modern jurisprudence began in the 18th century and was based on the first principles of natural law, civil law, and the law of nations. Contemporary philosophy of law addresses problems internal to law and legal systems and problems of law as a social institution that relates to the larger political and social context in which it exists. Jurisprudence can be divided into categories both by the type of question scholars seek to answer and by the theories of jurisprudence, or schools of thought, regarding how those questions are best answered:
Natural law holds that there are rational objective limits to the power of rulers, the foundations of law are accessible through reason, and it is from these laws of nature that human laws gain force.
Analytic jurisprudence rejects natural law's fusing of what law is and what it ought to be, espousing the use of a neutral point of view and descriptive language when referring to aspects of legal systems. It encompasses theories such as legal positivism and legal realism.
Normative jurisprudence is concerned with evaluative theories of law, dealing with what the goal or purpose of law is and what moral or political theories provide a foundation for the law. It attempts to determine what the proper function of law should be, what sorts of acts should be subject to legal sanctions, and what sorts of punishment should be permitted.
Experimental jurisprudence seeks to investigate the content of legal concepts using the methods of social science, unlike the philosophical methods of traditional jurisprudence.
The terms "philosophy of law" and "jurisprudence" are often used interchangeably, though jurisprudence sometimes encompasses forms of reasoning that fit into economics or sociology.
Overview
Whereas lawyers are interested in what the law is on a specific issue in a specific jurisdiction, analytical philosophers of law are interested in identifying the features of law shared across cultures, times, and places. Taken together, these foundational features of law offer the kind of universal definition philosophers are after. The general approach allows philosophers to ask questions about, for example, what separates law from morality, politics, or practical reason. While the field has traditionally focused on giving an account of law's nature, some scholars have begun to examine the nature of domains within law, e.g. tort law, contract law, or criminal law. These scholars focus on what makes certain domains of law distinctive and how one domain differs from another. A particularly fecund area of research has been the distinction between tort law and criminal law, which more generally bears on the difference between civil and criminal law.
In addition to analytic jurisprudence, legal philosophy is also concerned with normative theories of law. "Normative jurisprudence involves normative, evaluative, and otherwise prescriptive questions about the law."
Etymology and terminology
The English word is derived from the Latin iurisprudentia. Iuris is the genitive form of ius, meaning law, and prudentia means prudence (also: discretion, foresight, forethought, circumspection). It refers to the exercise of good judgment, common sense, and caution, especially in the conduct of practical matters. The word first appeared in written English in 1628, at a time when the word prudence meant knowledge of, or skill in, a matter. It may have entered English via the French jurisprudence, which appeared earlier.
The terms "philosophy of law" and "jurisprudence" are often used interchangeably, though jurisprudence sometimes encompasses forms of reasoning that fit into economics or sociology.
History
Ancient jurisprudence begins with various Dharmaśāstra texts of India. Dharmasutras of Āpastaṃba and Baudhāyana are examples.
In Ancient China, the Daoists, Confucians, and Legalists all had competing theories of jurisprudence.
Jurisprudence in ancient Rome had its origins with the periti—experts in the jus mos maiorum (traditional law), a body of oral laws and customs. Praetors established a working body of laws by judging whether or not singular cases were capable of being prosecuted either by the edicta, the annual pronunciation of prosecutable offences, or in extraordinary situations, additions made to the edicta. A iudex (originally a magistrate, later a private individual appointed to judge a specific case) would then prescribe a remedy according to the facts of the case.
The sentences of the iudex were supposed to be simple interpretations of the traditional customs, but—apart from considering what traditional customs applied in each case—soon developed a more equitable interpretation, coherently adapting the law to newer social exigencies. The law was then adjusted with evolving institutiones (legal concepts), while remaining in the traditional mode. Praetors were replaced in the 3rd century BC by a laical body of prudentes. Admission to this body was conditional upon proof of competence or experience. Under the Roman Empire, schools of law were created, and practice of the law became more academic. From the early Roman Empire to the 3rd century, a relevant body of literature was produced by groups of scholars, including the Proculians and Sabinians. The scientific nature of the studies was unprecedented in ancient times. After the 3rd century, juris prudentia became a more bureaucratic activity, with few notable authors. It was during the Eastern Roman Empire (5th century) that legal studies were once again undertaken in depth, and it is from this cultural movement that Justinian's Corpus Juris Civilis was born.
Modern jurisprudence began in the 18th century and was based on the first principles of natural law, civil law, and the law of nations.
Natural law
Natural law holds that there are rational objective limits to the power of rulers, the foundations of law are accessible through reason, and it is from these laws of nature that human laws gain force. The moral theory of natural law asserts that law is inherent in nature and constitutive of morality, at least in part, and that an objective moral order, external to human legal systems, underlies natural law.
On this view, while legislators can enact and even successfully enforce immoral laws, such laws are legally invalid. The view is captured by the maxim: "an unjust law is no law at all", where 'unjust' means 'contrary to the natural law.' Natural law theory has medieval origins in the philosophy of Thomas Aquinas, especially in his Treatise on law. In late 20th century, John Finnis revived interest in the theory and provided a modern reworking of it. For one, Finnis has argued that the maxim "an unjust law is no law at all" is a poor guide to the classical Thomist position.
In its general sense, natural law theory may be compared to both state-of-nature law and general law understood on the basis of being analogous to the laws of physical science. Natural law is often contrasted to positive law which asserts law as the product of human activity and human volition. Another approach to natural-law jurisprudence generally asserts that human law must be in response to compelling reasons for action. There are two readings of the natural-law jurisprudential stance.
The strong natural law thesis holds that if a human law fails to be in response to compelling reasons, then it is not properly a "law" at all. This is captured, imperfectly, in the famous maxim: lex iniusta non est lex (an unjust law is no law at all).
The weak natural law thesis holds that if a human law fails to be in response to compelling reasons, then it can still be called a "law", but it must be recognised as a defective law.
Aristotle
Aristotle is often said to be the father of natural law. Like his philosophical forefathers Socrates and Plato, Aristotle posited the existence of natural justice or natural right (dikaion physikon, δικαίον φυσικόν, Latin ius naturale). His association with natural law is largely due to how he was interpreted by Thomas Aquinas. This was based on Aquinas' conflation of natural law and natural right, the latter of which Aristotle posits in Book V of the Nicomachean Ethics (Book IV of the Eudemian Ethics). Aquinas's influence was such as to affect a number of early translations of these passages, though more recent translations render them more literally.
Aristotle's theory of justice is bound up in his idea of the golden mean. Indeed, his treatment of what he calls "political justice" derives from his discussion of "the just" as a moral virtue derived as the mean between opposing vices, just like every other virtue he describes. His longest discussion of his theory of justice occurs in Nicomachean Ethics and begins by asking what sort of mean a just act is. He argues that the term "justice" actually refers to two different but related ideas: general justice and particular justice. When a person's actions toward others are completely virtuous in all matters, Aristotle calls them "just" in the sense of "general justice"; as such, this idea of justice is more or less coextensive with virtue. "Particular" or "partial justice", by contrast, is the part of "general justice" or the individual virtue that is concerned with treating others equitably.
Aristotle moves from this unqualified discussion of justice to a qualified view of political justice, by which he means something close to the subject of modern jurisprudence. Of political justice, Aristotle argues that it is partly derived from nature and partly a matter of convention. This can be taken as a statement that is similar to the views of modern natural law theorists. But it must also be remembered that Aristotle is describing a view of morality, not a system of law, and therefore his remarks as to nature are about the grounding of the morality enacted as law, not the laws themselves.
The best evidence of Aristotle's having thought there was a natural law comes from the Rhetoric, where Aristotle notes that, aside from the "particular" laws that each people has set up for itself, there is a "common" law that is according to nature. The context of this remark, however, suggests only that Aristotle thought that it could be rhetorically advantageous to appeal to such a law, especially when the "particular" law of one's own city was adverse to the case being made, not that there actually was such a law. Aristotle, moreover, considered certain candidates for a universally valid, natural law to be wrong. Aristotle's theoretical paternity of the natural law tradition is consequently disputed.
Thomas Aquinas
Thomas Aquinas is the foremost classical proponent of natural theology, and the father of the Thomistic school of philosophy, for a long time the primary philosophical approach of the Roman Catholic Church. The work for which he is best known is the Summa Theologiae. One of the thirty-five Doctors of the Church, he is considered by many Catholics to be the Church's greatest theologian. Consequently, many institutions of learning have been named after him.
Aquinas distinguished four kinds of law: eternal, natural, divine, and human:
Eternal law refers to divine reason, known only to God. It is God's plan for the universe. Man needs this plan, for without it he would totally lack direction.
Natural law is the "participation" in the eternal law by rational human creatures, and is discovered by reason
Divine law is revealed in the scriptures and is God's positive law for mankind
Human law is supported by reason and enacted for the common good.
Natural law is based on "first principles":
... this is the first precept of the law, that good is to be done and promoted, and evil is to be avoided. All other precepts of the natural law are based on this ...
The desires to live and to procreate are counted by Aquinas among those basic (natural) human values on which all other human values are based.
School of Salamanca
Francisco de Vitoria was perhaps the first to develop a theory of ius gentium (law of nations), and thus is an important figure in the transition to modernity. He extrapolated his ideas of legitimate sovereign power to international affairs, concluding that such affairs ought to be determined by forms respecting of the rights of all and that the common good of the world should take precedence before the good of any single state. This meant that relations between states ought to pass from being justified by force to being justified by law and justice. Some scholars have upset the standard account of the origins of International law, which emphasises the seminal text De iure belli ac pacis by Hugo Grotius, and argued for Vitoria and, later, Suárez's importance as forerunners and, potentially, founders of the field. Others, such as Koskenniemi, have argued that none of these humanist and scholastic thinkers can be understood to have founded international law in the modern sense, instead placing its origins in the post-1870 period.
Francisco Suárez, regarded as among the greatest scholastics after Aquinas, subdivided the concept of ius gentium. Working with already well-formed categories, he carefully distinguished ius inter gentes from ius intra gentes. Ius inter gentes (which corresponds to modern international law) was something common to the majority of countries, although, being positive law, not natural law, it was not necessarily universal. On the other hand, ius intra gentes, or civil law, is specific to each nation.
Lon Fuller
Writing after World War II, Lon L. Fuller defended a secular and procedural form of natural law. He emphasised that the (natural) law must meet certain formal requirements (such as being impartial and publicly knowable). To the extent that an institutional system of social control falls short of these requirements, Fuller argued, we are less inclined to recognise it as a system of law, or to give it our respect. Thus, the law must have a morality that goes beyond the societal rules under which laws are made.
John Finnis
Sophisticated positivist and natural law theories sometimes resemble each other and may have certain points in common. Identifying a particular theorist as a positivist or a natural law theorist sometimes involves matters of emphasis and degree, and the particular influences on the theorist's work. The natural law theorists of the distant past, such as Aquinas and John Locke, made no distinction between analytic and normative jurisprudence, while modern natural law theorists, such as John Finnis, who claim to be positivists, still argue that law is moral by nature. In his book Natural Law and Natural Rights (1980, 2011), John Finnis provides a restatement of natural law doctrine.
Analytic jurisprudence
Unlike experimental jurisprudence, which investigates the content of legal concepts using the methods of social science, analytical jurisprudence seeks to provide a general account of the nature of law through the tools of conceptual analysis. The account is general in the sense of targeting universal features of law that hold at all times and places.
Analytic, or clarificatory, jurisprudence takes a neutral point of view and uses descriptive language when referring to various aspects of legal systems. This was a philosophical development that rejected natural law's fusing of what law is and what it ought to be. David Hume argued, in A Treatise of Human Nature, that people invariably slip from describing what the world is to asserting that we therefore ought to follow a particular course of action. But as a matter of pure logic, one cannot conclude that we ought to do something merely because something is the case. So analysing and clarifying the way the world is must be treated as a strictly separate question from normative and evaluative questions of what ought to be done.
The most important questions of analytic jurisprudence are: "What are laws?"; "What is the law?"; "What is the relationship between law and power/sociology?"; and "What is the relationship between law and morality?" Legal positivism is the dominant theory, although there is a growing number of critics who offer their own interpretations.
Historical school
Historical jurisprudence came to prominence during the debate on the proposed codification of German law. In his book On the Vocation of Our Age for Legislation and Jurisprudence, Friedrich Carl von Savigny argued that Germany did not have a legal language that would support codification because the traditions, customs, and beliefs of the German people did not include a belief in a code. Historicists believe that law originates with society.
Sociological jurisprudence
An effort to systematically inform jurisprudence from sociological insights developed from the beginning of the twentieth century, as sociology began to establish itself as a distinct social science, especially in the United States and in continental Europe. In Germany, Austria and France, the work of the "free law" theorists (e.g. Ernst Fuchs, Hermann Kantorowicz, Eugen Ehrlich and François Gény) encouraged the use of sociological insights in the development of legal and juristic theory. The most internationally influential advocacy for a "sociological jurisprudence" occurred in the United States, where, throughout the first half of the twentieth century, Roscoe Pound, for many years the Dean of Harvard Law School, used this term to characterise his legal philosophy. In the United States, many later writers followed Pound's lead or developed distinctive approaches to sociological jurisprudence. In Australia, Julius Stone strongly defended and developed Pound's ideas.
In the 1930s, a significant split between the sociological jurists and the American legal realists emerged. In the second half of the twentieth century, sociological jurisprudence as a distinct movement declined as jurisprudence came more strongly under the influence of analytical legal philosophy; but with increasing criticism of dominant orientations of legal philosophy in English-speaking countries in the present century, it has attracted renewed interest. Increasingly, its contemporary focus is on providing theoretical resources for jurists to aid their understanding of new types of regulation (for example, the diverse kinds of developing transnational law) and the increasingly important interrelations of law and culture, especially in multicultural Western societies.
Legal positivism
Legal positivism is the view that the content of law is dependent on social facts and that a legal system's existence is not constrained by morality. Within legal positivism, theorists agree that law's content is a product of social facts, but theorists disagree whether law's validity can be explained by incorporating moral values. Legal positivists who argue against the incorporation of moral values to explain law's validity are labeled exclusive (or hard) legal positivists. Joseph Raz's legal positivism is an example of exclusive legal positivism. Legal positivists who argue that law's validity can be explained by incorporating moral values are labeled inclusive (or soft) legal positivists. The legal positivist theories of H. L. A. Hart and Jules Coleman are examples of inclusive legal positivism.
Legal positivism has traditionally been associated with three doctrines: the pedigree thesis, the separability thesis, and the discretion thesis. The pedigree thesis says that the right way to determine whether a directive is law is to look at the directive's source. The thesis claims that it is the fact that the directive was issued by the proper official within a legitimate government, for example, that determines the directive's legal validity—not the directive's moral or practical merits. The separability thesis states that law is conceptually distinct from morality. While law might contain morality, the separability thesis states that "it is in no sense a necessary truth that laws reproduce or satisfy certain demands of morality, though in fact they have often done so." Legal positivists disagree about the extent of the separability thesis. Exclusive legal positivists, notably Joseph Raz, go further than the standard thesis and deny that it is possible for morality to be a part of law at all. The discretion thesis states that judges create new law when they are given discretion to adjudicate cases where existing law underdetermines the result.
Thomas Hobbes
Hobbes was a social contractarian and believed that the law had the people's tacit consent. He believed that society was formed from a state of nature to protect people from the state of war that would exist otherwise. In Leviathan, Hobbes argues that without an ordered society life would be "solitary, poor, nasty, brutish and short." It is commonly said that Hobbes's views on human nature were influenced by his times. The English Civil War and the Cromwellian dictatorship had taken place; and, in reacting to that, Hobbes felt that absolute authority vested in a monarch, whose subjects obeyed the law, was the basis of a civilized society.
Bentham and Austin
John Austin and Jeremy Bentham were early legal positivists who sought to provide a descriptive account of law that describes the law as it is. Austin explained the descriptive focus for legal positivism by saying, "The existence of law is one thing; its merit and demerit another. Whether it be or be not is one enquiry; whether it be or be not conformable to an assumed standard, is a different enquiry." For Austin and Bentham, a society is governed by a sovereign who has de facto authority. Through the sovereign's authority come laws, which for Austin and Bentham are commands backed by sanctions for non-compliance. Along with Hume, Bentham was an early and staunch supporter of the utilitarian concept, and was an avid prison reformer, advocate for democracy, and firm atheist. Bentham's views about law and jurisprudence were popularized by his student John Austin. Austin was the first chair of law at the new University of London, from 1829. Austin's utilitarian answer to "what is law?" was that law is "commands, backed by threat of sanctions, from a sovereign, to whom people have a habit of obedience". H. L. A. Hart criticized Austin and Bentham's early legal positivism because the command theory failed to account for individuals' compliance with the law.
Hans Kelsen
Hans Kelsen is considered one of the preeminent jurists of the 20th century and has been highly influential in Europe and Latin America, although less so in common law countries. His Pure Theory of Law describes law as "binding norms", while at the same time refusing to evaluate those norms. That is, "legal science" is to be separated from "legal politics". Central to the Pure Theory of Law is the notion of a Grundnorm ("basic norm")—a hypothetical norm, presupposed by the jurist, from which all "lower" norms in the hierarchy of a legal system, beginning with constitutional law, are understood to derive their authority or the extent to which they are binding. Kelsen contends that the extent to which legal norms are binding, their specifically "legal" character, can be understood without tracing it ultimately to some suprahuman source such as God, personified Nature or—of great importance in his time—a personified State or Nation.
H. L. A. Hart
In the English-speaking world, the most influential legal positivist of the twentieth century was H. L. A. Hart, professor of jurisprudence at Oxford University. Hart argued that the law should be understood as a system of social rules. In The Concept of Law, Hart rejected Kelsen's views that sanctions were essential to law and that a normative social phenomenon, like law, cannot be grounded in non-normative social facts.
Hart claimed that law is the union of primary rules and secondary rules. Primary rules require individuals to act or not act in certain ways and create duties for the governed to obey.
Secondary rules are rules that confer authority to create new primary rules or modify existing ones. Secondary rules are divided into rules of adjudication (how to resolve legal disputes), rules of change (how laws are amended), and the rule of recognition (how laws are identified as valid). The validity of a legal system comes from the "rule of recognition", which is a customary practice of officials (especially barristers and judges) who identify certain acts and decisions as sources of law. In 1981, Neil MacCormick wrote a pivotal book on Hart (second edition published in 2008), which further refined Hart's theory and offered some important criticisms that led MacCormick to develop his own theory (the best example of which is his Institutions of Law, 2007). Other important critiques include those of Ronald Dworkin, John Finnis, and Joseph Raz.
In recent years, debates on the nature of law have become increasingly fine-grained. One important debate is within legal positivism. One school is sometimes called "exclusive legal positivism" and is associated with the view that the legal validity of a norm can never depend on its moral correctness. A second school is labeled "inclusive legal positivism", a major proponent of which is Wil Waluchow, and is associated with the view that moral considerations may, but do not necessarily, determine the legal validity of a norm.
Joseph Raz
Joseph Raz's theory of legal positivism argues against the incorporation of moral values to explain law's validity. In Raz's 1979 book The Authority of Law, he criticised what he called the "weak social thesis" to explain law. He formulates the weak social thesis as "(a) Sometimes the identification of some laws turn on moral arguments, but also with, (b) In all legal systems the identification of some law turns on moral argument." Raz argues that law's authority is identifiable purely through social sources, without reference to moral reasoning. This view he calls "the sources thesis". Raz suggests that any categorisation of rules beyond their role as authority is better left to sociology than to jurisprudence. Some philosophers used to contend that positivism was the theory that held that there was "no necessary connection" between law and morality; but influential contemporary positivists—including Joseph Raz, John Gardner, and Leslie Green—reject that view. Raz claims it is a necessary truth that there are vices that a legal system cannot possibly have (for example, it cannot commit rape or murder).
Legal realism
Legal realism is the view that a theory of law should be descriptive and account for the reasons why judges decide cases as they do. Legal realism had some affinities with the sociology of law and sociological jurisprudence. The essential tenet of legal realism is that all law is made by humans and thus should account for reasons besides legal rules that led to a legal decision.
There are two separate schools of legal realism: American legal realism and Scandinavian legal realism. American legal realism grew out of the writings of Oliver Wendell Holmes. At the start of Holmes's The Common Law, he claims that "[t]he life of the law has not been logic: it has been experience". This view was a reaction to the legal formalism that was popular at the time due to Christopher Columbus Langdell. Holmes's writings on jurisprudence also laid the foundations for the predictive theory of law. In his article "The Path of the Law", Holmes argues that "the object of [legal] study...is prediction, the prediction of the incidence of the public force through the instrumentality of the courts."
For the American legal realists of the early twentieth century, legal realism sought to describe the way judges decide cases. For legal realists such as Jerome Frank, judges start with the facts before them and then move to legal principles. Before legal realism, theories of jurisprudence turned this method around where judges were thought to begin with legal principles and then look to facts.
It has become common today to identify Justice Oliver Wendell Holmes Jr. as the main precursor of American Legal Realism (other influences include Roscoe Pound, Karl Llewellyn, and Justice Benjamin Cardozo). Karl Llewellyn, another founder of the U.S. legal realism movement, similarly believed that the law is little more than putty in the hands of judges who are able to shape the outcome of cases based on their personal values or policy choices.
The Scandinavian school of legal realism argued that law can be explained through the empirical methods used by social scientists. Prominent Scandinavian legal realists are Alf Ross, Axel Hägerström, and Karl Olivecrona. Scandinavian legal realists also took a naturalist approach to law.
Despite its decline in popularity, legal realism continues to influence a wide spectrum of jurisprudential schools today, including critical legal studies, feminist legal theory, critical race theory, sociology of law, and law and economics.
Critical legal studies
Critical legal studies is a theory of jurisprudence that has developed since the 1970s. The theory can generally be traced to American legal realism and is considered "the first movement in legal theory and legal scholarship in the United States to have espoused a committed Left political stance and perspective". It holds that the law is largely contradictory, and can be best analyzed as an expression of the policy goals of a dominant social group.
Constitutionalism
Legal interpretivism
American legal philosopher Ronald Dworkin's legal theory attacks legal positivists who separate law's content from morality. In his book Law's Empire, Dworkin argued that law is an "interpretive" concept that requires judges to find the best-fitting and most just solution to a legal dispute, given their constitutional traditions. According to him, law is not entirely based on social facts, but includes the best moral justification for the institutional facts and practices that form a society's legal tradition. It follows from Dworkin's view that one cannot know whether a society has a legal system in force, or what any of its laws are, until one knows some truths about the moral justifications of the social and political practices of that society. It is consistent with Dworkin's view—in contrast with the views of legal positivists or legal realists—that no one in a society may know what its laws are, because no one may know the best moral justification for its practices.
Interpretation, according to Dworkin's "integrity theory of law", has two dimensions. To count as an interpretation, the reading of a text must meet the criterion of "fit". Of those interpretations that fit, however, Dworkin maintains that the correct interpretation is the one that portrays the practices of the community in their best light, or makes them "the best that they can be". But many writers have doubted whether there is a single best moral justification for the complex practices of any given community, and others have doubted whether, even if there is, it should be counted as part of the law of that community.
Therapeutic jurisprudence
Consequences of the operation of legal rules or legal procedures—or of the behavior of legal actors (such as lawyers and judges)—may be either beneficial (therapeutic) or harmful (anti-therapeutic) to people. Therapeutic jurisprudence ("TJ") studies law as a social force (or agent) and uses social science methods and data to study the extent to which a legal rule or practice affects the psychological well-being of the people it impacts.
Normative jurisprudence
In addition to the question, "What is law?", legal philosophy is also concerned with normative, or "evaluative" theories of law. What is the goal or purpose of law? What moral or political theories provide a foundation for the law? What is the proper function of law? What sorts of acts should be subject to punishment, and what sorts of punishment should be permitted? What is justice? What rights do we have? Is there a duty to obey the law? What value has the rule of law? Some of the different schools and leading thinkers are discussed below.
Virtue jurisprudence
Aretaic moral theories, such as contemporary virtue ethics, emphasize the role of character in morality. Virtue jurisprudence is the view that the laws should promote the development of virtuous character in citizens. Historically, this approach has been mainly associated with Aristotle or Thomas Aquinas. Contemporary virtue jurisprudence is inspired by philosophical work on virtue ethics.
Deontology
Deontology is the "theory of duty or moral obligation". The philosopher Immanuel Kant formulated one influential deontological theory of law. He argued that any rule we follow must be able to be universally applied, i.e. we must be willing for everyone to follow that rule. A contemporary deontological approach can be found in the work of the legal philosopher Ronald Dworkin.
Utilitarianism
Utilitarianism is the view that the laws should be crafted so as to produce the best consequences for the greatest number of people. Historically, utilitarian thinking about law has been associated with the philosopher Jeremy Bentham. John Stuart Mill was a pupil of Bentham's and was the torch bearer for utilitarian philosophy throughout the late nineteenth century. In contemporary legal theory, the utilitarian approach is frequently championed by scholars who work in the law and economics tradition.
John Rawls
John Rawls was an American philosopher; a professor of political philosophy at Harvard University; and author of A Theory of Justice (1971), Political Liberalism, Justice as Fairness: A Restatement, and The Law of Peoples. He is widely considered one of the most important English-language political philosophers of the 20th century. His theory of justice uses a method called "original position" to ask us which principles of justice we would choose to regulate the basic institutions of our society if we were behind a "veil of ignorance". Imagine we do not know who we are—our race, sex, wealth, status, class, or any distinguishing feature—so that we would not be biased in our own favour. Rawls argued from this "original position" that we would choose exactly the same political liberties for everyone, like freedom of speech, the right to vote, and so on. Also, we would choose a system in which inequality is permitted only because it produces incentives enough for the economic well-being of all of society, especially the poorest. This is Rawls's famous "difference principle". Justice is fairness, in the sense that the fairness of the original position of choice guarantees the fairness of the principles chosen in that position.
There are many other normative approaches to the philosophy of law, including constitutionalism, critical legal studies and libertarian theories of law.
Experimental jurisprudence
Experimental jurisprudence seeks to investigate the content of legal concepts using the methods of social science, unlike the philosophical methods of traditional jurisprudence.
List of philosophers of law
Plato
Aristotle
Thomas Aquinas
Francis Bacon
John Locke
Francisco Suarez
Francisco de Vitoria
Hugo Grotius
John Austin
Frederic Bastiat
Evgeny Pashukanis
Jeremy Bentham
Emilio Betti
Norberto Bobbio
António Castanheira Neves
Jules Coleman
Ronald Dworkin
Francesco D'Agostino
Francisco Elías de Tejada y Spínola
Carlos Cossio
Miguel Reale
John Finnis
Lon L. Fuller
Leslie Green
Robert P. George
Germain Grisez
H. L. A. Hart
Georg Wilhelm Friedrich Hegel
Oliver Wendell Holmes Jr.
Alf Ross
Tony Honoré
Rudolf Jhering
Johann Gottlieb Fichte
Hans Kelsen
Joel Feinberg
David Lyons
Robert Alexy
Reinhold Zippelius
Neil MacCormick
William E. May
Martha Nussbaum
Gustav Radbruch
Joseph Raz
Jeremy Waldron
Friedrich Carl von Savigny
Robert Summers
Roberto Unger
Catharine MacKinnon
John Rawls
Pierre Schlag
Robin West
Carl Schmitt
Jürgen Habermas
Carlos Santiago Nino
Geoffrey Warnock
Scott J. Shapiro
Shen Buhai
Shang Yang
Han Fei
Zhu Xi
Roscoe Pound
See also
Analytical jurisprudence
Artificial intelligence and law
Brocard (law)
Cautelary jurisprudence
Comparative law
Constitution
Constitutional law
Constitutional economics
Critical race theory
Critical rationalism
Defeasible reasoning
Divine law
Feminist jurisprudence
Feminist legal theory
Fiqh
International legal theory
Judicial activism
Justice
Law and economics
Law and literature
Legal formalism
Legal history
Legalism
Legal pluralism
Legal positivism
Legal realism
Legal science
Libertarian theories of law
Living Constitution
Models of judicial decision making
Originalism
Natural law
New legal realism
Political jurisprudence
Postmodernist jurisprudence
Publius Juventius Celsus
Philosophy of law
Rule of law
Rule according to higher law
Sociological jurisprudence
Sociology of law
Strict interpretation
Virtue jurisprudence
References
Citations
Notes
Bibliography
Further reading
Hartzler, H. Richard (1976). Justice, Legal Systems, and Social Structure. Port Washington, NY: Kennikat Press.
Hutchinson, Allan C., ed. (1989). Critical Legal Studies. Totowa, NJ: Rowman & Littlefield.
Kempin Jr., Frederick G. (1963). Legal History: Law and Social Change. Englewood Cliffs, NJ: Prentice-Hall.
Llewellyn, Karl N. (1986). Karl N. Llewellyn on Legal Realism. Birmingham, AL: Legal Classics Library. (Contains penetrating classic "The Bramble Bush" on nature of law).
Murphy, Cornelius F. (1977). Introduction to Law, Legal Process, and Procedure. St. Paul, MN: West Publishing.
Rawls, John (1999). A Theory of Justice, revised ed. Cambridge: Harvard University Press. (Philosophical treatment of justice).
Wacks, Raymond (2009). Understanding Jurisprudence: An Introduction to Legal Theory. Oxford University Press.
Washington, Ellis (2002). The Inseparability of Law and Morality: Essays on Law, Race, Politics and Religion. University Press of America.
Washington, Ellis (2013). The Progressive Revolution, 2007–08 Writings, Vol. 1; 2009 Writings, Vol. 2: Liberal Fascism through the Ages. University Press of America.
Zinn, Howard (1990). Declarations of Independence: Cross-Examining American Ideology. New York: Harper Collins Publishers.
Zippelius, Reinhold (2011). Rechtsphilosophie, 6th ed. Munich: C.H. Beck.
Zippelius, Reinhold (2012). Das Wesen des Rechts (The Concept of Law), an introduction to Legal Theory, 6th ed., Stuttgart: W. Kohlhammer.
Zippelius, Reinhold (2008). Introduction to German Legal Methods (Juristische Methodenlehre), translated from the tenth German Edition by Kirk W. Junker, P. Matthew Roy. Durham: Carolina Academic Press.
Heinze, Eric (2013). The Concept of Injustice. Routledge.
Pillai, P. S. A. (2016). Jurisprudence and Legal Theory, 3rd ed. (reprinted 2016). Eastern Book Company.
External links
LII Law about ... Jurisprudence.
The Roman Law Library, incl. Responsa prudentium by Professor Yves Lassard and Alexandr Koptev.
Evgeny Pashukanis - General Theory of Law and Marxism.
Internet Encyclopedia: Philosophy of Law.
The Opticon: Online Repository of Materials covering Spectrum of U.S. Jurisprudence.
Bibliography on the Philosophy of Law. Peace Palace Library
Platonism
Platonism is the philosophy of Plato and philosophical systems closely derived from it, though contemporary Platonists do not necessarily accept all doctrines of Plato. Platonism has had a profound effect on Western thought. At the most fundamental level, Platonism affirms the existence of abstract objects, which are asserted to exist in a third realm distinct from both the sensible external world and from the internal world of consciousness, and is the opposite of nominalism. This can apply to properties, types, propositions, meanings, numbers, sets, truth values, and so on (see abstract object theory). Philosophers who affirm the existence of abstract objects are sometimes called Platonists; those who deny their existence are sometimes called nominalists. The terms "Platonism" and "nominalism" also have established senses in the history of philosophy. They denote positions that have little to do with the modern notion of an abstract object.
In a narrower sense, the term might indicate the doctrine of Platonic realism, a form of mysticism. The central concept of Platonism, a distinction essential to the Theory of Forms, is the distinction between the reality which is perceptible but unintelligible, associated with the flux of Heraclitus and studied by the likes of science, and the reality which is imperceptible but intelligible, associated with the unchanging being of Parmenides and studied by the likes of mathematics. Geometry was the main motivation of Plato, and this also shows the influence of Pythagoras. The Forms are typically described in dialogues such as the Phaedo, Symposium and Republic as perfect archetypes of which objects in the everyday world are imperfect copies. Aristotle's Third Man Argument is its most famous criticism in antiquity.
In the Republic the highest form is identified as the Form of the Good, the source of all other Forms, which could be known by reason. In the Sophist, a later work, the Forms Being, Sameness, and Difference are listed among the primordial "Great Kinds". Plato established the Academy, and in the 3rd century BC, Arcesilaus adopted academic skepticism, which became a central tenet of the school until 90 BC, when Antiochus added Stoic elements, rejected skepticism, and began a period known as Middle Platonism.
In the 3rd century AD, Plotinus added additional mystical elements, establishing Neoplatonism, in which the summit of existence was the One or the Good, the source of all things; in virtue and meditation the soul had the power to elevate itself to attain union with the One. Many Platonic notions were adopted by the Christian church, which understood Plato's Forms as God's thoughts (a position also known as divine conceptualism), while Neoplatonism became a major influence on Christian mysticism in the West through Saint Augustine, Doctor of the Catholic Church, who was heavily influenced by Plotinus' Enneads, which in turn were foundations for the whole of Western Christian thought (Pelikan, Jaroslav. The Christian Tradition: A History of the Development of Doctrine, Vol. 1: The Emergence of the Catholic Tradition 100–600, and Vol. 3: The Growth of Mediaeval Theology 600–1300, section "The Augustinian Synthesis"). Many ideas of Plato were incorporated by the Roman Catholic Church.
Philosophy
The primary concept is the Theory of Forms. The only true being is founded upon the forms, the eternal, unchangeable, perfect types, of which particular objects of moral and responsible sense are imperfect copies. The multitude of objects of sense, being involved in perpetual change, are thereby deprived of all genuine existence. The number of the forms is defined by the number of universal concepts which can be derived from the particular objects of sense. The following excerpt may be representative of Plato's middle period metaphysics and epistemology:
[Socrates:] "Since the beautiful is opposite of the ugly, they are two."
[Glaucon:] "Of course."
"And since they are two, each is one?"
"I grant that also."
"And the same account is true of the just and unjust, the good and the bad, and all the forms. Each of them is itself one, but because they manifest themselves everywhere in association with actions, bodies, and one another, each of them appears to be many."
"That's right."
"So, I draw this distinction: On one side are those you just now called lovers of sights, lovers of crafts, and practical people; on the other side are those we are now arguing about and whom one would alone call philosophers."
"How do you mean?"
"The lovers of sights and sounds like beautiful sounds, colors, shapes, and everything fashioned out of them, but their thought is unable to see and embrace the nature of the beautiful itself."
"That's for sure."
"In fact, there are very few people who would be able to reach the beautiful itself and see it by itself. Isn't that so?"
"Certainly."
"What about someone who believes in beautiful things, but doesn't believe in the beautiful itself and isn't able to follow anyone who could lead him to the knowledge of it? Don't you think he is living in a dream rather than a wakened state? Isn't this dreaming: whether asleep or awake, to think that a likeness is not a likeness but rather the thing itself that it is like?"
"I certainly think that someone who does that is dreaming."
"But someone who, to take the opposite case, believes in the beautiful itself, can see both it and the things that participate in it and doesn't believe that the participants are it or that it itself is the participants—is he living in a dream or is he awake?
"He's very much awake."
(Republic Bk. V, 475e-476d, translation G. M. A. Grube)
Book VI of the Republic identifies the highest form as the Form of the Good, the cause of all other Ideas, and that on which the being and knowing of all other Forms is contingent. Conceptions derived from the impressions of sense can never give us the knowledge of true being, i.e., of the forms. It can only be obtained by the soul's activity within itself, apart from the troubles and disturbances of sense; that is to say, by the exercise of reason. Dialectic, as the instrument in this process, leading us to knowledge of the forms, and finally to the highest form of the Good, is the first of sciences. Later Neoplatonism, beginning with Plotinus, identified the Good of the Republic with the transcendent, absolute One of the first hypothesis of the Parmenides (137c-142a).
Platonist ethics is based on the Form of the Good. Virtue is knowledge, the recognition of the supreme form of the good. And, since in this cognition, the three parts of the soul, which are reason, spirit, and appetite, all have their share, we get the three virtues, Wisdom, Courage, and Moderation. The bond which unites the other virtues is the virtue of Justice, by which each part of the soul is confined to the performance of its proper function.
Platonism had a profound effect on Western thought. In many interpretations of the Timaeus Platonism, like Aristotelianism, posits an eternal universe, as opposed to the nearby Judaic tradition that the universe had been created in historical time, with its continuous history recorded. Unlike Aristotelianism, Platonism describes idea as prior to matter and identifies the person with the soul. Many Platonic notions secured a permanent place in Christianity.
At the heart of Plato's philosophy is the theory of the soul. Francis Cornford described the twin pillars of Platonism as being the theory of the Forms, on the one hand, and, on the other hand, the doctrine of the immortality of the soul. Indeed, Plato was the first person in the history of philosophy to believe that the soul was both the source of life and the mind. In Plato's dialogues, the soul plays many disparate roles. Among other things, Plato believes that the soul is what gives life to the body (which was articulated most of all in the Laws and Phaedrus) in terms of self-motion: to be alive is to be capable of moving oneself; the soul is a self-mover. He also thinks that the soul is the bearer of moral properties (i.e., when I am virtuous, it is my soul that is virtuous as opposed to, say, my body). The soul is also the mind: it is that which thinks in us.
This casual oscillation between different roles of the soul is seen in many dialogues. First of all, in the Republic:
Is there any function of the soul that you could not accomplish with anything else, such as taking care of something (epimeleisthai), ruling, and deliberating, and other such things? Could we correctly assign these things to anything besides the soul, and say that they are characteristic (idia) of it?
No, to nothing else.
What about living? Will we deny that this is a function of the soul?
That absolutely is.
The Phaedo most famously caused problems for scholars who were trying to make sense of this aspect of Plato's theory of the soul, such as Broadie and Dorothea Frede.
More recent scholarship has overturned this accusation, arguing that part of the novelty of Plato's theory of the soul is that it was the first to unite the different features and powers of the soul that became commonplace in later ancient and medieval philosophy. For Plato, the soul moves things by means of its thoughts, as one scholar puts it, and accordingly, the soul is both a mover (i.e., the principle of life, where life is conceived of as self-motion) and a thinker.
History
Ancient philosophy
The Academy
Platonism was originally expressed in the dialogues of Plato, in which the figure of Socrates is used to expound certain doctrines that may or may not be similar to the thought of the historical Socrates, Plato's master. Plato delivered his lectures at the Platonic Academy, a precinct containing a sacred grove outside the walls of Athens. The school continued there long after Plato's death. There were three periods: the Old, Middle, and New Academy. The chief figures in the Old Academy were Speusippus (Plato's nephew), who succeeded him as the head of the school (until 339 BC), and Xenocrates (until 313 BC). Both of them sought to fuse Pythagorean speculations on number with Plato's theory of forms.
The Skeptical Academy
Around 266 BC, Arcesilaus became head of the academy. This phase, known as the Middle Academy, strongly emphasized philosophical skepticism. It was characterized by its attacks on the Stoics and their assertion of the certainty of truth and our knowledge of it. The New Academy began with Carneades in 155 BC, the fourth head in succession from Arcesilaus. It was still largely skeptical, denying the possibility of knowing an absolute truth; both Arcesilaus and Carneades argued that they were maintaining a genuine tenet of Plato.
Middle Platonism
Around 90 BC, Antiochus of Ascalon rejected skepticism, making way for the period known as Middle Platonism, in which Platonism was fused with certain Peripatetic and many Stoic dogmas. In Middle Platonism, the Platonic Forms were not transcendent but immanent to rational minds, and the physical world was a living, ensouled being, the World-Soul. Pre-eminence in this period belongs to Plutarch. The eclectic nature of Platonism during this time is shown by its incorporation into Pythagoreanism (Numenius of Apamea) and into Jewish philosophy (Philo of Alexandria).
Neoplatonism
In the third century, Plotinus recast Plato's system, establishing Neoplatonism, in which Middle Platonism was fused with mysticism. At the summit of existence stands the One or the Good, as the source of all things. It generates from itself, as if from the reflection of its own being, reason, the nous, wherein is contained the infinite store of ideas. The world-soul, the copy of the nous, is generated by and contained in it, as the nous is in the One, and, by informing matter in itself nonexistent, constitutes bodies whose existence is contained in the world-soul. Nature therefore is a whole, endowed with life and soul. Soul, being chained to matter, longs to escape from the bondage of the body and return to its original source. In virtue and philosophical thought it has the power to elevate itself above reason into a state of ecstasy, where it can behold, or ascend to, that one good primary Being whom reason cannot know. To attain this union with the Good, or the One, is the true function of human beings.
Plotinus' disciple, Porphyry, followed by Iamblichus, developed the system in conscious opposition to Christianity—even as many influential early Christian writers took inspiration from it in their conceptions of monotheistic theology. The Platonic Academy was re-established during this period; its most renowned head was Proclus (died 485), a celebrated commentator on Plato's writings. The academy persisted until Roman emperor Justinian closed it in 529.
Medieval philosophy
Christianity and Platonism
Platonism has had some influence on Christianity through Clement of Alexandria and Origen, and the Cappadocian Fathers. St. Augustine was also heavily influenced by Platonism, which he encountered through Marius Victorinus's Latin translations of the works of Porphyry and/or Plotinus.
Platonism was considered authoritative in the Middle Ages. Platonism also influenced both Eastern and Western mysticism (Louth, Andrew, The Origins of the Christian Mystical Tradition: From Plato to Denys, Oxford: Oxford University Press, 1983). Meanwhile, Platonism influenced various philosophers. While Aristotle became more influential than Plato in the 13th century, St. Thomas Aquinas's philosophy was still in certain respects fundamentally Platonic.
Modern philosophy
Renaissance
The Renaissance also saw a renewed interest in Platonic thought, including more interest in Plato himself. In 16th-, 17th-, and 19th-century England, Plato's ideas influenced many religious thinkers, including the Cambridge Platonists. Orthodox Protestantism in continental Europe, however, distrusted natural reason and was often critical of Platonism.
An issue in the reception of Plato in early modern Europe was how to deal with the same-sex elements of his corpus. Christoplatonism is a term used to refer to a dualism attributed to Plato, which holds that spirit is good but matter is evil. This view influenced some Christian churches, though it contradicts biblical teaching and therefore receives constant criticism from many teachers in the Christian Church today. According to the Methodist Church, Christoplatonism directly "contradicts the Biblical record of God calling everything He created good."
Contemporary philosophy
Modern Platonism
Apart from historical Platonism originating from thinkers such as Plato and Plotinus, we also encounter the theory of abstract objects in the modern sense.
Platonism is the view that there exist such things as abstract objects — where an abstract object is an object that does not exist in space or time and which is therefore entirely non-physical and non-mental. Platonism in this sense is a contemporary view.
This modern Platonism has been endorsed in one way or another at one time or another by numerous philosophers who argued for anti-psychologism, such as Bernard Bolzano. Plato's works have been decisively influential for 20th-century philosophers such as Alfred North Whitehead and his process philosophy, and for the critical realism and metaphysics of Nicolai Hartmann.
Analytic
In contemporary philosophy, most Platonists trace their ideas to Gottlob Frege's influential paper "Thought", which argues for Platonism with respect to propositions, and his influential book, The Foundations of Arithmetic, which argues for Platonism with respect to numbers and is a seminal text of the logicist project. Contemporary analytic philosophers who espoused Platonism in metaphysics include Bertrand Russell, Alonzo Church, Kurt Gödel, W. V. O. Quine, David Kaplan, Saul Kripke, Edward Zalta and Peter van Inwagen. Iris Murdoch espoused Platonism in moral philosophy in her 1970 book The Sovereignty of Good.
Paul Benacerraf's epistemological challenge to contemporary Platonism has proved its most influential criticism.
Continental
In contemporary Continental philosophy, Edmund Husserl's arguments against psychologism are believed to derive from a Platonist conception of logic, influenced by Frege and his mentor Bolzano; Husserl explicitly mentioned Bolzano, G. W. Leibniz and Hermann Lotze as inspirations for his position in his Logical Investigations (1900–01). Other prominent contemporary Continental philosophers interested in Platonism in a general sense include Leo Strauss, Simone Weil, and Alain Badiou.
Influence on religions
Platonism has influenced not only the tenets of Christianity and Islam that are today classified as 'orthodox' teachings, but also the gnostic or esoteric 'heterodox' traditions of these religions that circulated in the ancient world, such as Manichaeism (once a major world religion), Mandaeism, and Hermeticism. Through European Renaissance scholarship on Hermeticism and on Platonic philosophy directly (among other esoteric and philosophical scholarship of the time, such as Jewish magic and mysticism and Islamic alchemy), the magic and alchemy of the period represent a culmination of several permutations of Platonic philosophy.
See also
Innatism
List of ancient Platonists
Plato's unwritten doctrines, debates over Plato's esotericism
Neoplatonism and Christianity
Alchemy
Hermeticism
Marsilio Ficino
Giovanni Pico della Mirandola
World Soul
People
Harold F. Cherniss, scholar of Plato's relation to Aristotle
References
Further reading
Ackermann, C. The Christian Element in Plato and the Platonic philosophy. Translated by Asbury Samuel Ralph. Edinburgh: T. & T. Clark, 1861.
Cassirer, Ernst. The Platonic Renaissance in England. Translated by James P. Pettegrove. Edinburgh: Nelson, 1953.
Campbell, Douglas. 2021. "Self‐Motion and Cognition: Plato's Theory of the Soul." Southern Journal of Philosophy 59 (4): 523–544.
Dorter, Kenneth. 1982. Plato's Phaedo: An Interpretation. Toronto: University of Toronto Press.
Crombie, Ian. 1962. An Examination of Plato's Doctrines, vol. 1. London: Routledge.
Frede, Dorothea. 1978. "The Final Proof of the Immortality of the Soul in Plato's Phaedo 102a–107a". Phronesis, 23.1: 27–41.
Kristeller, Paul Oskar, "Renaissance Platonism." In Renaissance Thought: the Classic, Scholastic, and Humanistic Strains. New York: Harper, 1961.
Walker, Daniel Pickering. The Ancient Theology: Studies in Christian Platonism from the Fifteenth to the Eighteenth Century. London: Duckworth, 1972.
External links
Christian Platonism and Christian Neoplatonism
Islamic Platonists and Neoplatonists
Platonism
Philosophical schools and traditions
Classical theism
Semantics

Semantics is the study of linguistic meaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction between sense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts with syntax, which studies the rules that dictate how to create grammatically correct sentences, and pragmatics, which investigates how people use language in communication.
Lexical semantics is the branch of semantics that studies word meaning. It examines whether words have one or several meanings and in what lexical relations they stand to one another. Phrasal semantics studies the meaning of sentences by exploring the phenomenon of compositionality or how new meanings can be created by arranging words. Formal semantics relies on logic and mathematics to provide precise frameworks of the relation between language and meaning. Cognitive semantics examines meaning from a psychological perspective and assumes a close relation between language ability and the conceptual structures used to understand the world. Other branches of semantics include conceptual semantics, computational semantics, and cultural semantics.
Theories of meaning are general explanations of the nature of meaning and how expressions are endowed with it. According to referential theories, the meaning of an expression is the part of reality to which it points. Ideational theories identify meaning with mental states like the ideas that an expression evokes in the minds of language users. According to causal theories, meaning is determined by causes and effects, which behaviorist semantics analyzes in terms of stimulus and response. Further theories of meaning include truth-conditional semantics, verificationist theories, the use theory, and inferentialist semantics.
The study of semantic phenomena began during antiquity but was not recognized as an independent field of inquiry until the 19th century. Semantics is relevant to the fields of formal logic, computer science, and psychology.
Definition and related fields
Semantics is the study of meaning in languages. It is a systematic inquiry that examines what linguistic meaning is and how it arises. It investigates how expressions are built up from different layers of constituents, like morphemes, words, clauses, sentences, and texts, and how the meanings of the constituents affect one another. Semantics can focus on a specific language, like English, but in its widest sense, it investigates meaning structures relevant to all languages. As a descriptive discipline, it aims to determine how meaning works without prescribing what meaning people should associate with particular expressions. Some of its key questions are "How do the meanings of words combine to create the meanings of sentences?", "How do meanings relate to the minds of language users, and to the things words refer to?", and "What is the connection between what a word means, and the contexts in which it is used?". The main disciplines engaged in semantics are linguistics, semiotics, and philosophy. Besides its meaning as a field of inquiry, semantics can also refer to theories within this field, like truth-conditional semantics, and to the meaning of particular expressions, like the semantics of the word fairy.
As a field of inquiry, semantics has both an internal and an external side. The internal side is interested in the connection between words and the mental phenomena they evoke, like ideas and conceptual representations. The external side examines how words refer to objects in the world and under what conditions a sentence is true.
Many related disciplines investigate language and meaning. Semantics contrasts with other subfields of linguistics focused on distinct aspects of language. Phonology studies the different types of sounds used in languages and how sounds are connected to form words while syntax examines the rules that dictate how to arrange words to create sentences. These divisions are reflected in the fact that it is possible to master some aspects of a language while lacking others, like when a person knows how to pronounce a word without knowing its meaning. As a subfield of semiotics, semantics has a more narrow focus on meaning in language while semiotics studies both linguistic and non-linguistic signs. Semiotics investigates additional topics like the meaning of non-verbal communication, conventional symbols, and natural signs independent of human interaction. Examples include nodding to signal agreement, stripes on a uniform signifying rank, and the presence of vultures indicating a nearby animal carcass.
Semantics further contrasts with pragmatics, which is interested in how people use language in communication. An expression like "That's what I'm talking about" can mean many things depending on who says it and in what situation. Semantics is interested in the possible meanings of expressions: what they can and cannot mean in general. In this regard, it is sometimes defined as the study of context-independent meaning. Pragmatics examines which of these possible meanings is relevant in a particular case. In contrast to semantics, it is interested in actual performance rather than in the general linguistic competence underlying this performance. This includes the topic of additional meaning that can be inferred even though it is not literally expressed, like what it means if a speaker remains silent on a certain topic. A closely related distinction by the semiotician Charles W. Morris holds that semantics studies the relation between words and the world, pragmatics examines the relation between words and users, and syntax focuses on the relation between different words.
Semantics is related to etymology, which studies how words and their meanings changed in the course of history. Another connected field is hermeneutics, which is the art or science of interpretation and is concerned with the right methodology of interpreting text in general and scripture in particular. Metasemantics examines the metaphysical foundations of meaning and aims to explain where it comes from or how it arises.
The word semantics originated from the Ancient Greek adjective sēmantikos, meaning 'relating to signs', which is a derivative of sēma, the noun for 'sign'. It was initially used for medical symptoms and only later acquired its wider meaning regarding any type of sign, including linguistic signs. The word semantics entered the English language from the French term sémantique, which the linguist Michel Bréal first introduced at the end of the 19th century.
Basic concepts
Meaning
Semantics studies meaning in language, which is limited to the meaning of linguistic expressions. It concerns how signs are interpreted and what information they contain. An example is the meaning of words provided in dictionary definitions by giving synonymous expressions or paraphrases, like defining the meaning of the term ram as adult male sheep. There are many forms of non-linguistic meaning that are not examined by semantics. Actions and policies can have meaning in relation to the goal they serve. Fields like religion and spirituality are interested in the meaning of life, which is about finding a purpose in life or the significance of existence in general.
Linguistic meaning can be analyzed on different levels. Word meaning is studied by lexical semantics and investigates the denotation of individual words. It is often related to concepts of entities, like how the word dog is associated with the concept of the four-legged domestic animal. Sentence meaning falls into the field of phrasal semantics and concerns the denotation of full sentences. It usually expresses a concept applying to a type of situation, as in the sentence "the dog has ruined my blue skirt". The meaning of a sentence is often referred to as a proposition. Different sentences can express the same proposition, like the English sentence "the tree is green" and the German sentence der Baum ist grün. Utterance meaning is studied by pragmatics and is about the meaning of an expression on a particular occasion. Sentence meaning and utterance meaning come apart in cases where expressions are used in a non-literal way, as is often the case with irony.
Semantics is primarily interested in the public meaning that expressions have, like the meaning found in general dictionary definitions. Speaker meaning, by contrast, is the private or subjective meaning that individuals associate with expressions. It can diverge from the literal meaning, like when a person associates the word needle with pain or drugs.
Sense and reference
Meaning is often analyzed in terms of sense and reference, also referred to as intension and extension or connotation and denotation. The referent of an expression is the object to which the expression points. The sense of an expression is the way in which it refers to that object or how the object is interpreted. For example, the expressions morning star and evening star refer to the same planet, just like the expressions 2 + 2 and 3 + 1 refer to the same number. The meanings of these expressions differ not on the level of reference but on the level of sense. Sense is sometimes understood as a mental phenomenon that helps people identify the objects to which an expression refers. Some semanticists focus primarily on sense or primarily on reference in their analysis of meaning. To grasp the full meaning of an expression, it is usually necessary to understand both to what entities in the world it refers and how it describes them.
The distinction between sense and reference can explain identity statements, which can be used to show how two expressions with a different sense have the same referent. For instance, the sentence "the morning star is the evening star" is informative and people can learn something from it. The sentence "the morning star is the morning star", by contrast, is an uninformative tautology since the expressions are identical not only on the level of reference but also on the level of sense.
Compositionality
Compositionality is a key aspect of how languages construct meaning. It is the idea that the meaning of a complex expression is a function of the meanings of its parts. It is possible to understand the meaning of the sentence "Zuzana owns a dog" by understanding what the words Zuzana, owns, a and dog mean and how they are combined. In this regard, the meaning of complex expressions like sentences is different from word meaning since it is normally not possible to deduce what a word means by looking at its letters and one needs to consult a dictionary instead.
Compositionality is often used to explain how people can formulate and understand an almost infinite number of meanings even though the amount of words and cognitive resources is finite. Many sentences that people read are sentences that they have never seen before and they are nonetheless able to understand them.
When interpreted in a strong sense, the principle of compositionality states that the meaning of a complex expression is not just affected by its parts and how they are combined but fully determined this way. It is controversial whether this claim is correct or whether additional aspects influence meaning. For example, context may affect the meaning of expressions; idioms like "kick the bucket" carry figurative or non-literal meanings that are not directly reducible to the meanings of their parts.
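The principle of compositionality can be sketched in code: word meanings become values and functions, and the meaning of the sentence is computed by combining them. This is a minimal illustration, not a linguistic theory; the individuals, the ownership facts, and the curried treatment of the verb are all assumptions made for the example.

```python
# A minimal sketch of compositionality: the denotation of the sentence
# "Zuzana owns a dog" is computed purely from the denotations of its
# words plus the way they are combined. The tiny model below
# (individuals, dogs, ownership facts) is invented for illustration.

# Denotations of proper names are individuals in the model.
zuzana = "Zuzana"
rex = "Rex"

# Facts of the model.
ownership = {("Zuzana", "Rex")}
dogs = {"Rex"}

# The transitive verb "owns" denotes a curried function: it takes the
# object first, then the subject, and returns a truth value.
def owns(obj):
    return lambda subj: (subj, obj) in ownership

# "Zuzana owns Rex" composes by function application:
print(owns(rex)(zuzana))  # True

# "Zuzana owns a dog": existential combination over the model.
print(any(owns(d)(zuzana) for d in dogs))  # True
```

A speaker who knows the word denotations and the combination rules can evaluate sentences never encountered before, which mirrors the productivity argument above.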
Truth and truth conditions
Truth is a property of statements that accurately present the world and true statements are in accord with reality. Whether a statement is true usually depends on the relation between the statement and the rest of the world. The truth conditions of a statement are the way the world needs to be for the statement to be true. For example, it belongs to the truth conditions of the sentence "it is raining outside" that raindrops are falling from the sky. The sentence is true if it is used in a situation in which the truth conditions are fulfilled, i.e., if there is actually rain outside.
Truth conditions play a central role in semantics and some theories rely exclusively on truth conditions to analyze meaning. To understand a statement usually implies that one has an idea about the conditions under which it would be true. This can happen even if one does not know whether the conditions are fulfilled.
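The idea that one can grasp truth conditions without knowing whether they are fulfilled can be pictured as knowing a function from possible states of the world to truth values, without knowing which state actually obtains. The dictionary-based world representation here is a toy assumption, not a standard formalism.

```python
# Truth conditions sketched as a function from ways the world could be
# to truth values. Understanding the sentence = knowing the function;
# knowing whether it is true = also knowing the actual world state.

def it_is_raining_outside(world):
    """Truth condition of the sentence 'it is raining outside'."""
    return world.get("raining_outside", False)

# Two ways the world could be:
world_a = {"raining_outside": True}
world_b = {"raining_outside": False}

print(it_is_raining_outside(world_a))  # True: condition fulfilled
print(it_is_raining_outside(world_b))  # False: condition not fulfilled
```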
Semiotic triangle
The semiotic triangle, also called the triangle of meaning, is a model used to explain the relation between language, language users, and the world, represented in the model as Symbol, Thought or Reference, and Referent. The symbol is a linguistic signifier, either in its spoken or written form. The central idea of the model is that there is no direct relation between a linguistic expression and what it refers to, as was assumed by earlier dyadic models. This is expressed in the diagram by the dotted line between symbol and referent.
The model holds instead that the relation between the two is mediated through a third component. For example, the term apple stands for a type of fruit but there is no direct connection between this string of letters and the corresponding physical object. The relation is only established indirectly through the mind of the language user. When they see the symbol, it evokes a mental image or a concept, which establishes the connection to the physical object. This process is only possible if the language user learned the meaning of the symbol before. The meaning of a specific symbol is governed by the conventions of a particular language. The same symbol may refer to one object in one language, to another object in a different language, and to no object in another language.
Others
Many other concepts are used to describe semantic phenomena. The semantic role of an expression is the function it fulfills in a sentence. In the sentence "the boy kicked the ball", the boy has the role of the agent who performs an action. The ball is the theme or patient of this action as something that does not act itself but is involved in or affected by the action. The same entity can be both agent and patient, like when someone cuts themselves. An entity has the semantic role of an instrument if it is used to perform the action, for instance, when cutting something with a knife then the knife is the instrument. For some sentences, no action is described but an experience takes place, like when a girl sees a bird. In this case, the girl has the role of the experiencer. Other common semantic roles are location, source, goal, beneficiary, and stimulus.
Lexical relations describe how words stand to one another. Two words are synonyms if they share the same or a very similar meaning, like car and automobile or buy and purchase. Antonyms have opposite meanings, such as the contrast between alive and dead or fast and slow. One term is a hyponym of another term if the meaning of the first term is included in the meaning of the second term. For example, ant is a hyponym of insect. A prototype is a hyponym that has characteristic features of the type it belongs to. A robin is a prototype of a bird but a penguin is not. Two words with the same pronunciation are homophones like flour and flower, while two words with the same spelling are homonyms, like a bank of a river in contrast to a bank as a financial institution. Hyponymy is closely related to meronymy, which describes the relation between part and whole. For instance, wheel is a meronym of car. An expression is ambiguous if it has more than one possible meaning. In some cases, it is possible to disambiguate them to discern the intended meaning. The term polysemy is used if the different meanings are closely related to one another, like the meanings of the word head, which can refer to the topmost part of the human body or the top-ranking person in an organization.
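Two of the lexical relations above, hyponymy and meronymy, can be modeled as relations over a small hand-built lexicon. The mini-taxonomy below is invented for illustration; real lexical databases such as WordNet organize these relations at scale.

```python
# A toy lexicon sketch of hyponymy ("ant" is a kind of "insect") and
# meronymy ("wheel" is a part of "car"). The entries are invented.

hypernym_of = {      # direct "is a kind of" links: word -> hypernym
    "robin": "bird",
    "penguin": "bird",
    "ant": "insect",
    "bird": "animal",
    "insect": "animal",
}

meronym_of = {"wheel": "car"}   # direct "is a part of" links

def is_hyponym(word, ancestor):
    """True if word's meaning falls under ancestor's, transitively."""
    while word in hypernym_of:
        word = hypernym_of[word]
        if word == ancestor:
            return True
    return False

print(is_hyponym("ant", "insect"))    # True
print(is_hyponym("robin", "animal"))  # True, via bird
print(is_hyponym("car", "animal"))    # False
```

Note that hyponymy is transitive (robin is a bird, a bird is an animal, so robin is an animal), which is why the lookup walks the chain of hypernym links rather than checking a single step.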
The meaning of words can often be subdivided into meaning components called semantic features. The word horse has the semantic feature animate but lacks the semantic feature human. It may not always be possible to fully reconstruct the meaning of a word by identifying all its semantic features.
A semantic or lexical field is a group of words that are all related to the same activity or subject. For instance, the semantic field of cooking includes words like bake, boil, spice, and pan.
The context of an expression refers to the situation or circumstances in which it is used and includes time, location, speaker, and audience. It also encompasses other passages in a text that come before and after it. Context affects the meaning of various expressions, like the deictic expression here and the anaphoric expression she.
A syntactic environment is extensional or transparent if it is always possible to exchange expressions with the same reference without affecting the truth value of the sentence. For example, the environment of the sentence "the number 8 is even" is extensional because replacing the expression the number 8 with the number of planets in the solar system does not change its truth value. For intensional or opaque contexts, this type of substitution is not always possible. For instance, the embedded clause in "Paco believes that the number 8 is even" is intensional since Paco may not know that the number of planets in the solar system is 8.
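The contrast between extensional and intensional contexts can be made concrete with a toy model: in the extensional context "... is even", swapping co-referential terms preserves the truth value, while in the belief context it need not. Paco's belief set and the reference table below are invented representations for the example.

```python
# Extensional vs. intensional contexts, sketched with a toy model.

referent = {"the number 8": 8, "the number of planets": 8}

def is_even(term):
    # Extensional: the truth value depends only on the referent.
    return referent[term] % 2 == 0

# Beliefs attach to sentences (senses), not to referents.
paco_beliefs = {"the number 8 is even"}

def paco_believes_even(term):
    # Intensional: the truth value depends on how the referent
    # is described, since Paco may not know the two terms co-refer.
    return f"{term} is even" in paco_beliefs

print(is_even("the number 8"))                      # True
print(is_even("the number of planets"))             # True: swap is safe
print(paco_believes_even("the number 8"))           # True
print(paco_believes_even("the number of planets"))  # False: swap fails
```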
Semanticists commonly distinguish the language they study, called object language, from the language they use to express their findings, called metalanguage. When a professor uses Japanese to teach their student how to interpret the language of first-order logic then the language of first-order logic is the object language and Japanese is the metalanguage. The same language may occupy the role of object language and metalanguage at the same time. This is the case in monolingual English dictionaries, in which both the entry term belonging to the object language and the definition text belonging to the metalanguage are taken from the English language.
Branches
Lexical semantics
Lexical semantics is the sub-field of semantics that studies word meaning. It examines semantic aspects of individual words and the vocabulary as a whole. This includes the study of lexical relations between words, such as whether two terms are synonyms or antonyms. Lexical semantics categorizes words based on semantic features they share and groups them into semantic fields unified by a common subject. This information is used to create taxonomies to organize lexical knowledge, for example, by distinguishing between physical and abstract entities and subdividing physical entities into stuff and individuated entities. Further topics of interest are polysemy, ambiguity, and vagueness.
Lexical semantics is sometimes divided into two complementary approaches: semasiology and onomasiology. Semasiology starts from words and examines what their meaning is. It is interested in whether words have one or several meanings and how those meanings are related to one another. Instead of going from word to meaning, onomasiology goes from meaning to word. It starts with a concept and examines what names this concept has or how it can be expressed in a particular language.
Some semanticists also include the study of lexical units other than words in the field of lexical semantics. Compound expressions like being under the weather have a non-literal meaning that acts as a unit and is not a direct function of its parts. Another topic concerns the meaning of morphemes that make up words, for instance, how negative prefixes like in- and dis- affect the meaning of the words they are part of, as in inanimate and dishonest.
Phrasal semantics
Phrasal semantics studies the meaning of sentences. It relies on the principle of compositionality to explore how the meaning of complex expressions arises from the combination of their parts. The different parts can be analyzed as subject, predicate, or argument. The subject of a sentence usually refers to a specific entity while the predicate describes a feature of the subject or an event in which the subject participates. Arguments provide additional information to complete the predicate. For example, in the sentence "Mary hit the ball", Mary is the subject, hit is the predicate, and the ball is an argument. A more fine-grained categorization distinguishes between different semantic roles of words, such as agent, patient, theme, location, source, and goal.
Verbs usually function as predicates and often help to establish connections between different expressions to form a more complex meaning structure. In the expression "Beethoven likes Schubert", the verb like connects a liker to the object of their liking. Other sentence parts modify meaning rather than form new connections. For instance, the adjective red modifies the color of another entity in the expression red car. A further compositional device is variable binding, which is used to determine the reference of a term. For example, the last part of the expression "the woman who likes Beethoven" specifies which woman is meant. Parse trees can be used to show the underlying hierarchy employed to combine the different parts. Various grammatical devices, like the gerund form, also contribute to meaning and are studied by grammatical semantics.
Formal semantics
Formal semantics uses formal tools from logic and mathematics to analyze meaning in natural languages. It aims to develop precise logical formalisms to clarify the relation between expressions and their denotation. One of its key tasks is to provide frameworks of how language represents the world, for example, using ontological models to show how linguistic expressions map to the entities of that model. A common idea is that words refer to individual objects or groups of objects while sentences relate to events and states. Sentences are mapped to a truth value based on whether their description of the world is in correspondence with its ontological model.
Formal semantics further examines how to use formal mechanisms to represent linguistic phenomena such as quantification, intensionality, noun phrases, plurals, mass terms, tense, and modality. Montague semantics is an early and influential theory in formal semantics that provides a detailed analysis of how the English language can be represented using mathematical logic. It relies on higher-order logic, lambda calculus, and type theory to show how meaning is created through the combination of expressions belonging to different syntactic categories.
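The flavor of a Montague-style analysis can be sketched with Python lambdas standing in for typed functions over a finite model. This is only a caricature under stated assumptions: the three individuals and two predicates are invented, and genuine Montague semantics uses typed lambda calculus and intensional logic rather than plain sets.

```python
# A toy Montague-style fragment: denotations as functions over a
# tiny finite model. Model contents are invented for illustration.

domain = {"rex", "fido", "felix"}
dog = lambda x: x in {"rex", "fido"}   # [[dog]]   : entity -> bool
barks = lambda x: x in {"rex"}         # [[barks]] : entity -> bool

# Determiners denote relations between two predicates.
every = lambda p: lambda q: all(q(x) for x in domain if p(x))
some = lambda p: lambda q: any(q(x) for x in domain if p(x))

# "Some dog barks" and "Every dog barks" compose by application:
print(some(dog)(barks))   # True: rex is a barking dog
print(every(dog)(barks))  # False: fido does not bark
```

Here the syntactic categories (noun, verb, determiner) correspond to function types, and combining expressions of different categories is just function application, which is the core idea of the type-driven approach.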
Dynamic semantics is a subfield of formal semantics that focuses on how information grows over time. According to it, "meaning is context change potential": the meaning of a sentence is not given by the information it contains but by the information change it brings about relative to a context.
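The slogan "meaning is context change potential" can be given a minimal sketch: model a context as the set of possibilities still considered live, and model a sentence's meaning as the update it performs on that context. The worlds and facts below are invented for illustration.

```python
# Dynamic semantics sketch: a sentence's meaning is an update function
# on a context rather than a static truth value.

def meaning(proposition):
    """Context-change potential: eliminate possibilities incompatible
    with the proposition."""
    def update(context):
        return [world for world in context if proposition(world)]
    return update

# A context of four candidate worlds, each a dict of facts.
context = [
    {"raining": True,  "cold": True},
    {"raining": True,  "cold": False},
    {"raining": False, "cold": True},
    {"raining": False, "cold": False},
]

it_rains = meaning(lambda w: w["raining"])

# Uttering "it rains" shrinks the context to the raining worlds.
updated = it_rains(context)
```

The information a sentence carries is thus identified with the difference between the context before and after the utterance.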
Cognitive semantics
Cognitive semantics studies the problem of meaning from a psychological perspective or how the mind of the language user affects meaning. As a subdiscipline of cognitive linguistics, it sees language as a wide cognitive ability that is closely related to the conceptual structures used to understand and represent the world. Cognitive semanticists do not draw a sharp distinction between linguistic knowledge and knowledge of the world and see them instead as interrelated phenomena. They study how the interaction between language and human cognition affects the conceptual organization in very general domains like space, time, causation, and action. The contrast between profile and base is sometimes used to articulate the underlying knowledge structure. The profile of a linguistic expression is the aspect of the knowledge structure that it brings to the foreground while the base is the background that provides the context of this aspect without being at the center of attention. For example, the profile of the word hypotenuse is a straight line while the base is a right-angled triangle of which the hypotenuse forms a part.
Cognitive semantics further compares the conceptual patterns and linguistic typologies across languages and considers to what extent the cognitive conceptual structures of humans are universal or relative to their linguistic background. Another research topic concerns the psychological processes involved in the application of grammar. Other investigated phenomena include categorization, which is understood as a cognitive heuristic to avoid information overload by regarding different entities in the same way, and embodiment, which concerns how the language user's bodily experience affects the meaning of expressions.
Frame semantics is an important subfield of cognitive semantics. Its central idea is that the meaning of terms cannot be understood in isolation from each other but needs to be analyzed on the background of the conceptual structures they depend on. These structures are made explicit in terms of semantic frames. For example, words like bride, groom, and honeymoon evoke in the mind the frame of marriage.
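The idea that words evoke a background frame can be sketched as a lookup from words to the conceptual structure they depend on. The tiny lexicon below is hypothetical and only illustrates the marriage example.

```python
# Frame semantics sketch: words are understood against the frame they
# evoke. The frames and role lists here are invented for illustration.

FRAMES = {
    "marriage": {"roles": ["bride", "groom"], "events": ["wedding", "honeymoon"]},
    "commerce": {"roles": ["buyer", "seller"], "events": ["sale", "payment"]},
}

def evoked_frame(word):
    """Return the name of the frame a word belongs to, if any."""
    for name, frame in FRAMES.items():
        if word in frame["roles"] or word in frame["events"]:
            return name
    return None
```

On this picture, understanding bride involves activating the whole marriage frame, which also supplies the related roles and events without them being mentioned.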
Others
Conceptual semantics shares with cognitive semantics the idea of studying linguistic meaning from a psychological perspective by examining how humans conceptualize and experience the world. It holds that meaning is not about the objects to which expressions refer but about the cognitive structure of human concepts that connect thought, perception, and action. Conceptual semantics differs from cognitive semantics by introducing a strict distinction between meaning and syntax and by relying on various formal devices to explore the relation between meaning and cognition.
Computational semantics examines how the meaning of natural language expressions can be represented and processed on computers. It often relies on the insights of formal semantics and applies them to problems that can be computationally solved. Some of its key problems include computing the meaning of complex expressions by analyzing their parts, handling ambiguity, vagueness, and context-dependence, and using the extracted information in automatic reasoning. It forms part of computational linguistics, artificial intelligence, and cognitive science. Its applications include machine learning and machine translation.
Cultural semantics studies the relation between linguistic meaning and culture. It compares conceptual structures in different languages and is interested in how meanings evolve and change because of cultural phenomena associated with politics, religion, and customs. For example, address practices encode cultural values and social hierarchies, as in the politeness distinction between tú and usted in Spanish or du and Sie in German, in contrast to English, which lacks these distinctions and uses the pronoun you in either case. Closely related fields are intercultural semantics, cross-cultural semantics, and comparative semantics.

Pragmatic semantics studies how the meaning of an expression is shaped by the situation in which it is used. It is based on the idea that communicative meaning is usually context-sensitive and depends on who participates in the exchange, what information they share, and what their intentions and background assumptions are. It focuses on communicative actions, of which linguistic expressions only form one part. Some theorists include these topics within the scope of semantics while others consider them part of the distinct discipline of pragmatics.
Theories of meaning
Theories of meaning explain what meaning is, what meaning an expression has, and how the relation between expression and meaning is established.
Referential
Referential theories state that the meaning of an expression is the entity to which it points. The meaning of singular terms like names is the individual to which they refer. For example, the meaning of the name George Washington is the person with this name. General terms refer not to a single entity but to the set of objects to which this term applies. In this regard, the meaning of the term cat is the set of all cats. Similarly, verbs usually refer to classes of actions or events and adjectives refer to properties of individuals and events.
Simple referential theories face problems for meaningful expressions that have no clear referent. Names like Pegasus and Santa Claus have meaning even though they do not point to existing entities. Other difficulties concern cases in which different expressions are about the same entity. For instance, the expressions Roger Bannister and the first man to run a four-minute mile refer to the same person but do not mean exactly the same thing. This is particularly relevant when talking about beliefs since a person may understand both expressions without knowing that they point to the same entity. A further problem is given by expressions whose meaning depends on the context, like the deictic terms here and I.
To avoid these problems, referential theories often introduce additional devices. Some identify meaning not directly with objects but with functions that point to objects. This additional level has the advantage of taking the context of an expression into account since the same expression may point to one object in one context and to another object in a different context. For example, the reference of the word here depends on the location in which it is used. A closely related approach is possible world semantics, which allows expressions to refer not only to entities in the actual world but also to entities in other possible worlds. According to this view, expressions like the first man to run a four-minute mile refer to different persons in different worlds. This view can also be used to analyze sentences that talk about what is possible or what is necessary: possibility is what is true in some possible worlds while necessity is what is true in all possible worlds.
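The idea of an intension as a function from possible worlds to referents can be sketched directly. The worlds and facts below are invented for illustration (in the actual world the referent is Roger Bannister; the alternative world is purely hypothetical).

```python
# Possible world semantics sketch: the intension of a description maps
# each world to the entity satisfying it in that world.

worlds = {
    "actual":      {"first_four_minute_miler": "Roger Bannister"},
    "alternative": {"first_four_minute_miler": "John Landy"},
}

def intension(attribute):
    """Return a function from worlds to referents for a description."""
    return lambda world: worlds[world][attribute]

first_miler = intension("first_four_minute_miler")
```

The same mechanism handles context-dependence: the reference of an expression is computed only once a world (or context) is supplied as an argument.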
Ideational
Ideational theories, also called mentalist theories, are not primarily interested in the reference of expressions and instead explain meaning in terms of the mental states of language users. One historically influential approach articulated by John Locke holds that expressions stand for ideas in the speaker's mind. According to this view, the meaning of the word dog is the idea that people have of dogs. Language is seen as a medium used to transfer ideas from the speaker to the audience. After having learned the same meaning of signs, the speaker can produce a sign that corresponds to the idea in their mind and the perception of this sign evokes the same idea in the mind of the audience.
A closely related theory focuses not directly on ideas but on intentions. This view is particularly associated with Paul Grice, who observed that people usually communicate to cause some reaction in their audience. He held that the meaning of an expression is given by the intended reaction. This means that communication is not just about decoding what the speaker literally said but requires an understanding of their intention or why they said it. For example, telling someone looking for petrol that "there is a garage around the corner" has the meaning that petrol can be obtained there because of the speaker's intention to help. This goes beyond the literal meaning, which has no explicit connection to petrol.
Causal
Causal theories hold that the meaning of an expression depends on the causes and effects it has. According to behaviorist semantics, also referred to as stimulus-response theory, the meaning of an expression is given by the situation that prompts the speaker to use it and the response it provokes in the audience. For instance, the meaning of yelling "Fire!" is given by the presence of an uncontrolled fire and attempts to control it or seek safety. Behaviorist semantics relies on the idea that learning a language consists in adopting behavioral patterns in the form of stimulus-response pairs. One of its key motivations is to avoid private mental entities and define meaning instead in terms of publicly observable language behavior.
Another causal theory focuses on the meaning of names and holds that a naming event is required to establish the link between name and named entity. This naming event acts as a form of baptism that establishes the first link of a causal chain in which all subsequent uses of the name participate. According to this view, the name Plato refers to an ancient Greek philosopher because, at some point, he was originally named this way and people kept using this name to refer to him. This view was originally formulated by Saul Kripke to apply to names only but has been extended to cover other types of speech as well.
Others
Truth-conditional semantics analyzes the meaning of sentences in terms of their truth conditions. According to this view, to understand a sentence means to know what the world needs to be like for the sentence to be true. Truth conditions can themselves be expressed through possible worlds. For example, the sentence "Hillary Clinton won the 2016 American presidential election" is false in the actual world but there are some possible worlds in which it is true. The extension of a sentence can be interpreted as its truth value while its intension is the set of all possible worlds in which it is true. Truth-conditional semantics is closely related to verificationist theories, which introduce the additional idea that there should be some kind of verification procedure to assess whether a sentence is true. They state that the meaning of a sentence consists in the method to verify it or in the circumstances that justify it. For instance, scientific claims often make predictions, which can be used to confirm or disconfirm them using observation. According to verificationism, sentences that can neither be verified nor falsified are meaningless.
The use theory states that the meaning of an expression is given by the way it is utilized. This view was first introduced by Ludwig Wittgenstein, who understood language as a collection of language games. The meaning of expressions depends on how they are used inside a game and the same expression may have different meanings in different games. Some versions of this theory identify meaning directly with patterns of regular use. Others focus on social norms and conventions by additionally taking into account whether a certain use is considered appropriate in a given society.
Inferentialist semantics, also called conceptual role semantics, holds that the meaning of an expression is given by the role it plays in the premises and conclusions of good inferences. For example, one can infer from "x is a male sibling" that "x is a brother" and one can infer from "x is a brother" that "x has parents". According to inferentialist semantics, the meaning of the word brother is determined by these and all similar inferences that can be drawn.
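The brother example can be sketched as a small forward-chaining procedure over inference rules: the meaning of a term is identified with the set of conclusions it licenses. The rule list is a hypothetical fragment, not a complete analysis.

```python
# Inferentialist sketch: meaning as inferential role. Each rule licenses
# an inference from a premise description to a conclusion description.

RULES = [
    ("male sibling", "brother"),
    ("brother", "has parents"),
]

def consequences(fact):
    """Return everything inferable from a starting fact via RULES."""
    derived = {fact}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

On this view, grasping the word brother just is mastering the transitions the rules encode.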
History
Semantics was established as an independent field of inquiry in the 19th century but the study of semantic phenomena began as early as the ancient period as part of philosophy and logic. In ancient Greece, Plato (427–347 BCE) explored the relation between names and things in his dialogue Cratylus. It considers the positions of naturalism, which holds that things have their name by nature, and conventionalism, which states that names are related to their referents by customs and conventions among language users. The book On Interpretation by Aristotle (384–322 BCE) introduced various conceptual distinctions that greatly influenced subsequent works in semantics. He developed an early form of the semantic triangle by holding that spoken and written words evoke mental concepts, which refer to external things by resembling them. For him, mental concepts are the same for all humans, unlike the conventional words they associate with those concepts. The Stoics incorporated many of the insights of their predecessors to develop a complex theory of language through the perspective of logic. They discerned different kinds of words by their semantic and syntactic roles, such as the contrast between names, common nouns, and verbs. They also discussed the difference between statements, commands, and prohibitions.
In ancient India, the orthodox school of Nyaya held that all names refer to real objects. It explored how words lead to an understanding of the thing meant and what consequence this relation has to the creation of knowledge. Philosophers of the orthodox school of Mīmāṃsā discussed the relation between the meanings of individual words and full sentences while considering which one is more basic. The book Vākyapadīya by Bhartṛhari (4th–5th century CE) distinguished between different types of words and considered how they can carry different meanings depending on how they are used. In ancient China, the Mohists argued that names play a key role in making distinctions to guide moral behavior. They inspired the School of Names, which explored the relation between names and entities while examining how names are required to identify and judge entities.
In the Middle Ages, Augustine of Hippo (354–430) developed a general conception of signs as entities that stand for other entities and convey them to the intellect. He was the first to introduce the distinction between natural and linguistic signs as different types belonging to a common genus. Boethius (480–528) wrote a translation of and various comments on Aristotle's book On Interpretation, which popularized its main ideas and inspired reflections on semantic phenomena in the scholastic tradition. An innovation in the semantics of Peter Abelard (1079–1142) was his interest in propositions or the meaning of sentences in contrast to the focus on the meaning of individual words by many of his predecessors. He further explored the nature of universals, which he understood as mere semantic phenomena of common names caused by mental abstractions that do not refer to any entities. In the Arabic tradition, Ibn Faris (920–1004) identified meaning with the intention of the speaker while Abu Mansur al-Azhari (895–980) held that meaning resides directly in speech and needs to be extracted through interpretation.
An important topic towards the end of the Middle Ages was the distinction between categorematic and syncategorematic terms. Categorematic terms have an independent meaning and refer to some part of reality, like horse and Socrates. Syncategorematic terms lack independent meaning and fulfill other semantic functions, such as modifying or quantifying the meaning of other expressions, like the words some, not, and necessarily. An early version of the causal theory of meaning was proposed by Roger Bacon (c. 1219/20 – c. 1292), who held that things get names similar to how people get names through some kind of initial baptism. His ideas inspired the tradition of the speculative grammarians, who proposed that there are certain universal structures found in all languages. They arrived at this conclusion by drawing an analogy between the modes of signification on the level of language, the modes of understanding on the level of mind, and the modes of being on the level of reality.
In the early modern period, Thomas Hobbes (1588–1679) distinguished between marks, which people use privately to recall their own thoughts, and signs, which are used publicly to communicate their ideas to others. In their Port-Royal Logic, Antoine Arnauld (1612–1694) and Pierre Nicole (1625–1695) developed an early precursor of the distinction between intension and extension. The Essay Concerning Human Understanding by John Locke (1632–1704) presented an influential version of the ideational theory of meaning, according to which words stand for ideas and help people communicate by transferring ideas from one mind to another. Gottfried Wilhelm Leibniz (1646–1716) understood language as the mirror of thought and tried to conceive the outlines of a universal formal language to express scientific and philosophical truths. This attempt inspired theorists Christian Wolff (1679–1754), Georg Bernhard Bilfinger (1693–1750), and Johann Heinrich Lambert (1728–1777) to develop the idea of a general science of sign systems. Étienne Bonnot de Condillac (1715–1780) accepted and further developed Leibniz's idea of the linguistic nature of thought. Against Locke, he held that language is involved in the creation of ideas and is not merely a medium to communicate them.
In the 19th century, semantics emerged and solidified as an independent field of inquiry. Christian Karl Reisig (1792–1829) is sometimes credited as the father of semantics since he clarified its concept and scope while also making various contributions to its key ideas. Michel Bréal (1832–1915) followed him in providing a broad conception of the field, for which he coined the French term sémantique. John Stuart Mill (1806–1873) gave great importance to the role of names to refer to things. He distinguished between the connotation and denotation of names and held that propositions are formed by combining names. Charles Sanders Peirce (1839–1914) conceived semiotics as a general theory of signs with several subdisciplines, which were later identified by Charles W. Morris (1901–1979) as syntactics, semantics, and pragmatics. In his pragmatist approach to semantics, Peirce held that the meaning of conceptions consists in the entirety of their practical consequences. The philosophy of Gottlob Frege (1848–1925) contributed to semantics on many different levels. Frege first introduced the distinction between sense and reference, and his development of predicate logic and the principle of compositionality formed the foundation of many subsequent developments in formal semantics. Edmund Husserl (1859–1938) explored meaning from a phenomenological perspective by considering the mental acts that endow expressions with meaning. He held that meaning always implies reference to an object and that expressions which lack a referent, like the word series "green is or", are meaningless.
In the 20th century, Alfred Tarski (1901–1983) defined truth in formal languages through his semantic theory of truth, which was influential in the development of truth-conditional semantics by Donald Davidson (1917–2003). Tarski's student Richard Montague (1930–1971) formulated a complex formal framework of the semantics of the English language, which was responsible for establishing formal semantics as a major area of research. According to structural semantics, which was inspired by the structuralist philosophy of Ferdinand de Saussure (1857–1913), language is a complex network of structural relations and the meanings of words are not fixed individually but depend on their position within this network. The theory of general semantics was developed by Alfred Korzybski (1879–1950) as an inquiry into how language represents reality and affects human thought. The contributions of George Lakoff (1941–present) and Ronald Langacker (1942–present) provided the foundation of cognitive semantics. Charles J. Fillmore (1929–2014) developed frame semantics as a major approach in this area. The closely related field of conceptual semantics was inaugurated by Ray Jackendoff (1945–present).
In various disciplines
Logic
Logicians study correct reasoning and often develop formal languages to express arguments and assess their correctness. One part of this process is to provide a semantics for a formal language to precisely define what its terms mean. A semantics of a formal language is a set of rules, usually expressed as a mathematical function, that assigns meanings to formal language expressions. For example, the language of first-order logic uses lowercase letters for individual constants and uppercase letters for predicates. To express the sentence "Bertie is a dog", the formula D(b) can be used, where b is an individual constant for Bertie and D is a predicate for dog. Classical model-theoretic semantics assigns meaning to these terms by defining an interpretation function that maps individual constants to specific objects and predicates to sets of objects or tuples. The function maps b to Bertie and D to the set of all dogs. This way, it is possible to calculate the truth value of the sentence: it is true if Bertie is a member of the set of dogs and false otherwise.
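The interpretation function for the Bertie example can be sketched as a dictionary mapping constants to objects and predicates to extensions; an atomic formula is true when the constant's referent lies in the predicate's extension. The model below is invented for illustration.

```python
# Model-theoretic interpretation sketch for "Bertie is a dog",
# using b for the individual constant and D for the dog predicate.

interpretation = {
    "b": "Bertie",                    # individual constant -> object
    "D": {"Bertie", "Fido", "Rex"},   # predicate -> set of objects
}

def evaluate(predicate, constant):
    """Truth value of an atomic formula such as D(b) in the model."""
    return interpretation[constant] in interpretation[predicate]
```

The sentence comes out true in this model because the referent of b is a member of the extension of D.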
Formal logic aims to determine whether arguments are deductively valid, that is, whether the premises entail the conclusion. Entailment can be defined in terms of syntax or in terms of semantics. Syntactic entailment, expressed with the symbol ⊢, relies on rules of inference, which can be understood as procedures to transform premises and arrive at a conclusion. These procedures only take the logical form of the premises on the level of syntax into account and ignore what meaning they express. Semantic entailment, expressed with the symbol ⊨, looks at the meaning of the premises, in particular, at their truth value. A conclusion follows semantically from a set of premises if the truth of the premises ensures the truth of the conclusion, that is, if any semantic interpretation function that assigns the premises the value true also assigns the conclusion the value true.
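For propositional logic, semantic entailment can be checked by brute force: enumerate every truth assignment and verify that whenever all premises are true, the conclusion is true as well. Formulas are represented here as Python functions of an assignment; this is a sketch, not an efficient decision procedure.

```python
# Semantic entailment checker for propositional logic.
from itertools import product

def entails(premises, conclusion, variables):
    """True if every assignment making all premises true also makes
    the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premises):
            if not conclusion(assignment):
                return False
    return True

# Formulas as functions of an assignment.
p = lambda a: a["p"]
q = lambda a: a["q"]
p_implies_q = lambda a: (not a["p"]) or a["q"]
```

Modus ponens comes out valid on this definition (p together with p → q entails q), while p alone does not entail q.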
Computer science
In computer science, the semantics of a program is how it behaves when a computer runs it. Semantics contrasts with syntax, which is the particular form in which instructions are expressed. The same behavior can usually be described with different forms of syntax. In JavaScript, this is the case for the commands i += 1 and i = i + 1, which are syntactically different expressions to increase the value of the variable i by one. This difference is also reflected in different programming languages since they rely on different syntax but can usually be employed to create programs with the same behavior on the semantic level.
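The same contrast carries over to Python, which also allows both forms; the two functions below are syntactically different but semantically identical in their observable behavior.

```python
# Two syntactically different statements with the same semantics.

def increment_augmented(i):
    i += 1          # augmented assignment
    return i

def increment_expanded(i):
    i = i + 1       # expanded form, same run-time behavior
    return i
```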
Static semantics focuses on semantic aspects that affect the compilation of a program. In particular, it is concerned with detecting errors of syntactically correct programs, such as type errors, which arise when an operation receives an incompatible data type. This is the case, for instance, if a function performing a numerical calculation is given a string instead of a number as an argument. Dynamic semantics focuses on the run time behavior of programs, that is, what happens during the execution of instructions. The main approaches to dynamic semantics are denotational, axiomatic, and operational semantics. Denotational semantics relies on mathematical formalisms to describe the effects of each element of the code. Axiomatic semantics uses deductive logic to analyze which conditions must be in place before and after the execution of a program. Operational semantics interprets the execution of a program as a series of steps, each involving the transition from one state to another state.
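The string-instead-of-number error can be demonstrated in Python, with the caveat that Python, being dynamically typed, reports the error at run time, whereas a statically typed language would reject the program during compilation as part of its static semantics.

```python
# Type-error sketch: a numerical function given a string argument.

def add_one(x):
    """A numerical calculation; passing a string is a type error."""
    return x + 1

try:
    add_one("abc")          # string where a number is expected
    error_raised = False
except TypeError:
    error_raised = True     # Python detects the mismatch at run time
```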
Psychology
Psychological semantics examines psychological aspects of meaning. It is concerned with how meaning is represented on a cognitive level and what mental processes are involved in understanding and producing language. It further investigates how meaning interacts with other mental processes, such as the relation between language and perceptual experience. Other issues concern how people learn new words and relate them to familiar things and concepts, how they infer the meaning of compound expressions they have never heard before, how they resolve ambiguous expressions, and how semantic illusions lead them to misinterpret sentences.
One key topic is semantic memory, which is a form of general knowledge of meaning that includes the knowledge of language, concepts, and facts. It contrasts with episodic memory, which records events that a person experienced in their life. The comprehension of language relies on semantic memory and the information it carries about word meanings. According to a common view, word meanings are stored and processed in relation to their semantic features. The feature comparison model states that sentences like "a robin is a bird" are assessed on a psychological level by comparing the semantic features of the word robin with the semantic features of the word bird. The assessment process is fast if their semantic features are similar, which is the case if the example is a prototype of the general category. For atypical examples, as in the sentence "a penguin is a bird", there is less overlap in the semantic features and the psychological process is significantly slower.
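The feature comparison model can be sketched by representing words as sets of semantic features and measuring their overlap; greater overlap corresponds to faster verification. The feature sets below are invented purely to reproduce the robin/penguin asymmetry.

```python
# Feature comparison sketch: overlap between a word's features and a
# category's features predicts verification speed.

FEATURES = {
    "bird":    {"has_feathers", "lays_eggs", "flies", "sings", "small"},
    "robin":   {"has_feathers", "lays_eggs", "flies", "sings", "small"},
    "penguin": {"has_feathers", "lays_eggs", "swims", "large"},
}

def overlap(word, category):
    """Proportion of shared features (Jaccard similarity)."""
    a, b = FEATURES[word], FEATURES[category]
    return len(a & b) / len(a | b)
```

The prototype robin overlaps almost completely with the bird category, while the atypical penguin shares far fewer features, matching the slower psychological verification reported for atypical examples.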
See also
Contronym
Reflexivity (social theory)
In epistemology, and more specifically, the sociology of knowledge, reflexivity refers to circular relationships between cause and effect, especially as embedded in human belief structures. A reflexive relationship is multi-directional when the causes and the effects affect the reflexive agent in a layered or complex sociological relationship. The complexity of this relationship is compounded when epistemology includes religion.
Within sociology more broadly—the field of origin—reflexivity means an act of self-reference where existence engenders examination, by which the thinking action "bends back on", refers to, and affects the entity instigating the action or examination. It commonly refers to the capacity of an agent to recognise forces of socialisation and alter their place in the social structure. A low level of reflexivity would result in individuals shaped largely by their environment (or "society"). A high level of social reflexivity would be defined by individuals shaping their own norms, tastes, politics, desires, and so on. This is similar to the notion of autonomy. (See also structure and agency and social mobility.)
Within economics, reflexivity refers to the self-reinforcing effect of market sentiment, whereby rising prices attract buyers whose actions drive prices higher still until the process becomes unsustainable. This is an instance of a positive feedback loop. The same process can operate in reverse leading to a catastrophic collapse in prices.
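The boom-and-bust dynamic described above can be caricatured in a toy simulation: while sentiment is positive, rising prices attract buyers and reinforce the rise; once the trend becomes unsustainable, selling becomes self-reinforcing in the other direction. All parameters (growth rates, the threshold) are arbitrary illustrations, not an economic model.

```python
# Toy reflexive price process: a positive feedback boom followed by a
# self-reinforcing collapse once a (hypothetical) sustainability limit
# is crossed.

def simulate(steps=30, price=100.0, limit=200.0):
    """Return the price history of a boom-and-bust feedback loop."""
    history = [price]
    crashed = False
    for _ in range(steps):
        if price >= limit:
            crashed = True          # the trend becomes unsustainable
        price *= 0.85 if crashed else 1.05
        history.append(price)
    return history

history = simulate()
```

The run rises well above its starting point before collapsing far below the peak, the shape of the positive feedback loop the text describes.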
Overview
In social theory, reflexivity may occur when theories in a discipline should apply equally to the discipline itself; for example, in the case that the theories of knowledge construction in the field of sociology of scientific knowledge should apply equally to knowledge construction by sociology of scientific knowledge practitioners, or when the subject matter of a discipline should apply equally to the individual practitioners of that discipline (e.g., when psychological theory should explain the psychological processes of psychologists). More broadly, reflexivity is considered to occur when the observations of observers in the social system affect the very situations they are observing, or when theory being formulated is disseminated to and affects the behaviour of the individuals or systems the theory is meant to be objectively modelling. Thus, for example, an anthropologist living in an isolated village may affect the village and the behaviour of its citizens under study. The observations are not independent of the participation of the observer.
Reflexivity is, therefore, a methodological issue in the social sciences analogous to the observer effect. Within that part of recent sociology of science that has been called the strong programme, reflexivity is suggested as a methodological norm or principle, meaning that a full theoretical account of the social construction of, say, scientific, religious or ethical knowledge systems, should itself be explainable by the same principles and methods as used for accounting for these other knowledge systems. This points to a general feature of naturalised epistemologies, that such theories of knowledge allow for specific fields of research to elucidate other fields as part of an overall self-reflective process: any particular field of research occupied with aspects of knowledge processes in general (e.g., history of science, cognitive science, sociology of science, psychology of perception, semiotics, logic, neuroscience) may reflexively study other such fields yielding to an overall improved reflection on the conditions for creating knowledge.
Reflexivity includes both a subjective process of self-consciousness inquiry and the study of social behaviour with reference to theories about social relationships.
History
The principle of reflexivity was perhaps first enunciated by the sociologists William I. Thomas and Dorothy Swaine Thomas, in their 1928 book The child in America: "If men define situations as real, they are real in their consequences". The theory was later termed the "Thomas theorem".
Sociologist Robert K. Merton (1948, 1949) built on the Thomas principle to define the notion of a self-fulfilling prophecy: that once a prediction or prophecy is made, actors may accommodate their behaviours and actions so that a statement that would have been false becomes true or, conversely, a statement that would have been true becomes false, as a consequence of the prediction or prophecy being made. The prophecy has a constitutive impact on the outcome or result, changing the outcome from what would otherwise have happened.
Reflexivity was taken up as an issue in science in general by Karl Popper (1957), who in his book The poverty of historicism highlighted the influence of a prediction upon the event predicted, calling this the 'Oedipus effect' in reference to the Greek tale in which the sequence of events fulfilling the Oracle's prophecy is greatly influenced by the prophecy itself. Popper initially considered such self-fulfilling prophecy a distinguishing feature of social science, but later came to see that in the natural sciences, particularly biology and even molecular biology, something equivalent to expectation comes into play and can act to bring about that which has been expected. It was also taken up by Ernest Nagel (1961). Reflexivity presents a problem for science because if a prediction can lead to changes in the system that the prediction is made in relation to, it becomes difficult to assess scientific hypotheses by comparing the predictions they entail with the events that actually occur. The problem is even more difficult in the social sciences.
Reflexivity has been taken up as the issue of "reflexive prediction" in economic science by Grunberg and Modigliani (1954) and Herbert A. Simon (1954), has been debated as a major issue in relation to the Lucas critique, and has been raised as a methodological issue in economic science arising from the issue of reflexivity in the sociology of scientific knowledge (SSK) literature.
Reflexivity has emerged as both an issue and a solution in modern approaches to the problem of structure and agency, for example in the work of Anthony Giddens in his structuration theory and Pierre Bourdieu in his genetic structuralism.
Giddens, for example, noted that constitutive reflexivity is possible in any social system, and that this presents a distinct methodological problem for the social sciences. Giddens accentuated this theme with his notion of "reflexive modernity" – the argument that, over time, society is becoming increasingly more self-aware, reflective, and hence reflexive.
Bourdieu argued that the social scientist is inherently laden with biases, and only by becoming reflexively aware of those biases can the social scientists free themselves from them and aspire to the practice of an objective science. For Bourdieu, therefore, reflexivity is part of the solution, not the problem.
Michel Foucault's The order of things can be said to touch on the issue of reflexivity. Foucault examines the history of Western thought since the Renaissance and argues that each historical epoch (he identifies three and proposes a fourth) has an episteme, or "a historical a priori", that structures and organises knowledge. Foucault argues that the concept of man emerged in the early 19th century, what he calls the "Age of Man", with the philosophy of Immanuel Kant. He finishes the book by posing the problem of the age of man and our pursuit of knowledge, where "man is both knowing subject and the object of his own study"; thus, Foucault argues that the social sciences, far from being objective, produce truth in their own mutually exclusive discourses.
In economics
Economic philosopher George Soros, influenced by ideas put forward by his tutor, Karl Popper (1957), has been an active promoter of the relevance of reflexivity to economics, first propounding it publicly in his 1987 book The alchemy of finance. He regards his insights into market behaviour from applying the principle as a major factor in the success of his financial career.
Reflexivity is inconsistent with general equilibrium theory, which stipulates that markets move towards equilibrium and that non-equilibrium fluctuations are merely random noise that will soon be corrected. In equilibrium theory, prices in the long run at equilibrium reflect the underlying economic fundamentals, which are unaffected by prices. Reflexivity asserts that prices do in fact influence the fundamentals and that these newly influenced sets of fundamentals then proceed to change expectations, thus influencing prices; the process continues in a self-reinforcing pattern. Because the pattern is self-reinforcing, markets tend towards disequilibrium. Sooner or later they reach a point where the sentiment is reversed and negative expectations become self-reinforcing in the downward direction, thereby explaining the familiar pattern of boom and bust cycles. An example Soros cites is the procyclical nature of lending, that is, the willingness of banks to ease lending standards for real estate loans when prices are rising, then raising standards when real estate prices are falling, reinforcing the boom and bust cycle. He further suggests that property price inflation is essentially a reflexive phenomenon: house prices are influenced by the sums that banks are prepared to advance for their purchase, and these sums are determined by the banks' estimation of the prices that the property would command.
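The self-reinforcing loop described above can be sketched as a toy simulation. This is purely illustrative: the update rules, parameter values, and reversal thresholds are invented assumptions for demonstration, not a model Soros himself specifies.

```python
# Toy sketch of a reflexive boom-bust feedback loop (illustrative only;
# the rules and parameters below are invented, not Soros's own model).

def simulate(steps=60, price=100.0, fundamentals=100.0):
    prices = []
    sentiment = 1  # +1: positive expectations, -1: negative
    for _ in range(steps):
        # Expectations move prices (self-reinforcing in either direction).
        price *= 1 + sentiment * 0.05
        # Reflexivity: prices feed back into the fundamentals themselves,
        # e.g. rising collateral values make banks willing to lend more.
        fundamentals += 0.02 * (price - fundamentals)
        # Sentiment eventually reverses once prices diverge too far.
        if price > 1.5 * fundamentals:
            sentiment = -1   # boom turns to bust
        elif price < 0.75 * fundamentals:
            sentiment = 1    # bust bottoms out
        prices.append(round(price, 2))
    return prices

prices = simulate()
```

Unlike an equilibrium model, the price series never settles: because prices alter the fundamentals they are supposed to reflect, the system cycles through booms and busts rather than converging.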
Soros has often claimed that his grasp of the principle of reflexivity is what has given him his "edge" and that it is the major factor contributing to his successes as a trader. For several decades there was little sign of the principle being accepted in mainstream economic circles, but there has been an increase of interest following the crash of 2008, with academic journals, economists, and investors discussing his theories.
Economist and former columnist of the Financial Times, Anatole Kaletsky, argued that Soros' concept of reflexivity is useful in understanding China's economy and how the Chinese government manages it.
In 2009, Soros funded the launch of the Institute for New Economic Thinking with the hope that it would develop reflexivity further. The Institute works with several types of heterodox economics, particularly the post-Keynesian branch.
In sociology
Margaret Archer has written extensively on laypeople's reflexivity. For her, human reflexivity is a mediating mechanism between structural properties, or the individual's social context, and action, or the individual's ultimate concerns. Reflexive activity, according to Archer, increasingly takes the place of habitual action in late modernity since routine forms prove ineffective in dealing with the complexity of modern life trajectories.
While Archer emphasises the agentic aspect of reflexivity, reflexive orientations can themselves be seen as being "socially and temporally embedded". For example, Elster points out that reflexivity cannot be understood without taking into account the fact that it draws on background configurations (e.g., shared meanings, as well as past social engagement and lived experiences of the social world) to be operative.
In anthropology
In anthropology, reflexivity has come to have two distinct meanings, one that refers to the researcher's awareness of an analytic focus on his or her relationship to the field of study, and the other that attends to the ways that cultural practices involve consciousness and commentary on themselves.
The first sense of reflexivity in anthropology is part of social science's more general self-critique in the wake of theories by Michel Foucault and others about the relationship of power and knowledge production. Reflexivity about the research process became an important part of the critique of the colonial roots and scientistic methods of anthropology in the "writing cultures" movement associated with James Clifford and George Marcus, as well as many other anthropologists. Rooted in literary criticism and philosophical analysis of the relationship among anthropologists, the people represented in texts, and their textual representations, this approach has fundamentally changed ethical and methodological approaches in anthropology. As with the feminist and anti-colonial critiques that provide some of reflexive anthropology's inspiration, the reflexive understanding of the academic and political power of representations and the analysis of the process of "writing culture" have become a necessary part of understanding the situation of the ethnographer in the fieldwork situation. Objectification of people and cultures and analysis of them only as objects of study has been largely rejected in favor of developing more collaborative approaches that respect local people's values and goals. Nonetheless, many anthropologists have accused the "writing cultures" approach of muddying the scientific aspects of anthropology with too much introspection about fieldwork relationships, and reflexive anthropology has been heavily attacked by more positivist anthropologists. Considerable debate continues in anthropology over the role of postmodernism and reflexivity, but most anthropologists accept the value of the critical perspective, and generally argue only about the relevance of critical models that seem to lead anthropology away from its earlier core foci.
The second kind of reflexivity studied by anthropologists involves varieties of self-reference in which people and cultural practices call attention to themselves. One important origin for this approach is Roman Jakobson in his studies of deixis and the poetic function in language, but the work of Mikhail Bakhtin on carnival has also been important. Within anthropology, Gregory Bateson developed ideas about meta-messages (subtext) as part of communication, while Clifford Geertz's studies of ritual events such as the Balinese cock-fight point to their role as foci for public reflection on the social order. Studies of play and tricksters further expanded ideas about reflexive cultural practices. Reflexivity has been most intensively explored in studies of performance, public events, rituals, and linguistic forms but can be seen any time acts, things, or people are held up and commented upon or otherwise set apart for consideration. In researching cultural practices, reflexivity plays an important role, but because of its complexity and subtlety, it often goes under-investigated or involves highly specialised analyses.
One use of studying reflexivity is in connection to authenticity. Cultural traditions are often imagined as perpetuated as stable ideals by uncreative actors. Innovation may or may not change tradition, but since reflexivity is intrinsic to many cultural activities, reflexivity is part of tradition and not inauthentic. The study of reflexivity shows that people have both self-awareness and creativity in culture. They can play with, comment upon, debate, modify, and objectify culture through manipulating many different features in recognised ways. This leads to the metaculture of conventions about managing and reflecting upon culture.
In international relations
In international relations, the question of reflexivity was first raised in the context of the so-called ‘Third Debate’ of the late 1980s. This debate marked a break with the positivist orthodoxy of the discipline. The post-positivist theoretical restructuring was seen to introduce reflexivity as a cornerstone of critical scholarship. For Mark Neufeld, reflexivity in International Relations was characterized by 1) self-awareness of underlying premises, 2) an acknowledgment of the political-normative dimension of theoretical paradigms, and 3) the affirmation that judgement about the merits of paradigms is possible despite the impossibility of neutral or apolitical knowledge production.
Since the nineties, reflexivity has become an explicit concern of constructivist, poststructuralist, feminist, and other critical approaches to International Relations. In The Conduct of Inquiry in International Relations, Patrick Thaddeus Jackson identified reflexivity as one of the four main methodologies into which contemporary International Relations research can be divided, alongside neopositivism, critical realism, and analyticism.
Reflexivity and the status of the social sciences
Flanagan has argued that reflexivity complicates all three of the traditional roles that are typically played by a classical science: explanation, prediction and control. The fact that individuals and social collectivities are capable of self-inquiry and adaptation is a key characteristic of real-world social systems, differentiating the social sciences from the physical sciences. Reflexivity, therefore, raises real issues regarding the extent to which the social sciences may ever be viewed as "hard" sciences analogous to classical physics, and raises questions about the nature of the social sciences.
Methods for the implementation of reflexivity
A new generation of scholars has gone beyond (meta-)theoretical discussion to develop concrete research practices for the implementation of reflexivity. These scholars have addressed the ‘how to’ question by turning reflexivity from an informal process into a formal research practice. While most research focuses on how scholars can become more reflexive toward their positionality and situatedness, some have sought to build reflexive methods in relation to other processes of knowledge production, such as the use of language. The latter has been advanced by the work of Professor Audrey Alejandro in a trilogy on reflexive methods. The first article of the trilogy develops what is referred to as Reflexive Discourse Analysis, a critical methodology for the implementation of reflexivity that integrates discourse theory. The second article further expands the methodological tools for practicing reflexivity by introducing a three-stage research method for problematizing linguistic categories. The final piece of the trilogy adds a further method for linguistic reflexivity, namely the Reflexive Review. This method provides four steps that aim to add a linguistic and reflexive dimension to the practice of writing a literature review.
See also
References
Further reading
Bryant, C. G. A. (2002). "George Soros's theory of reflexivity: a comparison with the theories of Giddens and Beck and a consideration of its practical value", Economy and society, 31 (1), pp. 112–131.
Flanagan, O. J. (1981). "Psychology, progress, and the problem of reflexivity: a study in the epistemological foundations of psychology", Journal of the history of the behavioral sciences, 17, pp. 375–386.
Gay, D. (2009). Reflexivity and development economics. London: Palgrave Macmillan
Grunberg, E. and F. Modigliani (1954). "The predictability of social events", Journal of political economy, 62 (6), pp. 465–478.
Merton, R. K. (1948). "The self-fulfilling prophecy", Antioch Review, 8, pp. 193–210.
Merton, R. K. (1949/1957), Social theory and social structure. Rev. ed. The Free Press, Glencoe, IL.
Nagel, E. (1961), The structure of science: problems in the logic of scientific explanation, Harcourt, New York.
Popper, K. (1957), The poverty of historicism, Harper and Row, New York.
Simon, H. (1954). "Bandwagon and underdog effects of election predictions", Public opinion quarterly, 18, pp. 245–253.
Soros, G (1987) The alchemy of finance (Simon & Schuster, 1988) (paperback: Wiley, 2003)
Soros, G (2008) The new paradigm for financial markets: the credit crisis of 2008 and what it means (PublicAffairs, 2008)
Soros, G (2006) The age of fallibility: consequences of the war on terror (PublicAffairs, 2006)
Soros, G The bubble of American supremacy: correcting the misuse of American power (PublicAffairs, 2003) (paperback: PublicAffairs, 2004)
Soros, G George Soros on globalization (PublicAffairs, 2002) (paperback: PublicAffairs, 2005)
Soros, G (2000) Open society: reforming global capitalism (PublicAffairs, 2001)
Thomas, W. I. (1923), The unadjusted girl : with cases and standpoint for behavior analysis, Little, Brown, Boston, MA.
Thomas, W. I. and D. S. Thomas (1928), The child in America : behavior problems and programs, Knopf, New York.
Tsekeris, C. (2013). "Toward a chaos-friendly reflexivity", Entelequia, 16, pp. 71–89.
Woolgar, S. (1988). Knowledge and reflexivity: new frontiers in the sociology of knowledge. London and Beverly Hills: Sage.
Sociological terminology
Sociological theories
George Soros
Self-reference | 0.785374 | 0.994522 | 0.781072 |
Dogma | Dogma, in its broadest sense, is any belief held definitively and without the possibility of reform. It may be in the form of an official system of principles or doctrines of a religion, such as Judaism, Roman Catholicism, Protestantism, or Islam, the positions of a philosopher or philosophical school, such as Stoicism, and political belief systems such as fascism, socialism, progressivism, liberalism, and conservatism.
In the pejorative sense, dogma refers to enforced decisions, such as those of aggressive political interests or authorities. More generally, it is applied to some strong belief that its adherents are not willing to discuss rationally. This attitude is described as dogmatic, or as dogmatism, and is often used to refer to matters related to religion, though this pejorative sense strays far from the formal sense in which it is applied to religious belief. The pejorative sense is not limited to theistic attitudes alone and is often used with respect to political or philosophical dogmas.
Etymology
The word dogma was adopted in the 17th century from Latin dogma, itself borrowed from Ancient Greek δόγμα (dógma, "opinion, tenet"), from δοκεῖν (dokeîn, "to seem good, to think"). The plural dogmata is based on the Greek, though dogmas may be more commonly used in English.
In philosophy
Pyrrhonism
In Pyrrhonism, "dogma" refers to assent to a proposition about a non-evident matter. The main principle of Pyrrhonism is expressed by the word acatalepsia, which connotes the ability to withhold assent from doctrines regarding the truth of things in their own nature; against every statement its contradiction may be advanced with equal justification. Consequently, Pyrrhonists withhold assent with regard to non-evident propositions, i.e., dogmas. Pyrrhonists argue that dogmatists, such as the Stoics, Epicureans, and Peripatetics, have failed to demonstrate that their doctrines regarding non-evident matters are true.
In religion
Christianity
In Christianity, a dogma is a belief communicated by divine revelation and defined by the Church. The organization's formal religious positions may be taught to new members or simply communicated to those who choose to become members. It is rare for agreement with an organization's formal positions to be a requirement for attendance, though membership may be required for some church activities.
In the narrower sense of the church's official interpretation of divine revelation, theologians distinguish between defined and non-defined dogmas, the former being those set out by authoritative bodies such as the Roman Curia for the Catholic Church, the latter being those which are universally held but have not been officially defined, the nature of Christ as universal redeemer being an example. The term originated in late Greek philosophical and legal usage, in which it meant a decree or command, and came to be used in the same sense in early Christian theology. Protestants to differing degrees are less formal about doctrine, and often rely on denomination-specific beliefs, but seldom refer to these beliefs as dogmata. The first unofficial institution of dogma in the Christian church was by Saint Irenaeus in his Demonstration of Apostolic Teaching, which provides a 'manual of essentials' constituting the 'body of truth'.
Catholicism and Eastern Christianity
For Catholicism and Eastern Christianity, the dogmata are contained in the Nicene Creed and the canon laws of two, three, seven, or twenty ecumenical councils (depending on whether one is Church of the East, Oriental Orthodox, Eastern Orthodox, or Roman Catholic). These tenets are summarized by John of Damascus in his Exact Exposition of the Orthodox Faith, which is the third book of his main work, titled The Fount of Knowledge. In this book he takes a dual approach in explaining each article of the faith: one, directed at Christians, where he uses quotes from the Bible and, occasionally, from works of other Church Fathers, and the second, directed both at members of non-Christian religions and at atheists, for whom he employs Aristotelian logic and dialectics.
The decisions of fourteen later councils that Catholics hold as dogmatic and a small number of decrees promulgated by popes exercising papal infallibility (for examples, see Immaculate Conception and Assumption of Mary) are considered as being a part of the Catholic Church's sacred body of doctrine.
Judaism
In the Jewish commentary tradition, dogma is a principle by which the Rabbanim can try the proofs of faith about the existence of God and truth; dogma is what is necessarily true for rational thinking. In Jewish Kabbalah, a dogma is an archetype of the Pardes or Torah Nistar, the secrets of Bible. In the relation between "logical thinking" and "rational Kabbalah" the "Partzuf" is the means to identify "dogma".
Buddhism
View or position (Pali diṭṭhi, Sanskrit dṛṣṭi) is a central idea in Buddhism that corresponds with the Western notion of dogma. In Buddhist thought, a view is not a simple, abstract collection of propositions, but a charged interpretation of experience which intensely shapes and affects thought, sensation, and action. Having the proper mental attitude toward views is therefore considered an integral part of the Buddhist path, as sometimes correct views need to be put into practice and incorrect views abandoned, while at other times all views are seen as obstacles to enlightenment.
Islam
Taqlid is a term in Islam that refers to conforming to the teachings of a particular person. Classical usage of the term differs between Sunni Islam and Shia Islam. In Sunni Islam, taqlid refers to the unjustified conformity to the teachings of a person without inquiring or thinking about said teachings, rather than the justified conformity of a layperson to the teaching of mujtahid (a person who is qualified for independent reasoning). In Shia Islam, taqlid refers to the general conformity of non-mujtahid to the teaching of mujtahid, without a negative connotation. The discrepancy corresponds to differing views on Shia views on the Imamate and Sunni imams. Taqlid can be seen as a form of dogma, as no particular scholar can always be correct, so their rulings should not be taken uncritically.
See also
Faith – Confidence or trust, often characterized as without evidence
References
Bibliography
External links
Dogma – Strong's N.T. Greek Lexicon
Il Domani – terribile o radioso? – del Dogma , a book by Enrico Maria Radaelli with a Preface by Roger Scruton and comments by Brunero Gherardini, Alessandro Gnocchi-Mario Palmaro, and Mario Oliveri (Roma 2012)
Irenaeus. Demonstration of the Apostolic Preaching. pp. 70–75. [online] available at: Christian Classics ethereal library St. Irenaeus: Demonstration of the Apostolic Preaching – Christian Classics Ethereal Library [Accessed 20 June 2017]
Christian terminology
Epicureanism
Epistemology of religion
Greek words and phrases
Justification (epistemology)
Principles
Pyrrhonism
Religious belief and doctrine
Stoicism
Concepts in ancient Greek epistemology | 0.782444 | 0.997828 | 0.780744 |
Truth | Truth or verity is the property of being in accord with fact or reality. In everyday language, it is typically ascribed to things that aim to represent reality or otherwise correspond to it, such as beliefs, propositions, and declarative sentences.
Truth is usually held to be the opposite of falsehood. The concept of truth is discussed and debated in various contexts, including philosophy, art, theology, law, and science. Most human activities depend upon the concept, where its nature as a concept is assumed rather than being a subject of discussion, including journalism and everyday life. Some philosophers view the concept of truth as basic, and unable to be explained in any terms that are more easily understood than the concept of truth itself. Most commonly, truth is viewed as the correspondence of language or thought to a mind-independent world. This is called the correspondence theory of truth.
Various theories and views of truth continue to be debated among scholars, philosophers, and theologians. There are many different questions about the nature of truth which are still the subject of contemporary debates. These include the question of defining truth; whether it is even possible to give an informative definition of truth; identifying things as truth-bearers capable of being true or false; if truth and falsehood are bivalent, or if there are other truth values; identifying the criteria of truth that allow us to identify it and to distinguish it from falsehood; the role that truth plays in constituting knowledge; and, if truth is always absolute or if it can be relative to one's perspective.
Definition and etymology
The English word truth is derived from Old English tríewþ, tréowþ, trýwþ, Middle English trewþe, cognate to Old High German triuwida and Old Norse tryggð. Like troth, it is a -th nominalisation of the adjective true (Old English tréowe).
The English word true is from Old English (West Saxon) (ge)tríewe, tréowe, cognate to Old Saxon (gi)trûui, Old High German (ga)triuwu (Modern German treu "faithful"), Old Norse tryggr, Gothic triggws, all from a Proto-Germanic *trewwj- "having good faith", perhaps ultimately from PIE *dru- "tree", on the notion of "steadfast as an oak" (e.g., Sanskrit dā́ru "(piece of) wood"). Old Norse trú means "faith, word of honour; religious faith, belief" (compare archaic English troth "loyalty, honesty, good faith").
Thus, "truth" involves both the quality of "faithfulness, fidelity, loyalty, sincerity, veracity", and that of "agreement with fact or reality", in Anglo-Saxon expressed by sōþ (Modern English sooth).
All Germanic languages besides English have introduced a terminological distinction between truth "fidelity" and truth "factuality". To express "factuality", North Germanic opted for nouns derived from sanna "to assert, affirm", while continental West Germanic (German and Dutch) opted for continuations of wâra "faith, trust, pact" (cognate to Slavic věra "(religious) faith", but influenced by Latin verus). Romance languages use terms following the Latin veritas, while Greek alētheia, Russian pravda, South Slavic istina and Sanskrit sat (related to English sooth and North Germanic sanna) have separate etymological origins.
In some modern contexts, the word "truth" is used to refer to fidelity to an original or standard. It can also be used in the context of being "true to oneself" in the sense of acting with authenticity.
Major theories
The question of what is a proper basis for deciding how words, symbols, ideas and beliefs may properly be considered true, whether by a single person or an entire society, is dealt with by the five most prevalent substantive theories of truth listed below. Each presents perspectives that are widely shared by published scholars.
Theories other than the most prevalent substantive theories are also discussed. According to a November 2009 survey of professional philosophers and others on their philosophical views (3226 respondents, including 1803 philosophy faculty members and/or PhDs and 829 philosophy graduate students), 45% of respondents accept or lean toward correspondence theories, 21% accept or lean toward deflationary theories, and 14% accept or lean toward epistemic theories.
Substantive
Correspondence
Correspondence theories emphasize that true beliefs and true statements correspond to the actual state of affairs. This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other. It is a traditional model tracing its origins to ancient Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined in principle entirely by how it relates to "things" according to whether it accurately describes those "things". A classic example of correspondence theory is the statement by the thirteenth century philosopher and theologian Thomas Aquinas: "Veritas est adaequatio rei et intellectus" ("Truth is the adequation of things and intellect"), which Aquinas attributed to the ninth century Neoplatonist Isaac Israeli. Aquinas also restated the theory as: "A judgment is said to be true when it conforms to the external reality".
Correspondence theory centres heavily around the assumption that truth is a matter of accurately copying what is known as "objective reality" and then representing it in thoughts, words, and other symbols. Many modern theorists have stated that this ideal cannot be achieved without analysing additional factors. For example, language plays a role in that all languages have words to represent concepts that are virtually undefined in other languages. The German word Zeitgeist is one such example: one who speaks or understands the language may "know" what it means, but any translation of the word apparently fails to accurately capture its full meaning (this is a problem with many abstract words, especially those derived in agglutinative languages). Thus, some words add an additional parameter to the construction of an accurate truth predicate. Among the philosophers who grappled with this problem is Alfred Tarski, whose semantic theory is summarized further on.
Proponents of several of the theories below have gone further to assert that there are yet other issues necessary to the analysis, such as interpersonal power struggles, community interactions, personal biases, and other factors involved in deciding what is seen as truth.
Coherence
For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. So, for example, the completeness and comprehensiveness of the underlying set of concepts is a critical factor in judging the validity and usefulness of a coherent system. A pervasive tenet of coherence theories is the idea that truth is primarily a property of whole systems of propositions, and can be ascribed to individual propositions only according to their coherence with the whole. Among the assortment of perspectives commonly regarded as coherence theory, theorists differ on the question of whether coherence entails many possible true systems of thought or only a single absolute system.
Some variants of coherence theory are claimed to describe the essential and intrinsic properties of formal systems in logic and mathematics. Formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries. On the whole, coherence theories have been rejected for lacking justification in their application to other areas of truth, especially assertions about the natural world, empirical data in general, and practical matters of psychology and society, particularly when used without support from the other major theories of truth.
Coherence theories distinguish the thought of rationalist philosophers, particularly of Baruch Spinoza, Gottfried Wilhelm Leibniz, and Georg Wilhelm Friedrich Hegel, along with the British philosopher F. H. Bradley. They have found a resurgence also among several proponents of logical positivism, notably Otto Neurath and Carl Hempel.
Pragmatic
The three most influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles Sanders Peirce, William James, and John Dewey. Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they hold in common that truth is verified and confirmed by the results of putting one's concepts into practice.
Peirce defines it: "Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth." This statement stresses Peirce's view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and "reference to the future", are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions.
James' version of pragmatic theory, while complex, is often summarized by his statement that "the 'true' is only the expedient in our way of thinking, just as the 'right' is only the expedient in our way of behaving." By this, James meant that truth is a quality, the value of which is confirmed by its effectiveness when applying concepts to practice (thus, "pragmatic").
Dewey, less broadly than James but more broadly than Peirce, held that inquiry, whether scientific, technical, sociological, philosophical, or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine, and/or refute proposed truths.
Though not widely known, a new variation of the pragmatic theory was defined and wielded successfully from the 20th century forward. Defined and named by William Ernest Hocking, this variation is known as "negative pragmatism". Essentially, what works may or may not be true, but what fails cannot be true because the truth always works. Philosopher of science Richard Feynman also subscribed to it: "We never are definitely right, we can only be sure we are wrong." This approach incorporates many of the ideas from Peirce, James, and Dewey. For Peirce, the idea of "endless investigation would tend to bring about scientific belief" fits negative pragmatism in that a negative pragmatist would never stop testing. As Feynman noted, an idea or theory "could never be proved right, because tomorrow's experiment might succeed in proving wrong what you thought was right." Similarly, James and Dewey's ideas also ascribe truth to repeated testing which is "self-corrective" over time.
Pragmatism and negative pragmatism are also closely aligned with the coherence theory of truth in that any testing should not be isolated but rather incorporate knowledge from all human endeavors and experience. The universe is a whole and integrated system, and testing should acknowledge and account for its diversity. As Feynman said, "... if it disagrees with experiment, it is wrong."
Constructivist
Social constructivism holds that truth is constructed by social processes, is historically and culturally specific, and that it is in part shaped through the power struggles within a community. Constructivism views all of our knowledge as "constructed," because it does not reflect any external "transcendent" realities (as a pure correspondence theory might hold). Rather, perceptions of truth are viewed as contingent on convention, human perception, and social experience. It is believed by constructivists that representations of physical and biological reality, including race, sexuality, and gender, are socially constructed.
Giambattista Vico was among the first to claim that history and culture were man-made. Vico's epistemological orientation unfolds in one axiom: verum ipsum factum—"truth itself is constructed". Hegel and Marx were among the other early proponents of the premise that truth is, or can be, socially constructed. Marx, like many critical theorists who followed, did not reject the existence of objective truth, but rather distinguished between true knowledge and knowledge that has been distorted through power or ideology. For Marx, scientific and true knowledge is "in accordance with the dialectical understanding of history" and ideological knowledge is "an epiphenomenal expression of the relation of material forces in a given economic arrangement".
Consensus
Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person.
Among the current advocates of consensus theory as a useful accounting of the concept of "truth" is the philosopher Jürgen Habermas. Habermas maintains that truth is what would be agreed upon in an ideal speech situation. Among the current strong critics of consensus theory is the philosopher Nicholas Rescher.
Minimalist
Deflationary
Modern developments in the field of philosophy have resulted in the rise of a new thesis: that the term truth does not denote a real property of sentences or propositions. This thesis is in part a response to the common use of truth predicates (e.g., that some particular thing "...is true"), which was particularly prevalent in philosophical discourse on truth in the first half of the 20th century. From this point of view, to assert that "'2 + 2 = 4' is true" is logically equivalent to asserting that "2 + 2 = 4", and the phrase "is true" is—philosophically, if not practically (see the "Michael" example below)—completely dispensable in this and every other context. Truth predicates are rarely heard in common parlance, and using one in everyday conversation when asserting that something is true would strike most speakers as unusual. Newer perspectives that take this discrepancy into account, and work with sentence structures as actually employed in common discourse, can be broadly described:
as deflationary theories of truth, since they attempt to deflate the presumed importance of the words "true" or truth,
as disquotational theories, to draw attention to the disappearance of the quotation marks in cases like the above example, or
as minimalist theories of truth.
Whichever term is used, deflationary theories can be said to hold in common that "the predicate 'true' is an expressive convenience, not the name of a property requiring deep analysis." Once we have identified the truth predicate's formal features and utility, deflationists argue, we have said all there is to be said about truth. Among the theoretical concerns of these views is to explain away those special cases where it does appear that the concept of truth has peculiar and interesting properties. (See, e.g., Semantic paradoxes, and below.)
The scope of deflationary principles is generally limited to representations that resemble sentences. They do not encompass a broader range of entities that are typically considered true or otherwise. In addition, some deflationists point out that the concept employed in "...is true" formulations does enable us to express things that might otherwise require infinitely long sentences; for example, one cannot express confidence in Michael's accuracy by asserting the endless sentence:
Michael says, 'snow is white' and snow is white, or he says 'roses are red' and roses are red or he says ... etc.
This assertion can instead be succinctly expressed by saying: What Michael says is true.
Redundancy and related
An early variety of deflationary theory is the redundancy theory of truth, so-called because—in examples like those above, e.g. "snow is white [is true]"—the concept of "truth" is redundant and need not have been articulated; that is, it is merely a word that is traditionally used in conversation or writing, generally for emphasis, but not a word that actually equates to anything in reality. This theory is commonly attributed to Frank P. Ramsey, who held that the use of words like fact and truth was nothing but a roundabout way of asserting a proposition, and that treating these words as separate problems in isolation from judgment was merely a "linguistic muddle".
A variant of redundancy theory is the "disquotational" theory, which uses a modified form of the logician Alfred Tarski's schema: proponents observe that to say that "'P' is true" is to assert "P". A version of this theory was defended by C. J. F. Williams (in his book What is Truth?). Yet another version of deflationism is the prosentential theory of truth, first developed by Dorothy Grover, Joseph Camp, and Nuel Belnap as an elaboration of Ramsey's claims. They argue that utterances such as "that's true", when said in response to (e.g.) "it's raining", are "prosentences"—expressions that merely repeat the content of other expressions. In the same way that it means the same as my dog in the statement "my dog was hungry, so I fed it", that's true is supposed to mean the same as it's raining when the former is said in reply to the latter.
As noted above, proponents of these ideas do not necessarily follow Ramsey in asserting that truth is not a property; rather, they can be understood to say that, for instance, the assertion "P" may well involve a substantial truth—it is only the redundancy involved in statements such as "that's true" (i.e., a prosentence) which is to be minimized.
Performative
Attributed to philosopher P. F. Strawson is the performative theory of truth which holds that to say "'Snow is white' is true" is to perform the speech act of signaling one's agreement with the claim that snow is white (much like nodding one's head in agreement). The idea that some statements are more actions than communicative statements is not as odd as it may seem. For example, when a wedding couple says "I do" at the appropriate time in a wedding, they are performing the act of taking the other to be their lawful wedded spouse. They are not describing themselves as taking the other, but actually doing so (perhaps the most thorough analysis of such "illocutionary acts" is J. L. Austin, most notably in "How to Do Things With Words").
Strawson holds that a similar analysis is applicable to all speech acts, not just illocutionary ones: "To say a statement is true is not to make a statement about a statement, but rather to perform the act of agreeing with, accepting, or endorsing a statement. When one says 'It's true that it's raining,' one asserts no more than 'It's raining.' The function of [the statement] 'It's true that...' is to agree with, accept, or endorse the statement that 'it's raining.'"
Philosophical skepticism
Philosophical skepticism is generally any doubt of one or more items of knowledge or belief which ascribe truth to their assertions and propositions. The primary target of philosophical skepticism is epistemology, but it can be applied to any domain, such as the supernatural, morality (moral skepticism), and religion (skepticism about the existence of God).
Philosophical skepticism comes in various forms. Radical forms of skepticism deny that knowledge or rational belief is possible and urge us to suspend judgment regarding ascription of truth on many or all controversial matters. More moderate forms of skepticism claim only that nothing can be known with certainty, or that we can know little or nothing about the "big questions" in life, such as whether God exists or whether there is an afterlife. Religious skepticism is "doubt concerning basic religious principles (such as immortality, providence, and revelation)". Scientific skepticism concerns testing beliefs for reliability, by subjecting them to systematic investigation using the scientific method, to discover empirical evidence for them.
Pluralist
Several of the major theories of truth hold that there is a particular property the having of which makes a belief or proposition true. Pluralist theories of truth assert that there may be more than one property that makes propositions true: ethical propositions might be true by virtue of coherence, while propositions about the physical world might be true by corresponding to the objects and properties they are about.
Some of the pragmatic theories, such as those by Charles Peirce and William James, included aspects of correspondence, coherence and constructivist theories. Crispin Wright argued in his 1992 book Truth and Objectivity that any predicate which satisfied certain platitudes about truth qualified as a truth predicate. In some discourses, Wright argued, the role of the truth predicate might be played by the notion of superassertibility. Michael Lynch, in a 2009 book Truth as One and Many, argued that we should see truth as a functional property capable of being multiply manifested in distinct properties like correspondence or coherence.
Formal theories
Logic
Logic is concerned with the patterns of reasoning that can help tell whether a proposition is true or not. Logicians use formal languages to express the truths they are concerned with, and as such there is only truth under some interpretation, or truth within some logical system.
A logical truth (also called an analytic truth or a necessary truth) is a statement that is true in all possible worlds or under all possible interpretations, as contrasted to a fact (also called a synthetic claim or a contingency), which is only true in this world as it has historically unfolded. A proposition such as "If p and q, then p" is considered to be a logical truth because of the meaning of the symbols and words in it and not because of any fact of any particular world. They are such that they could not be untrue.
Degrees of truth in logic may be represented using two or more discrete values, as with bivalent logic (or binary logic), three-valued logic, and other forms of finite-valued logic. Truth in logic can be represented using numbers comprising a continuous range, typically between 0 and 1, as with fuzzy logic and other forms of infinite-valued logic. In general, the concept of representing truth using more than two values is known as many-valued logic.
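The continuous-valued case can be sketched with one common choice of fuzzy connectives (Zadeh's min/max/complement operators; other t-norms exist). The function names and the example degrees below are illustrative only.

```python
# A minimal sketch of degrees of truth in [0, 1], using Zadeh's
# min/max/complement operators; other families of fuzzy connectives exist.

def f_and(a, b):
    """Fuzzy conjunction: 'a and b' is only as true as its weaker conjunct."""
    return min(a, b)

def f_or(a, b):
    """Fuzzy disjunction: 'a or b' is as true as its stronger disjunct."""
    return max(a, b)

def f_not(a):
    """Fuzzy negation: complement with respect to full truth (1)."""
    return 1 - a

# Restricted to the classical values 0 and 1, these reduce to bivalent logic:
assert f_and(1, 0) == 0 and f_or(1, 0) == 1 and f_not(0) == 1

# With intermediate values, statements can hold to a degree:
tall, young = 0.7, 0.4            # degrees of "Alice is tall", "Alice is young"
assert f_and(tall, young) == 0.4  # "tall and young" holds only to degree 0.4
```

Note that on the classical subset {0, 1} the same operators behave exactly as bivalent logic, which is why fuzzy logic is described as a generalization rather than a rival.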
Mathematics
There are two main approaches to truth in mathematics. They are the model theory of truth and the proof theory of truth.
Historically, with the nineteenth-century development of Boolean algebra, mathematical models of logic began to treat "truth", also represented as "T" or "1", as an arbitrary constant. "Falsity" is also an arbitrary constant, which can be represented as "F" or "0". In propositional logic, these symbols can be manipulated according to a set of axioms and rules of inference, often given in the form of truth tables.
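The manipulation of "1" and "0" can be sketched by tabulating a formula over every interpretation of its variables; the helper below is illustrative, not a standard API. It also verifies the earlier example of a logical truth, "If p and q, then p", by checking that it comes out true under all four interpretations.

```python
from itertools import product

def truth_table(variables, formula):
    """Return (assignment, value) rows for the formula under every
    assignment of 1 (true) / 0 (false) to the given variable names."""
    rows = []
    for values in product([1, 0], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, formula(env)))
    return rows

# Material implication p -> q, defined from more primitive connectives:
implies = lambda p, q: max(1 - p, q)   # equivalent to (not p) or q

# "If p and q, then p" -- a logical truth, true under every interpretation:
table = truth_table(["p", "q"], lambda e: implies(min(e["p"], e["q"]), e["p"]))
assert len(table) == 4                          # 2 variables, 4 interpretations
assert all(value == 1 for _, value in table)    # a tautology
```

A synthetic claim such as plain "p and q", by contrast, would come out 1 in only one of the four rows, illustrating the distinction drawn above between logical truths and contingencies.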
In addition, from at least the time of Hilbert's program at the turn of the twentieth century to the proof of Gödel's incompleteness theorems and the development of the Church–Turing thesis in the early part of that century, true statements in mathematics were generally assumed to be those statements that are provable in a formal axiomatic system.
The works of Kurt Gödel, Alan Turing, and others shook this assumption, with the development of statements that are true but cannot be proven within the system. Two examples of the latter can be found in Hilbert's problems. Work on Hilbert's 10th problem led in the late twentieth century to the construction of specific Diophantine equations for which it is undecidable whether they have a solution, or even if they do, whether they have a finite or infinite number of solutions. More fundamentally, Hilbert's first problem was on the continuum hypothesis. Gödel and Paul Cohen showed that this hypothesis cannot be proved or disproved using the standard axioms of set theory. In the view of some, then, it is equally reasonable to take either the continuum hypothesis or its negation as a new axiom.
Gödel thought that the ability to perceive the truth of a mathematical or logical proposition is a matter of intuition, an ability he admitted could be ultimately beyond the scope of a formal theory of logic or mathematics and perhaps best considered in the realm of human comprehension and communication. But he commented, "The more I think about language, the more it amazes me that people ever understand each other at all".
Tarski's semantics
The semantic theory of truth has as its general case for a given language:
'P' is true if and only if P
where 'P' refers to the sentence (the sentence's name), and P is just the sentence itself.
Tarski's theory of truth (named after Alfred Tarski) was developed for formal languages, such as formal logic. Here he restricted it in this way: no language could contain its own truth predicate, that is, the expression is true could only apply to sentences in some other language. The latter he called an object language, the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences such as, "This sentence is not true". As a result, Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. Donald Davidson used it as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism.
Bertrand Russell is credited with noticing the existence of such paradoxes even in the best symbolic formations of mathematics in his day, in particular the paradox that came to be named after him, Russell's paradox. Russell and Whitehead attempted to solve these problems in Principia Mathematica by putting statements into a hierarchy of types, wherein a statement cannot refer to itself, but only to statements lower in the hierarchy. This in turn led to new orders of difficulty regarding the precise natures of types and the structures of conceptually possible type systems that have yet to be resolved to this day.
Kripke's semantics
Kripke's theory of truth (named after Saul Kripke) contends that a natural language can in fact contain its own truth predicate without giving rise to contradiction. He showed how to construct one as follows:
Beginning with a subset of sentences of a natural language that contains no occurrences of the expression "is true" (or "is false"). So, The barn is big is included in the subset, but not "The barn is big is true", nor problematic sentences such as "This sentence is false".
Defining truth just for the sentences in that subset.
Extending the definition of truth to include sentences that predicate truth or falsity of one of the original subset of sentences. So "The barn is big is true" is now included, but not either "This sentence is false" nor "'The barn is big is true' is true".
Defining truth for all sentences that predicate truth or falsity of a member of the second set. Imagine this process repeated infinitely, so that truth is defined for The barn is big; then for "The barn is big is true"; then for "'The barn is big is true' is true", and so on.
Truth never gets defined for sentences like This sentence is false, since it was not in the original subset and does not predicate truth of any sentence in the original or any subsequent set. In Kripke's terms, these are "ungrounded." Since these sentences are never assigned either truth or falsehood even if the process is carried out infinitely, Kripke's theory implies that some sentences are neither true nor false. This contradicts the principle of bivalence: every sentence must be either true or false. Since this principle is a key premise in deriving the liar paradox, the paradox is dissolved.
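The staged construction can be sketched on a toy language. All sentence names and the dictionary encoding below are illustrative; only positive truth ascriptions are modelled, and the liar is represented solely by its circular reference, which is all that matters for groundedness.

```python
# A toy sketch of Kripke's staged construction. Stage 0 fixes the
# truth-free sentences; each later stage extends truth to sentences
# that predicate truth of already-valued sentences.

base = {"the barn is big": True, "the barn is red": False}

# name -> the sentence it predicates truth of (illustrative encoding)
ascriptions = {
    "A": "the barn is big",   # A says: "'the barn is big' is true"
    "B": "A",                 # B says: "A is true"
    "liar": "liar",           # refers only to itself: never grounded
}

def kripke_stages(base, ascriptions, rounds=10):
    """Iterate the truth definition; ungrounded sentences never get a value."""
    valuation = dict(base)                     # stage 0: truth-free fragment
    for _ in range(rounds):                    # each pass is a further stage
        for name, target in ascriptions.items():
            if name not in valuation and target in valuation:
                valuation[name] = valuation[target]
    return valuation

v = kripke_stages(base, ascriptions)
assert v["A"] is True and v["B"] is True   # grounded after finitely many stages
assert "liar" not in v                     # ungrounded: neither true nor false
```

The key point the sketch illustrates is that the valuation stays partial: grounded sentences settle after finitely many stages, while the self-referential sentence is simply never assigned a value, rather than being assigned a contradictory one.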
The proof sketch for Gödel's first incompleteness theorem shows that self-reference cannot be avoided naively, since propositions about seemingly unrelated objects can have an informal self-referential meaning; in Gödel's work, these objects are integers, and they have an informal meaning regarding propositions. In fact, this idea—manifested by the diagonal lemma—is the basis for Tarski's theorem that truth cannot be consistently defined.
It has thus been claimed that Kripke's system indeed leads to contradiction: while its truth predicate is only partial, it does give truth value (true/false) to propositions such as the one built in Tarski's proof, and is therefore inconsistent. While there is still a debate on whether Tarski's proof can be implemented to every similar partial truth system, none have been shown to be consistent by acceptable methods used in mathematical logic.
Kripke's semantics are related to the use of topoi and other concepts from category theory in the study of mathematical logic. They provide a choice of formal semantics for intuitionistic logic.
Folk beliefs
The truth predicate "P is true" has great practical value in human language, allowing us to efficiently endorse or impeach claims made by others, to emphasize the truth or falsity of a statement, or to enable various indirect (Gricean) conversational implicatures. Individuals or societies will sometimes punish "false" statements to deter falsehoods; the oldest surviving law text, the Code of Ur-Nammu, lists penalties for false accusations of sorcery or adultery, as well as for committing perjury in court. Even four-year-old children can pass simple "false belief" tests and successfully assess that another individual's belief diverges from reality in a specific way; by adulthood there are strong implicit intuitions about "truth" that form a "folk theory" of truth. These intuitions include:
Capture (T-in): If P, then P is true
Release (T-out): If P is true, then P
Noncontradiction: A statement cannot be both true and false
Normativity: It is usually good to believe what is true
False beliefs: The notion that believing a statement does not necessarily make it true
Like many folk theories, the folk theory of truth is useful in everyday life but, upon deep analysis, turns out to be technically self-contradictory; in particular, any formal system that fully obeys "capture and release" semantics for truth (also known as the T-schema), and that also respects classical logic, is provably inconsistent and succumbs to the liar paradox or to a similar contradiction.
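The inconsistency can be made concrete with a brute-force check: under bivalence, the liar sentence L says "L is not true", so the capture-and-release (T-schema) intuitions demand that L's truth value equal its own negation. The function name below is illustrative.

```python
# A brute-force sketch of why the liar defeats the folk theory of truth.
# The liar sentence L asserts "L is not true"; by capture and release
# (the T-schema), L is true exactly when what it asserts holds.

def satisfies_liar(value_of_L):
    """Check whether assigning this classical truth value to L is
    consistent with the T-schema applied to 'L is not true'."""
    asserted = not value_of_L     # what L says: that L is not true
    return value_of_L == asserted # T-schema: L is true iff what it says holds

# Neither classical truth value works, so under bivalence the liar
# has no consistent truth value:
assert not satisfies_liar(True)
assert not satisfies_liar(False)
```

This two-line search is, in miniature, the reason any formal system combining full capture-and-release with classical logic is inconsistent: the liar forces an equation of the form v = not v, which no bivalent valuation solves.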
Views
Ancient Greek philosophy
Socrates', Plato's and Aristotle's ideas about truth are seen by some as consistent with correspondence theory. In his Metaphysics, Aristotle stated: "To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true". The Stanford Encyclopedia of Philosophy proceeds to say of Aristotle:
[...] Aristotle sounds much more like a genuine correspondence theorist in the Categories (12b11, 14b14), where he talks of "underlying things" that make statements true and implies that these "things" (pragmata) are logically structured situations or facts (viz., his sitting, his not sitting). Most influential is his claim in De Interpretatione (16a3) that thoughts are "likenesses" (homoiosis) of things. Although he nowhere defines truth in terms of a thought's likeness to a thing or fact, it is clear that such a definition would fit well into his overall philosophy of mind. [...]
Similar statements can also be found in Plato's dialogues (Cratylus 385b2, Sophist 263b).
Some Greek philosophers maintained that truth was either not accessible to mortals, or of greatly limited accessibility, forming early philosophical skepticism. Among these were Xenophanes, Democritus, and Pyrrho, the founder of Pyrrhonism, who argued that there was no criterion of truth.
The Epicureans believed that all sense perceptions were true, and that errors arise in how we judge those perceptions.
The Stoics conceived truth as accessible from impressions via cognitive grasping.
Medieval philosophy
Avicenna (980–1037)
In early Islamic philosophy, Avicenna (Ibn Sina) defined truth in his work Kitab Al-Shifa (The Book of Healing), Book I, Chapter 8, as:
Avicenna elaborated on his definition of truth later in Book VIII, Chapter 6:
This definition is but a rendering of the medieval Latin translation of the work by Simone van Riet. A modern translation of the original Arabic text states:
Aquinas (1225–1274)
Reevaluating Avicenna, and also Augustine and Aristotle, Thomas Aquinas stated in his Disputed Questions on Truth:
Thus, for Aquinas, the truth of the human intellect (logical truth) is based on the truth in things (ontological truth). Following this, he wrote an elegant re-statement of Aristotle's view in his Summa I.16.1:
Aquinas also said that real things participate in the act of being of the Creator God who is Subsistent Being, Intelligence, and Truth. Thus, these beings possess the light of intelligibility and are knowable. These things (beings; reality) are the foundation of the truth that is found in the human mind, when it acquires knowledge of things, first through the senses, then through the understanding and the judgement done by reason. For Aquinas, human intelligence ("intus", within, and "legere", to read) has the capability to reach the essence and existence of things because it has a non-material, spiritual element, although some moral, educational, and other elements might interfere with its capability.
Changing concepts of truth in the Middle Ages
Richard Firth Green examined the concept of truth in the later Middle Ages in his A Crisis of Truth, and concludes that roughly during the reign of Richard II of England the very meaning of the concept changes. The idea of the oath, which was so much part and parcel of, for instance, Romance literature, changes from a subjective concept to a more objective one (in Derek Pearsall's summary). Whereas truth (the "trouthe" of Sir Gawain and the Green Knight) was first "an ethical truth in which truth is understood to reside in persons", in Ricardian England it "transforms...into a political truth in which truth is understood to reside in documents".
Modern philosophy
Kant (1724–1804)
Immanuel Kant endorses a definition of truth along the lines of the correspondence theory of truth. Kant writes in the Critique of Pure Reason: "The nominal definition of truth, namely that it is the agreement of cognition with its object, is here granted and presupposed". He denies that this correspondence definition of truth provides us with a test or criterion to establish which judgements are true. He states in his logic lectures:
[...] Truth, it is said, consists in the agreement of cognition with its object. In consequence of this mere nominal definition, my cognition, to count as true, is supposed to agree with its object. Now I can compare the object with my cognition, however, only by cognizing it. Hence my cognition is supposed to confirm itself, which is far short of being sufficient for truth. For since the object is outside me, the cognition in me, all I can ever pass judgement on is whether my cognition of the object agrees with my cognition of the object.
The ancients called such a circle in explanation a diallelon. And actually the logicians were always reproached with this mistake by the sceptics, who observed that with this definition of truth it is just as when someone makes a statement before a court and in doing so appeals to a witness with whom no one is acquainted, but who wants to establish his credibility by maintaining that the one who called him as witness is an honest man. The accusation was grounded, too. Only the solution of the indicated problem is impossible without qualification and for every man. [...]
This passage makes use of his distinction between nominal and real definitions. A nominal definition explains the meaning of a linguistic expression. A real definition describes the essence of certain objects and enables us to determine whether any given item falls within the definition. Kant holds that the definition of truth is merely nominal and, therefore, we cannot employ it to establish which judgements are true. According to Kant, the ancient skeptics were critical of the logicians for holding that, by means of a merely nominal definition of truth, they can establish which judgements are true. They were trying to do something that is "impossible without qualification and for every man".
Hegel (1770–1831)
G. W. F. Hegel distanced his philosophy from empiricism by presenting truth as a self-moving process, rather than a matter of merely subjective thoughts. Hegel's truth is analogous to organics in that it is self-determining according to its own inner logic: "Truth is its own self-movement within itself."
Schopenhauer (1788–1860)
For Arthur Schopenhauer, a judgment is a combination or separation of two or more concepts. If a judgment is to be an expression of knowledge, it must have a sufficient reason or ground by which the judgment could be called true. Truth is the reference of a judgment to something different from itself which is its sufficient reason (ground). Judgments can have material, formal, transcendental, or metalogical truth. A judgment has material truth if its concepts are based on intuitive perceptions that are generated from sensations. If a judgment has its reason (ground) in another judgment, its truth is called logical or formal. If a judgment, of, for example, pure mathematics or pure science, is based on the forms (space, time, causality) of intuitive, empirical knowledge, then the judgment has transcendental truth. Finally, if a judgment is grounded in the formal conditions of all thought, such as the laws of identity, contradiction, and excluded middle, its truth is metalogical.
Kierkegaard (1813–1855)
When Søren Kierkegaard, as his character Johannes Climacus, ends his writings: My thesis was, subjectivity, heartfelt is the truth, he does not advocate for subjectivism in its extreme form (the theory that something is true simply because one believes it to be so), but rather that the objective approach to matters of personal truth cannot shed any light upon that which is most essential to a person's life. Objective truths are concerned with the facts of a person's being, while subjective truths are concerned with a person's way of being. Kierkegaard agrees that objective truths for the study of subjects like mathematics, science, and history are relevant and necessary, but argues that objective truths do not shed any light on a person's inner relationship to existence. At best, these truths can only provide a severely narrowed perspective that has little to do with one's actual experience of life.
While objective truths are final and static, subjective truths are continuing and dynamic. The truth of one's existence is a living, inward, and subjective experience that is always in the process of becoming. The values, morals, and spiritual approaches a person adopts, while not denying the existence of objective truths of those beliefs, can only become truly known when they have been inwardly appropriated through subjective experience. Thus, Kierkegaard criticizes all systematic philosophies which attempt to know life or the truth of existence via theories and objective knowledge about reality. As Kierkegaard claims, human truth is something that is continually occurring, and a human being cannot find truth separate from the subjective experience of one's own existing, defined by the values and fundamental essence that consist of one's way of life.
Nietzsche (1844–1900)
Friedrich Nietzsche believed the search for truth, or 'the will to truth', was a consequence of the will to power of philosophers. He thought that truth should be used as long as it promoted life and the will to power, and he thought untruth was better than truth if it had this life enhancement as a consequence. As he wrote in Beyond Good and Evil, "The falseness of a judgment is to us not necessarily an objection to a judgment... The question is to what extent it is life-advancing, life-preserving, species-preserving, perhaps even species-breeding..." (aphorism 4). He proposed the will to power as a truth only because, according to him, it was the most life-affirming and sincere perspective one could have.
Robert Wicks discusses Nietzsche's basic view of truth as follows:
[...] Some scholars regard Nietzsche's 1873 unpublished essay, "On Truth and Lies in a Nonmoral Sense" ("Über Wahrheit und Lüge im außermoralischen Sinn") as a keystone in his thought. In this essay, Nietzsche rejects the idea of universal constants, and claims that what we call "truth" is only "a mobile army of metaphors, metonyms, and anthropomorphisms." His view at this time is that arbitrariness completely prevails within human experience: concepts originate via the very artistic transference of nerve stimuli into images; "truth" is nothing more than the invention of fixed conventions for merely practical purposes, especially those of repose, security and consistence. [...]
Separately Nietzsche suggested that an ancient, metaphysical belief in the divinity of Truth lies at the heart of and has served as the foundation for the entire subsequent Western intellectual tradition: "But you will have gathered what I am getting at, namely, that it is still a metaphysical faith on which our faith in science rests—that even we knowers of today, we godless anti-metaphysicians still take our fire too, from the flame lit by the thousand-year old faith, the Christian faith which was also Plato's faith, that God is Truth; that Truth is 'Divine'..."
Moreover, Nietzsche challenges the notion of objective truth, arguing that truths are human creations and serve practical purposes. He wrote, "Truths are illusions about which one has forgotten that this is what they are." He argues that truth is a human invention, arising from the artistic transference of nerve stimuli into images, serving practical purposes like repose, security, and consistency; formed through metaphorical and rhetorical devices, shaped by societal conventions and forgotten origins: "What, then, is truth? A mobile army of metaphors, metonyms, and anthropomorphisms – in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically..."
Nietzsche argues that truth is always filtered through individual perspectives and shaped by various interests and biases. In his posthumously published notes he asserts, "There are no facts, only interpretations." He suggests that truth is subject to constant reinterpretation and change, influenced by shifting cultural and historical contexts.
Heidegger (1889–1976)
Other philosophers take this common meaning to be secondary and derivative. According to Martin Heidegger, the original meaning and essence of truth in Ancient Greece was unconcealment, or the revealing or bringing of what was previously hidden into the open, as indicated by the original Greek term for truth, aletheia. On this view, the conception of truth as correctness is a later derivation from the concept's original essence, a development Heidegger traces to the Latin term veritas. Owing to the primacy of ontology in Heidegger's philosophy, he considered this truth to lie within Being itself, and already in Being and Time (1927) had identified truth with "being-truth" or the "truth of Being" and partially with the Kantian thing-in-itself in an epistemology essentially concerning a mode of Dasein.
Sartre (1905–1980)
In Being and Nothingness (1943), partially following Heidegger, Jean-Paul Sartre identified our knowledge of the truth as a relation between the in-itself and the for-itself of being, a relation closely tied to the data available to the embodied individual in their interaction with the world and with others. Sartre's claim that "the world is human" allowed him to postulate that all truth is understood by self-consciousness as self-consciousness of something, a view anticipated by Henri Bergson in Time and Free Will (1889), a work whose reading Sartre credited for his interest in philosophy. This first existentialist theory of truth was more fully developed in Sartre's essay Truth and Existence (1948), which marks a more radical departure from Heidegger in its emphasis on the primacy of the idea, already formulated in Being and Nothingness, that existence precedes essence in the formulation of truth. It has nevertheless been critically examined as idealist rather than materialist: departing from more traditional idealist epistemologies such as those of Plato and Aristotle, while remaining, like Heidegger, with Kant.
Later, in Search for a Method (1957), which employed the unification of existentialism and Marxism that he would subsequently formulate in the Critique of Dialectical Reason (1960), Sartre, with his growing emphasis on the Hegelian totalisation of historicity, posited a conception of truth still defined by its process of relation to a container giving it material meaning, but with specific reference to a role in this broader totalisation, for "subjectivity is neither everything nor nothing; it represents a moment in the objective process (that in which externality is internalised), and this moment is perpetually eliminated only to be perpetually reborn": "For us, truth is something which becomes, it has and will have become. It is a totalisation which is forever being totalised. Particular facts do not signify anything; they are neither true nor false so long as they are not related, through the mediation of various partial totalities, to the totalisation in process." Sartre describes this as a "realistic epistemology", developed out of Marx's ideas but, in keeping with the theme of the whole work, possible only in an existentialist light. In an early segment of the lengthy two-volume Critique of 1960, Sartre continued to describe truth as a "totalising" "truth of history" to be interpreted by a "Marxist historian", while his break with Heidegger's epistemological ideas is finalised in the description of a seemingly antinomous "dualism of Being and Truth" as the essence of a truly Marxist epistemology.
Camus (1913–1960)
Albert Camus wrote in his essay The Myth of Sisyphus (1942) that "there are truths but no truth", in fundamental agreement with Nietzsche's perspectivism, and favourably cites Kierkegaard in posing that "no truth is absolute or can render satisfactory an existence that is impossible in itself". Later, in The Rebel (1951), he declared, akin to Sartre, that "the very lowest form of truth" is "the truth of history", but describes this in the context of its abuse; like Kierkegaard in the Concluding Unscientific Postscript, he criticizes Hegel for holding a historical attitude "which consists of saying: 'This is truth, which appears to us, however, to be error, but which is true precisely because it happens to be error. As for proof, it is not I, but history, at its conclusion, that will furnish it.'"
Whitehead (1861–1947)
Alfred North Whitehead, a British mathematician who became an American philosopher, said: "There are no whole truths; all truths are half-truths. It is trying to treat them as whole truths that plays the devil".
The implication of this line of thought is that truth itself can mislead, since half-truths are deceptive and may lead to a false conclusion.
Peirce (1839–1914)
Pragmatists like C. S. Peirce take truth to have some manner of essential relation to human practices for inquiring into and discovering truth, with Peirce himself holding that truth is what human inquiry would find out on a matter, if our practice of inquiry were taken as far as it could profitably go: "The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth..."
Nishida (1870–1945)
According to Kitaro Nishida, "knowledge of things in the world begins with the differentiation of unitary consciousness into knower and known and ends with self and things becoming one again. Such unification takes form not only in knowing but in the valuing (of truth) that directs knowing, the willing that directs action, and the feeling or emotive reach that directs sensing."
Fromm (1900–1980)
Erich Fromm finds that trying to discuss truth as "absolute truth" is sterile and that emphasis ought to be placed on "optimal truth". He considers truth as stemming from the survival imperative of grasping one's environment physically and intellectually, whereby young children instinctively seek truth so as to orient themselves in "a strange and powerful world". The accuracy of their perceived approximation of the truth will therefore have direct consequences on their ability to deal with their environment. Fromm can be understood to define truth as a functional approximation of reality. His vision of optimal truth is described partly in Man for Himself: An Inquiry into the Psychology of Ethics (1947), from which excerpts are included below.
the dichotomy between 'absolute = perfect' and 'relative = imperfect' has been superseded in all fields of scientific thought, where "it is generally recognized that there is no absolute truth but nevertheless that there are objectively valid laws and principles".
In that respect, "a scientifically or rationally valid statement means that the power of reason is applied to all the available data of observation without any of them being suppressed or falsified for the sake of the desired result". The history of science is "a history of inadequate and incomplete statements, and every new insight makes possible the recognition of the inadequacies of previous propositions and offers a springboard for creating a more adequate formulation."
As a result "the history of thought is the history of an ever-increasing approximation to the truth. Scientific knowledge is not absolute but optimal; it contains the optimum of truth attainable in a given historical period." Fromm furthermore notes that "different cultures have emphasized various aspects of the truth" and that increasing interaction between cultures allows for these aspects to reconcile and integrate, increasing further the approximation to the truth.
Colin Murray Turbayne (1916–2006)
For Colin Murray Turbayne, conceptual metaphors play a central role in the search for "objective truth" throughout the history of Western philosophical thought. In The Myth of Metaphor he argued that metaphorical constructs are essential to any language which lays claim to embody both richness and depth of understanding. He further argued that the mind is not a "tabula rasa" upon which "objective truth" becomes imprinted. Consequently, the failure to properly interpret metaphorical language, a "category mistake", ultimately serves to distort our understanding of truth. In addition, the failure to recognize dead metaphors introduces unnecessary obfuscation into the search for truth. This is most evident in the adoption of "substance" and "substratum" within René Descartes's dualism, the incorporation of metaphors for the "mind" and "language" by Plato and Aristotle into the writings of both George Berkeley and Immanuel Kant, and the emergence of the "procreation" metaphor in Plato's Timaeus within modern theories of both "thought" and "language". He concluded in his book Metaphors of the Mind: The Creative Mind and Its Origins that in each of these cases the use of deductive reasoning over time has distorted the underlying meaning of several ancient dead metaphors. In the process, mankind has misconstrued them as "objective truths" and become the unwitting victim of the very metaphors it initially created in its search for truth.
Foucault (1926–1984)
Truth, says Michel Foucault, is problematic when any attempt is made to see truth as an "objective" quality. He prefers not to use the term truth itself but "regimes of truth". In his historical investigations he found truth to be something that was itself a part of, or embedded within, a given power structure. Thus Foucault's view shares much in common with the concepts of Nietzsche. Truth for Foucault is also something that shifts through various epistemes throughout history.
Baudrillard (1929–2007)
Jean Baudrillard considered truth to be largely simulated: pretending to have something, as opposed to dissimulation, pretending not to have something. He took his cue from iconoclasts, whom he claims knew that images of God demonstrated that God did not exist. Baudrillard wrote in "The Precession of Simulacra":
The simulacrum is never that which conceals the truth—it is the truth which conceals that there is none. The simulacrum is true.
—Ecclesiastes
Some examples of simulacra that Baudrillard cited were: that prisons simulate the "truth" that society is free; scandals (e.g., Watergate) simulate that corruption is corrected; Disney simulates that the U.S. itself is an adult place. Though such examples seem extreme, such extremity is an important part of Baudrillard's theory. For a less extreme example, movies usually end with the bad being punished, humiliated, or otherwise failing, thus affirming for viewers the concept that the good end happily and the bad unhappily, a narrative which implies that the status quo and established power structures are largely legitimate.
Other contemporary positions
Truthmaker theory is "the branch of metaphysics that explores the relationships between what is true and what exists". It is different from substantive theories of truth in the sense that it does not aim at giving a definition of what truth is. Instead, it has the goal of determining how truth depends on being.
Theological views
Hinduism
In Hinduism, truth is defined as "unchangeable", "that which has no distortion", "that which is beyond distinctions of time, space, and person", "that which pervades the universe in all its constancy". The human body, therefore, is not completely true, as it changes with time. Hindu sages have offered many references to and explanations of truth that illuminate its varied facets, such as the national motto of India, "Satyameva Jayate" (Truth alone triumphs); "Satyam muktaye" (Truth liberates); the definition of satya as the benevolent use of words and the mind for the welfare of others; Patanjali's Yoga Sutras (2.36), "When one is firmly established in speaking truth, the fruits of action become subservient to him"; "The face of truth is covered by a golden bowl. Unveil it, O Pusan (Sun), so that I who have truth as my duty (satyadharma) may see it!" (Brihadaranyaka Upanishad V 15 1–4 and the brief Isha Upanishad 15–18); and the Manusmriti's statement that truth is superior to silence. Combined with other words, satya acts as a modifier, like ultra or highest, or more literally truest, connoting purity and excellence. For example, satyaloka is the "highest heaven" and Satya Yuga is the "golden age", the best of the four cyclical cosmic ages in Hinduism. A saying attributed to the Buddha, regarded in some Hindu traditions as the ninth incarnation of Vishnu, holds that "Three things cannot be long hidden: the sun, the moon, and the truth."
Buddhism
In Buddhism, particularly in the Mahayana tradition, the notion of truth is often divided into the two truths doctrine, which consists of relative or conventional truth and ultimate truth. The former refers to truth that is based on common understanding among ordinary people and is accepted as a practical basis for communication of higher truths. Ultimate truth necessarily transcends logic in the sphere of ordinary experience, and recognizes such phenomena as illusory. Mādhyamaka philosophy asserts that any doctrine can be analyzed with both divisions of truth. Affirmation and negation belong to relative and absolute truth respectively. Political law is regarded as relative, while religious law is absolute.
Christianity
Christianity has a soteriological view of truth. According to the Bible in John 14:6, Jesus is quoted as having said "I am the way, the truth and the life: no man cometh unto the Father, but by me".
See also
Asha
Confirmation holism
Contextualism
Degree of truth
Disposition
Eclecticism
Epistemic theories of truth
Imagination
Independence (probability theory)
Invariant (mathematics)
McNamara fallacy
Normative science
On Truth and Lies in a Nonmoral Sense
Perspectivism
Physical symbol system
Public opinion
Relativism
Religious views on truth
Revision theory
Slingshot argument
Subjectivity
Tautology (logic)
Tautology (rhetoric)
Theory of justification
Truth prevails
Truthiness
Unity of the proposition
Verisimilitude
Other theorists
Augustine of Hippo
Brand Blanshard
Hartry Field
Gottlob Frege
Paul Horwich
Harold Joachim
Karl Popper
Notes
References
Aristotle, "The Categories", Harold P. Cooke (trans.), pp. 1–109 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, 1938.
Aristotle, "On Interpretation", Harold P. Cooke (trans.), pp. 111–79 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, 1938.
Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181–531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, 1938.
Aristotle, "On the Soul" (De Anima), W. S. Hett (trans.), pp. 1–203 in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, 1936.
Audi, Robert (ed., 1999), The Cambridge Dictionary of Philosophy, Cambridge University Press, Cambridge, 1995. 2nd edition, 1999. Cited as CDP.
Baldwin, James Mark (ed., 1901–1905), Dictionary of Philosophy and Psychology, 3 volumes in 4, Macmillan, New York.
Baylis, Charles A. (1962), "Truth", pp. 321–22 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Benjamin, A. Cornelius (1962), "Coherence Theory of Truth", p. 58 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Blackburn, Simon, and Simmons, Keith (eds., 1999), Truth, Oxford University Press, Oxford. Includes papers by James, Ramsey, Russell, Tarski, and more recent work.
Chandrasekhar, Subrahmanyan (1987), Truth and Beauty. Aesthetics and Motivations in Science, University of Chicago Press, Chicago, IL.
Chang, C.C., and Keisler, H.J., Model Theory, North-Holland, Amsterdam, Netherlands, 1973.
Chomsky, Noam (1995), The Minimalist Program, MIT Press, Cambridge, Massachusetts.
Church, Alonzo (1962a), "Name Relation, or Meaning Relation", p. 204 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Church, Alonzo (1962b), "Truth, Semantical", p. 322 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Clifford, W.K. (1877), "The Ethics of Belief and Other Essays". (Prometheus Books, 1999), infidels.org
Dewey, John (1900–1901), Lectures on Ethics 1900–1901, Donald F. Koch (ed.), Southern Illinois University Press, Carbondale and Edwardsville, IL.
Dewey, John (1932), Theory of the Moral Life, Part 2 of John Dewey and James H. Tufts, Ethics, Henry Holt and Company, New York, 1908. 2nd edition, Holt, Rinehart, and Winston, 1932. Reprinted, Arnold Isenberg (ed.), Victor Kestenbaum (pref.), Irvingtion Publishers, New York, 1980.
Dewey, John (1938), Logic: The Theory of Inquiry (1938), Holt and Company, New York. Reprinted, John Dewey, The Later Works, 1925–1953, Volume 12: 1938, Jo Ann Boydston (ed.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1986.
Field, Hartry (2001), Truth and the Absence of Fact, Oxford University Press, Oxford.
Foucault, Michel (1997), Essential Works of Foucault, 1954–1984, Volume 1, Ethics: Subjectivity and Truth, Paul Rabinow (ed.), Robert Hurley et al. (trans.), The New Press, New York.
Garfield, Jay L., and Kiteley, Murray (1991), Meaning and Truth: The Essential Readings in Modern Semantics, Paragon House, New York.
Gupta, Anil (2001), "Truth", in Lou Goble (ed.), The Blackwell Guide to Philosophical Logic, Blackwell Publishers, Oxford.
Gupta, Anil and Belnap, Nuel. (1993). The Revision Theory of Truth. MIT Press.
Haack, Susan (1993), Evidence and Inquiry: Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford.
Habermas, Jürgen (1976), "What Is Universal Pragmatics?", 1st published, "Was heißt Universalpragmatik?", Sprachpragmatik und Philosophie, Karl-Otto Apel (ed.), Suhrkamp Verlag, Frankfurt am Main. Reprinted, pp. 1–68 in Jürgen Habermas, Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, 1979.
Habermas, Jürgen (1990), Moral Consciousness and Communicative Action, Christian Lenhardt and Shierry Weber Nicholsen (trans.), Thomas McCarthy (intro.), MIT Press, Cambridge, Massachusetts.
Habermas, Jürgen (2003), Truth and Justification, Barbara Fultner (trans.), MIT Press, Cambridge, Massachusetts.
Hegel, Georg (1977), The Phenomenology of Spirit, Oxford University Press, Oxford.
Horwich, Paul, (1988), Truth, 2nd edition, Oxford University Press, Oxford.
James, William (1904), A World of Pure Experience.
James, William (1907), Pragmatism, A New Name for Some Old Ways of Thinking, Popular Lectures on Philosophy, Longmans, Green, and Company, New York.
James, William (1909), The Meaning of Truth, A Sequel to 'Pragmatism', Longmans, Green, and Company, New York.
James, William (1912), Essays in Radical Empiricism. Cf. Chapt. 3, "The Thing and its Relations", pp. 92–122.
James, William (2014), William James on Habit, Will, Truth, and the Meaning of Life. James Sloan Allen (ed.), Frederic C. Beil, Publisher, Savannah, GA.
Kant, Immanuel (1800), Introduction to Logic. Reprinted, Thomas Kingsmill Abbott (trans.), Dennis Sweet (intro.), Barnes and Noble, New York, 2005.
Kirkham, Richard L. (1992), Theories of Truth: A Critical Introduction, MIT Press, Cambridge, Massachusetts.
Kneale, W., and Kneale, M. (1962), The Development of Logic, Oxford University Press, London, 1962. Reprinted with corrections, 1975.
Kreitler, Hans, and Kreitler, Shulamith (1972), Psychology of the Arts, Duke University Press, Durham, NC.
Le Morvan, Pierre (2004), "Ramsey on Truth and Truth on Ramsey", British Journal for the History of Philosophy, 12 (4) 2004, 705–18.
Peirce, C.S., Bibliography.
Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, Massachusetts, 1931–1935, 1958. Cited as CP vol.para.
Peirce, C.S. (1877), "The Fixation of Belief", Popular Science Monthly 12 (1877), 1–15. Reprinted (CP 5.358–387), (CE 3, 242–257), (EP 1, 109–123).
Peirce, C.S. (1901), "Truth and Falsity and Error" (in part), pp. 718–20 in J.M. Baldwin (ed.), Dictionary of Philosophy and Psychology, vol. 2. Reprinted, CP 5.565–573.
Polanyi, Michael (1966), The Tacit Dimension, Doubleday and Company, Garden City, NY.
Quine, W.V. (1956), "Quantifiers and Propositional Attitudes", Journal of Philosophy 53 (1956). Reprinted, pp. 185–96 in Quine (1976), Ways of Paradox.
Quine, W.V. (1976), The Ways of Paradox, and Other Essays, 1st edition, 1966. Revised and enlarged edition, Harvard University Press, Cambridge, Massachusetts, 1976.
Quine, W.V. (1980 a), From a Logical Point of View, Logico-Philosophical Essays, 2nd edition, Harvard University Press, Cambridge, Massachusetts.
Quine, W.V. (1980 b), "Reference and Modality", pp. 139–59 in Quine (1980 a), From a Logical Point of View.
Rajchman, John, and West, Cornel (ed., 1985), Post-Analytic Philosophy, Columbia University Press, New York.
Ramsey, F.P. (1927), "Facts and Propositions", Aristotelian Society Supplementary Volume 7, 153–70. Reprinted, pp. 34–51 in F.P. Ramsey, Philosophical Papers, David Hugh Mellor (ed.), Cambridge University Press, Cambridge, 1990.
Ramsey, F.P. (1990), Philosophical Papers, David Hugh Mellor (ed.), Cambridge University Press, Cambridge.
Rawls, John (2000), Lectures on the History of Moral Philosophy, Barbara Herman (ed.), Harvard University Press, Cambridge, Massachusetts.
Rorty, R. (1979), Philosophy and the Mirror of Nature, Princeton University Press, Princeton, NJ.
Russell, Bertrand (1912), The Problems of Philosophy, 1st published 1912. Reprinted, Galaxy Book, Oxford University Press, New York, 1959. Reprinted, Prometheus Books, Buffalo, NY, 1988.
Russell, Bertrand (1918), "The Philosophy of Logical Atomism", The Monist, 1918. Reprinted, pp. 177–281 in Logic and Knowledge: Essays 1901–1950, Robert Charles Marsh (ed.), Unwin Hyman, London, 1956. Reprinted, pp. 35–155 in The Philosophy of Logical Atomism, David Pears (ed.), Open Court, La Salle, IL, 1985.
Russell, Bertrand (1956), Logic and Knowledge: Essays 1901–1950, Robert Charles Marsh (ed.), Unwin Hyman, London, 1956. Reprinted, Routledge, London, 1992.
Russell, Bertrand (1985), The Philosophy of Logical Atomism, David Pears (ed.), Open Court, La Salle, IL.
Schopenhauer, Arthur (1974), On the Fourfold Root of the Principle of Sufficient Reason, Open Court, La Salle, IL.
Smart, Ninian (1969), The Religious Experience of Mankind, Charles Scribner's Sons, New York.
Tarski, A., Logic, Semantics, Metamathematics: Papers from 1923 to 1938, J.H. Woodger (trans.), Oxford University Press, Oxford, 1956. 2nd edition, John Corcoran (ed.), Hackett Publishing, Indianapolis, IN, 1983.
Wallace, Anthony F.C. (1966), Religion: An Anthropological View, Random House, New York.
Reference works
Audi, Robert (ed., 1999), The Cambridge Dictionary of Philosophy, Cambridge University Press, Cambridge, 1995. 2nd edition, 1999. Cited as CDP.
Blackburn, Simon (1996), The Oxford Dictionary of Philosophy, Oxford University Press, Oxford, 1994. Paperback edition with new Chronology, 1996. Cited as ODP.
Runes, Dagobert D. (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ, 1962.
Webster's New International Dictionary of the English Language, Second Edition, Unabridged (1950), W.A. Neilson, T.A. Knott, P.W. Carhart (eds.), G. & C. Merriam Company, Springfield, MA. Cited as MWU.
Webster's Ninth New Collegiate Dictionary (1983), Frederick C. Mish (ed.), Merriam–Webster Inc., Springfield, MA. Cited as MWC.
External links
An Introduction to Truth by Paul Newall, aimed at beginners.
Internet Encyclopedia of Philosophy:
"Truth"
"Pluralist Theories of Truth"
"Truthmaker Theory"
"Prosentential Theory of Truth"
Stanford Encyclopedia of Philosophy:
Truth
Coherence theory of truth
Correspondence theory of truth
Deflationary theory of truth
Identity theory of truth
Revision theory of truth
Tarski's definition of truth
Axiomatic theories of truth
Heidegger on Truth (Aletheia) as Unconcealment
History of Truth: The Greek "Aletheia"
History of Truth: The Latin "Veritas"
Concepts in epistemology
Concepts in logic
Metaphysical properties
Ethical principles
Meaning (philosophy of language)
Ontology
Mathematical logic
Philosophical logic
Reality
Theories of truth
Virtue
Idealism
Idealism in philosophy, also known as philosophical idealism or metaphysical idealism, is the set of metaphysical perspectives asserting that, most fundamentally, reality is equivalent to mind, spirit, or consciousness; that reality is entirely a mental construct; or that ideas are the highest type of reality or have the greatest claim to being considered "real". Because there are different types of idealism, it is difficult to define the term uniformly.
Indian philosophy contains some of the first defenses of idealism, such as in Vedanta and in Shaiva Pratyabhijña thought. These systems of thought argue for an all-pervading consciousness as the true nature and ground of reality. Idealism is also found in some streams of Mahayana Buddhism, such as in the Yogācāra school, which argued for a "mind-only" (cittamatra) philosophy on an analysis of subjective experience. In the West, idealism traces its roots back to Plato in ancient Greece, who proposed that absolute, unchanging, timeless ideas constitute the highest form of reality: Platonic idealism. This was revived and transformed in the early modern period by Immanuel Kant's arguments that our knowledge of reality is completely based on mental structures: transcendental idealism.
Epistemologically, idealism is accompanied by a rejection of the possibility of knowing the existence of any thing independent of mind. Ontologically, idealism asserts that the existence of all things depends upon the mind; thus, ontological idealism rejects the perspectives of physicalism and dualism. In contrast to materialism, idealism asserts the primacy of consciousness as the origin and prerequisite of all phenomena.
Idealism came under heavy attack in the West at the turn of the 20th century. The most influential critics were G. E. Moore and Bertrand Russell, but its critics also included the new realists and Marxists. The attacks by Moore and Russell were so influential that even more than 100 years later "any acknowledgment of idealistic tendencies is viewed in the English-speaking world with reservation." However, many aspects and paradigms of idealism did still have a large influence on subsequent philosophy.
Definitions
Idealism is a term with several related meanings. It comes via Latin idea from the Ancient Greek idea (ἰδέα) from idein (ἰδεῖν), meaning "to see". The term entered the English language by 1743. The term idealism was first used in the abstract metaphysical sense of the "belief that reality is made up only of ideas" by Christian Wolff in 1747. The term re-entered the English language in this abstract sense by 1796. A. C. Ewing gives this influential definition:
the view that there can be no physical objects existing apart from some experience...provided that we regard thinking as part of experience and do not imply by "experience" passivity, and provided we include under experience not only human experience but the so-called "Absolute Experience" or the experience of a God such as Berkeley postulates.
A more recent definition by Willem deVries sees idealism as "roughly, the genus comprises theories that attribute ontological priority to the mental, especially the conceptual or ideational, over the non-mental." As such, idealism entails a rejection of materialism (or physicalism) as well as the rejection of the mind-independent existence of matter (and as such, also entails a rejection of dualism).
There are two main definitions of idealism in contemporary philosophy, depending on whether its thesis is epistemic or metaphysical:
Metaphysical idealism or ontological idealism is the view which holds that all of reality is in some way mental (or spirit, reason, or will) or at least ultimately grounded in a fundamental basis which is mental. This is a form of metaphysical monism because it holds that there is only one type of thing in existence. The modern paradigm of a Western metaphysical idealism is Berkeley's immaterialism. Other such idealists are Hegel, and Bradley.
Epistemological idealism (or "formal" idealism) is a position in epistemology that holds that all knowledge is based on mental structures, not on "things in themselves". Whether a mind-independent reality is accepted or not, all that we have knowledge of are mental phenomena. The main source of Western epistemic idealist arguments is the transcendental idealism of Kant. Other thinkers who have defended epistemic idealist arguments include Ludwig Boltzmann and Brand Blanshard.
Thus, metaphysical idealism holds that reality itself is non-physical, immaterial, or experiential at its core, while epistemological idealist arguments merely affirm that reality can only be known through ideas and mental structures (without necessarily making metaphysical claims about things in themselves). Because of this, A.C. Ewing argued that instead of thinking about these two categories as forms of idealism proper, we should instead speak of epistemic and metaphysical arguments for idealism.
These two ways of arguing for idealism are sometimes combined together to defend a specific type of idealism (as done by Berkeley), but they may also be defended as independent theses by different thinkers. For example, while F. H. Bradley and McTaggart focused on metaphysical arguments, Josiah Royce, and Brand Blanshard developed epistemological arguments.
Furthermore, one might use epistemic arguments, but remain neutral about the metaphysical nature of things in themselves. This metaphysically neutral position, which is not a form of metaphysical idealism proper, may be associated with figures like Rudolf Carnap, Quine, Donald Davidson, and perhaps even Kant himself (though he is difficult to categorize). The most famous kind of epistemic idealism is associated with Kantianism and transcendental idealism, as well as with the related Neo-Kantian philosophies. Transcendental idealists like Kant affirm epistemic idealistic arguments without committing themselves to whether reality as such, the "thing in itself", is ultimately mental.
Types of metaphysical idealism
Within metaphysical idealism, there are numerous further sub-types, including forms of pluralism, which hold that there are many independent mental substances or minds, such as Leibniz' monadology, and various forms of monism or absolute idealism (e.g. Hegelianism or Advaita Vedanta), which hold that the fundamental mental reality is a single unity or is grounded in some kind of singular Absolute. Beyond this, idealists disagree on which aspects of the mental are more metaphysically basic. Platonic idealism affirms that ideal forms are more basic to reality than the things we perceive, while subjective idealists and phenomenalists privilege sensory experiences. Personalism meanwhile, sees persons or selves as fundamental.
A common distinction is between subjective and objective forms of idealism. Subjective idealists like George Berkeley reject the existence of a mind-independent or "external" world (though not the appearance of such phenomena in the mind). However, not all idealists restrict the real to subjective experience. Objective idealists make claims about a trans-empirical world, but simply deny that this world is essentially divorced from or ontologically prior to mind or consciousness as such. Thus, objective idealism asserts that the reality of experiencing includes and transcends the realities of the object experienced and of the mind of the observer.
Idealism is sometimes categorized as a type of metaphysical anti-realism or skepticism. However, idealists need not reject the existence of an objective reality that we can obtain knowledge of, and can merely affirm that this real natural world is mental. Thus, David Chalmers writes of anti-realist idealisms (which would include Berkeley's) and realist forms of idealism, such as "panpsychist versions of idealism where fundamental microphysical entities are conscious subjects, and on which matter is realized by these conscious subjects and their relations."
Chalmers further outlines the following taxonomy of idealism:

Micro-idealism is the thesis that concrete reality is wholly grounded in micro-level mentality: that is, in mentality associated with fundamental microscopic entities (such as quarks and photons). Macro-idealism is the thesis that concrete reality is wholly grounded in macro-level mentality: that is, in mentality associated with macroscopic (middle-sized) entities such as humans and perhaps non-human animals. Cosmic idealism is the thesis that concrete reality is wholly grounded in cosmic mentality: that is, in mentality associated with the cosmos as a whole or with a single cosmic entity (such as the universe or a deity).

Guyer et al. also distinguish between forms of idealism which are grounded in substance theory (often found in the Anglophone idealisms of the late nineteenth and twentieth centuries) and forms of idealism which focus on activities or dynamic processes (favored in post-Kantian German philosophy).
Classical Greek idealism
Pre-Socratic philosophy
There are some precursors of idealism in Ancient Greek philosophy, though scholars disagree on whether any of these thinkers can properly be labeled "idealists" in the modern sense.
One example is Anaxagoras (480 BC), who taught that all things in the universe (apeiron) were set in motion by nous ("mind"). In the Phaedo, Plato quotes him as saying, "it is intelligence [nous] that arranges and causes all things". Similarly, Parmenides famously stated that "thinking and being are the same". This has led some scholars, such as Hegel and E. D. Phillips, to label Parmenides an idealist.
Platonism and neoplatonism
Plato's theory of forms or "ideas" (eidos), as described in dialogues like the Phaedo, Parmenides, and Sophist, presents ideal forms (for example the Platonic solids in geometry, or abstractions like Goodness and Justice) as perfect beings, each of which "exists-by-itself" (Greek: auto kath’ auto), that is, independently of any particular instance (whether physical or in the individual thought of any person). Anything that exists in the world exists by participating in one of these unique ideas, which are nevertheless interrelated causally with the world of becoming, with nature. Arne Grøn calls this doctrine "the classic example of a metaphysical idealism as a transcendent idealism". Nevertheless, Plato holds that matter as perceived by us is real, though transitory, imperfect, and dependent on the eternal ideas for its existence. Because of this, some scholars have seen Plato as a dualist, though others disagree and favor a monist account.
The thought of Plato was widely influential, and later Late Platonist (or Neoplatonist) thinkers developed Platonism in new directions. Plotinus, the most influential of the later Platonists, wrote "Being and Intellect are therefore one nature" (Enneads V.9.8). According to scholars like Nathaniel Alfred Boll and Ludwig Noiré, with Plotinus, a true idealism which holds that only soul or mind exists appears for the first time in Western philosophy. Similarly, for Maria Luisa Gatti, Plotinus' philosophy is a "'contemplationist metaphysics', in which contemplation, as creative, constitutes the reason for the being of everything". For Neoplatonist thinkers, the first cause or principle is the Idea of the Good, i.e. The One, from which everything is derived in a hierarchical procession (proodos) (Enn. VI.7.15).
Judeo-Christian idealism
Some Christian theologians have held idealist views, often based on neoplatonism. Christian neoplatonism included figures like Pseudo-Dionysius the Areopagite, and influenced numerous Christian thinkers, including the Cappadocian Fathers and Augustine. Despite the influence of Aristotelian scholasticism from the 12th century onward, there is certainly a sense in which some medieval scholastic philosophers retained influences from the Platonic idealism that came via Augustine. For example, the work of John Scottus Eriugena (c. 800 – c. 877) has been interpreted as an idealistic philosophy by Dermot Moran who writes that for Scottus "all spatiotemporal reality is understood as immaterial, mind dependent, and lacking in independent existence". Scottus thus wrote: "the intellection of all things...is the being of all things".
Idealism was also defended in medieval Jewish philosophy. According to Samuel Lebens, early Hassidic rabbis like Yitzchak Luria (1534–72) defended a form of Kabbalistic idealism in which the world was God's dream or a fictional tale told by God.
Later Western theistic idealism, such as that of Hermann Lotze, offers a theory of the "world ground" in which all things find their unity; it has been widely accepted by Protestant theologians.
Several modern religious movements, for example the organizations within the New Thought Movement and the Unity Church, may be said to have a particularly idealist orientation. The theology of Christian Science includes a form of idealism: it teaches that all that truly exists is God and God's ideas; that the world as it appears to the senses is a distortion of the underlying spiritual reality, a distortion that may be corrected (both conceptually and in terms of human experience) through a reorientation (spiritualization) of thought.
Idealism in Eastern philosophy
There are currents of idealism throughout Indian philosophy, ancient and modern. Some forms of Hindu idealism (like Advaita) defend a type of monism or non-dualism, in which a single consciousness (brahman) is all that exists. However, other traditions defend a theistic pluralism (e.g. Shaiva Siddhanta), in which there are many selves (atman) and one God.
Buddhist idealism, on the other hand, is non-theistic and does not accept the existence of eternal selves (due to its adherence to the theory of not-self).
Hindu philosophy
A type of idealistic monism can be seen in the Upanishads, which often describe the ultimate reality of brahman as "being, consciousness, bliss" (Saccidānanda). The Chāndogya Upaniṣad teaches that everything is an emanation of the immortal brahman, which is the essence and source of all things, and is identical with the self (atman). The Bṛhadāraṇyaka Upaniṣad also describes brahman as awareness and bliss, and states that "this great being (mahad bhūtam) without an end, boundless (apāra), [is] nothing but vijñāna [consciousness]."
Idealist notions can be found in different schools of Hindu philosophy, including some schools of Vedanta. Other schools like the Samkhya and Nyaya-Vaisheshika, Mimamsa, Yoga, Vishishtadvaita, Dvaita, and others opposed idealism in favor of realism.
Different schools of Vedanta have different interpretations of brahman-atman, their foundational theory. Advaita Vedanta posits an absolute idealistic monism in which reality is one single absolute existence. Thus, brahman (the ultimate ground of all) is absolutely identical with all atmans (individual selves). Other forms of Vedanta like the Vishishtadvaita of Ramanuja and the Bhedabheda of Bhāskara are not as radical in their non-dualism, accepting that there is a certain difference between individual souls and Brahman.
Advaita
The most influential Advaita philosopher was Ādi Śaṅkara (788–820). In his philosophy, brahman is the single non-dual foundation (adhiṣṭhana) for all existence. This reality is independent, self-established, irreducible, immutable, and free of space, time, and causation. In comparison to this reality, the world of plurality and appearances is illusory (maya), an unreal cognitive error (mithya). This includes all individual souls or selves, which are actually unreal and numerically identical to the one brahman.
Śaṅkara did not believe it was possible to prove the view that reality is "one only, without a second" (Chandogya 6.2.1) through independent philosophical reasoning. Instead, he accepts non-duality based on the authority of the Upaniṣads. As such, most of his extant works are scriptural commentaries.
Nevertheless, he did provide various new arguments to defend his theories. A major metaphysical distinction for Śaṅkara is between what changes and may thus be negated (the unreal) and what does not (which is what is truly real). He compares the real to clay (the substantial cause, analogous to brahman) and the unreal to a pot which depends on the clay for its being (analogous to all impermanent things in the universe). By relying on dependence relations and on the reality of persistence, Śaṅkara concludes that metaphysical foundations are more real than their impermanent effects, and that effects are fully reducible and indeed identical to their metaphysical foundation. Through this argument from dependence, Śaṅkara concludes that since all things in the universe undergo change, they must depend on some really existent cause for their being, and this is the one primordial undifferentiated existence (Chandogya Bhāṣya, 6.2.1–2). This one reality is the single cause that is in every object, and every thing is not different from this brahman since all things borrow their existence from it. Śaṅkara also provides a cosmogony in which the world passes from an unmanifest state, which is like deep dreamless sleep, into a state in which īśvara (God) dreams the world into existence. As such, the world is not separate from God's mind.
Śaṅkara's philosophy, along with that of his contemporary Maṇḍana Miśra (c. 8th century CE), is at the foundation of the Advaita school. The opponents of this school, however, labeled him a māyāvādin (illusionist) for negating the reality of the world. They also criticized what they saw as a problematic explanation for how the world arises from māyā as an error. For them, if māyā is in brahman, then brahman has ignorance, but if it is not in brahman, then this collapses into a dualism of brahman and māyā.
Other idealist schools
Perhaps the most influential critic of Advaita was Rāmānuja (c. 1017 – c. 1137), the main philosopher of the competing Viśiṣṭādvaita (qualified non-dual) school. His philosophy affirms the reality of the world and of individual selves, while also affirming an underlying unity of all things with God. One of Rāmānuja's critiques of Advaita is epistemological. If, as Advaita argues, all cognition other than pure undifferentiated consciousness is based in error, then it follows that we would have no knowledge of the very fact that all individual cognition is error (Śrī Bhāṣya, I.i.1).
Furthermore, Rāmānuja argues contra Advaita that individual selves are real and not illusory. This is because the very idea that an individual can be ignorant presupposes the existence of that individual. Moreover, since all Vedāntins agree that Brahman's nature is knowledge, consciousness, and being, to say that brahman is ignorant is absurd, and so it must be individual souls which are ignorant. Thus, there must be individual selves with a metaphysically prior existence who then fall into ignorance (Śrī Bhāṣya, I.i.1.). Selves might be individual, but as the Vedas state, they still share a sense of unity with brahman. For Rāmānuja, this is because selves are distinct modes or qualities in the cosmic body of Brahman (and are thus different from and yet united with brahman). Brahman meanwhile is like the soul in the body of the world. Furthermore, brahman is a theistic creator God for Rāmānuja, which really exists as the union of two deities: Vishnu and Lakṣmī.
The philosophy of the Tantric tradition of Trika Shaivism is a non-dual theistic idealism. The key thinkers of this philosophical tradition, known as the Pratyabhijñā (Recognition) school, are the Kashmirian philosophers Utpaladeva (c. 900–950 CE) and Abhinavagupta (975–1025 CE). This tradition affirms a non-dual monism which sees God (Shiva) as a single cosmic consciousness. All selves (atman) are one with God, but they have forgotten this, and must recognize their true nature in order to reach liberation.
Unlike in Advaita Vedanta however, the one cosmic consciousness is active and dynamic, consisting of spontaneous vibration (spanda) since it has the quality of absolute freedom (svātantrya). Through the power (Śakti) of dynamic vibrations, the absolute (Shiva-Śakti, consciousness and its power) creates the world, and so, the world is a real manifestation of absolute consciousness. Thus, in this system, the world and individual selves (which are dynamic, not an unchanging witness) are not an unreal illusion, but are seen as real and active expressions of God's creative freedom.
Idealism has remained influential in modern Hindu philosophy, especially in Neo-Vedanta modernism. Prominent modern defenders include Ram Mohan Roy (1772–1833), Vivekananda (1863–1902), Sarvepalli Radhakrishnan (An Idealist View of Life, 1932) and Aurobindo (1872–1950).
Buddhist philosophy
Buddhist views reminiscent of idealism appear in Mahayana scriptures like the Explanation of the Profound Secrets, Descent into Laṅka, and Ten Stages Sutra. These theories, known as "mind-only" (cittamatra) or "the consciousness doctrine" (vijñanavada) were mostly associated with the Indian Buddhist philosophers of the Yogācāra school and the related epistemological school (Pramāṇavāda). These figures include: Vasubandhu, Asaṅga, Dignāga, Dharmakīrti, Sthiramati, Dharmapāla, Jñānaśrīmitra, Śaṅkaranandana, and Ratnākaraśānti. Their arguments were a lively subject of debate for Buddhist and non-Buddhist philosophers in India for centuries. These discussions had a lasting influence on the later Buddhist philosophy of East Asian Buddhism and Tibetan Buddhism.
There is some modern scholarly disagreement about whether Indian Yogācāra Buddhism can be said to be a form of idealism. Some writers, like the philosopher Jay Garfield and the German philologist Lambert Schmithausen, argue that Indian Yogacarins are metaphysical idealists who reject the existence of a mind-independent external world. Others see them as closer to an epistemic idealist like Kant, who holds that our knowledge of the world is simply knowledge of our own concepts and perceptions. However, a major difference here is that while Kant holds that the thing-in-itself is unknowable, Indian Yogacarins held that ultimate reality is knowable, but only through the non-conceptual yogic perception of a highly trained meditative mind. Other scholars, like Dan Lusthaus and Thomas Kochumuttom, see Yogācāra as a kind of phenomenology of experience which seeks to understand how suffering (dukkha) arises in the mind, not to provide a metaphysics.
Vasubandhu
Whatever the case, the works of Vasubandhu (fl. c. 360) certainly include a refutation of mind-independent "external" objects (Sanskrit: bāhyārtha) and argue that the true nature of reality is beyond subject-object distinctions. He views ordinary conscious experience as deluded in its perceptions of an external world separate from itself (which does not exist), and instead argues that all that exists is vijñapti (ideas, mental images, conscious appearances, representations). Vasubandhu begins his Twenty Verses (Viṃśikā) by affirming that "all this [everything we take to exist] is mere appearance of consciousness [vijñapti], because of the appearance of non-existent objects, just as a man with an eye disease sees non-existent hairs" (Viṃś. 1). His main argument against external objects is a critique of the atomist theories of his realist opponents (Nyāya and Abhidharma theorists).
Vasubandhu also responds to three objections to his view that all appearances are caused by mind: (1) the issue of spatio-temporal continuity, (2) accounting for intersubjectivity, and (3) the causal efficacy of matter on subjects. For the first and third objections, Vasubandhu responds by arguing that dreams can also include spatio-temporal continuity, regularity, and causal efficacy. Regarding intersubjectivity, Vasubandhu appeals to shared karma as well as mind-to-mind causation. After answering these objections, Vasubandhu argues that idealism is a better explanation than realism for everyday experiences. To do this, he relies on the Indian "Principle of Lightness" (an appeal to parsimony like Occam's razor) and argues that idealism is the "lighter" theory since it posits a smaller number of entities. This is thus an argument from simplicity and an inference to the best explanation (i.e. an abductive argument).
As such, he affirms that our usual experience of being a self (ātman) that knows objects is an illusory construct, and this constitutes what he calls the "imagined nature" aspect of reality.
Thus, for Vasubandhu, there is a more fundamental "root consciousness" that is empty of subject-object distinctions and yet originates all experiences "just as waves originate on water" (Thirty Verses, Triṃś. 17). However, Vasubandhu sees this philosophy as a mere conventional description, since ultimate reality is "inconceivable" (Triṃś. 29), an ineffable and non-conceptual "thusness" which cannot be fully captured in words and can only be known through meditative realization by yogis ("yogacaras", hence the name of his school). This is why certain modern interpreters, like Jonathan Gold, see Vasubandhu's thought as a "conventionalist idealism" or even a type of epistemic idealism like Kant's (and not a full-blown objective idealism).
The Buddhist epistemologists
Buddhist arguments against external objects were further expanded and sharpened by later figures like Dignāga (fl. 6th century) and Dharmakīrti (fl. 7th century) who led an epistemological turn in medieval Indian philosophy.
Dignāga's main arguments against external objects (specifically, atomic particles) are found in his Ālambanaparīkṣā (Examination of the Object of Consciousness). Dignāga argues that for something to be an object (ālambana) of a conscious state, that object must be causally related to the consciousness and it must resemble that consciousness (in appearance or content). Dignāga then attempts to show that realism about external particulars cannot satisfy these two conditions. Since individual atoms lack a resemblance to the conscious state they supposedly cause, they cannot be the object of cognition. Furthermore, aggregates of atoms also cannot be the object, since they are merely a conceptual grouping of individual atoms (and thus, unreal), and only atoms have causal efficacy.
Dharmakīrti's view is summed up in the Pramāṇavārttika (Commentary on Epistemology) as follows: "cognition experiences itself, and nothing else whatsoever. Even the particular objects of perception, are by nature just consciousness itself." One of his main arguments for idealism is the inference from "the necessity of things only ever being experienced together with experience" (Sanskrit: sahopalambhaniyama). Dharmakīrti concisely states this argument in the Ascertainment of Epistemology (Pramāṇaviniścaya): "blue and the consciousness of blue are not different, because they must always be apprehended together." Since an object is never found independently of consciousness, objects cannot be mind-independent. This can be read as an epistemological argument for idealism which attempts to show there is no good reason (empirical or inferential) to accept the existence of external objects.
Most of the Yogācāra thinkers and epistemologists (including Dharmakīrti) defended the existence of multiple mindstreams, and even tackled the problem of other minds. As such, thinkers like Dharmakīrti were pluralists who held there were multiple minds in the world (in this they differ from Hindu Advaita thinkers, who held there was a single cosmic consciousness). However, there was a certain sub-school of Indian Buddhists, exemplified by Prajñakaragupta, Jñānaśrīmitra (fl. 975–1025 CE) and Ratnakīrti (11th century CE), who were not pluralists. In his Refutation of Other Mindstreams (Santānāntaradūṣaṇa), Ratnakīrti argues that the existence of other minds cannot be established ultimately, and as such ultimate reality must be an undifferentiated non-dual consciousness (vijñānādvaita). This monistic interpretation of Yogācāra is known as the Citrādvaitavāda school (the view of variegated non-duality) since it sees reality as a single multifaceted non-dual luminosity (citrādvaitaprakāśa).
Chinese philosophy
In Chinese philosophy, Yogācāra idealism was defended by Chinese Buddhists like Xuanzang (602–664) and his students Kuiji (632–682) and Wŏnch'ŭk (613–696). Xuanzang had studied Yogācāra Buddhism at the great Indian university of Nalanda under the Indian philosopher Śīlabhadra. His work, especially The Demonstration of Consciousness-only, was pivotal in the establishment of East Asian Yogācāra Buddhism (also known as "consciousness only", Ch: Weishi 唯識), which in turn influenced East Asian Buddhist thought in general.
Yogācāra Buddhism also influenced the thought of other Chinese Buddhist philosophical traditions, such as Huayan, Tiantai, Pure Land, and Zen. Many Chinese Buddhist traditions like Huayan, Zen, and Tiantai were also strongly influenced by an important text called the Awakening of Faith in the Mahāyāna, which synthesized consciousness-only idealism with buddha-nature thought. This text promoted an influential theory of mind which holds that all phenomena are manifestations of the "One Mind". Some scholars have seen this as an ontological monism. One passage from the text states: "the three worlds are illusory constructs, created by the mind alone" and "all dharmas are produced from the mind's giving rise to false thoughts". Jorgensen et al. note that this indicates metaphysical idealism. The new philosophical trend ushered in by the Awakening of Faith was resisted by some Chinese Yogācāra thinkers, and the debates between the Yogācāra school of Xuanzang and those who instead followed the doctrines of the Awakening of Faith continued until the modern era. These debates happened in China as well as in Japan and Korea.
The doctrine that all phenomena arise from an ultimate principle, the One Mind, was adapted by the influential Huayan school, whose thought is exemplified by thinkers such as Fazang (643–712) and Zongmi (780–841). This tradition also promoted a kind of holism which sees every phenomenon in the cosmos as interfused and interconnected with every other phenomenon. Chinese scholars like Yu-lan Fung and Wing-tsit Chan see Huayan philosophy as a form of idealism, though other scholars have defended alternative interpretations. According to Wing-tsit Chan, since Huayan patriarch Fazang sees the One Mind as the basis for all things, including the external world, his system is one of objective idealism. A key distinction between Huayan's view of the world and that of the Yogācāra school is that in Huayan, there is a single intersubjective world (which nevertheless arises from mind), while Yogācāra holds that each mindstream projects its own world out of their underlying root consciousness.
Chinese Buddhist idealism also influenced Confucian philosophy through the work of thinkers like the Ming era (1368–1644) Neo-Confucian Wang Yangming (1472–1529). Wang's thought has been interpreted as a kind of idealism. According to Wang, the ultimate principle or pattern (lǐ) of the whole universe is identical with the mind, which forms one body or substance (yì tǐ) with "Heaven, Earth, and the myriad creatures" of the world. Wang argues that only this view can explain the fact that human beings experience innate care and benevolence for others as well as a sense of care for inanimate objects. Wang's thought, along with that of Lu Xiangshan, led to the creation of the School of Mind, an important Neo-Confucian tradition which emphasized these idealist views.
Yogācāra idealism saw a revival in the 20th century, associated with figures like Yang Wenhui (1837–1911), Taixu, Liang Shuming, Ouyang Jingwu (1870–1943), Wang Xiaoxu (1875–1948), and Lu Cheng. Modern Chinese thinkers associated with consciousness-only thought linked the philosophy with Western philosophy (especially Hegelian and Kantian thought) and modern science. A similar trend occurred among some Japanese philosophers like Inoue Enryō, who linked East Asian philosophies like Huayan with the philosophy of Hegel.
Both modern Chinese Buddhists and New Confucian thinkers participated in this revival of consciousness-only studies. The thought of New Confucians like Xiong Shili, Ma Yifu, Tang Junyi and Mou Zongsan, was influenced by Yogācāra consciousness-only philosophy, as well as by the metaphysics of the Awakening of Faith in the Mahāyāna, though their thought also contained many critiques of Buddhist philosophy.
Modern philosophy
It is only in the modern era that idealism became a central topic of argumentation among Western philosophers. This was also when the term "idealism" was coined by Christian Wolff (1679–1754), though thinkers like Berkeley had already argued for the view under different names.
Idealistic tendencies can be found in the work of some rationalist philosophers, like Leibniz and Nicolas Malebranche (though they did not use the term). Malebranche argued that Platonic ideas (which exist only in the mind of God) are the ultimate ground of our experiences and of the physical world, a view that prefigures later idealist positions. Some scholars also see Leibniz' philosophy as approaching idealism. Guyer et al. write that "his view that the states of monads can be only perceptions and appetitions (desires) suggests a metaphysical argument for idealism, while his famous thesis that each monad represents the entire universe from its own point of view might be taken to be an epistemological ground for idealism, even if he does not say as much." However, there is still much debate in the contemporary scholarly literature on whether Leibniz can be considered an idealist.
Subjective idealism
One famous proponent of modern idealism was Bishop George Berkeley (1685–1753), an Anglo-Irish philosopher who defended a theory he called immaterialism. This kind of idealism is sometimes also called subjective idealism (also known as phenomenalistic idealism).
Berkeley held that objects exist only to the extent that a mind perceives them and thus the physical world does not exist outside of mind. Berkeley's epistemic argument for this view (found in his A Treatise Concerning the Principles of Human Knowledge) rests on the premise that we can only know ideas in the mind. Thus, knowledge does not extend to mind-independent things (Treatise, 1710: Part I, §2). From this, Berkeley holds that "the existence of an idea consists in being perceived", thus, regarding ideas "their esse is percipi", that is, to be is to be perceived (1710: Part I, §3).
Based on this restriction of existence to only what is being perceived, Berkeley holds that it is meaningless to think that there could exist objects that are not being perceived. This is the basic idea behind what has been called Berkeley's "master argument" for idealism, which states that "one cannot conceive of anything existing unconceived because in trying to do so one is still conceiving of the object" (1710: Part I, §23). As to the question of how objects which are currently not being perceived by individual minds persist in the world, Berkeley answers that a single eternal mind keeps all of physical reality stable (and causes ideas in the first place), and this is God.
Berkeley also argued for idealism based on a second key premise: "an idea can be like nothing but an idea" and as such there cannot be any things without or outside mind. This is because for something to be like something else, there must be something they have in common. If something is mind independent, then it must be completely different from ideas. Thus, there can be no relation between ideas in the mind and things "without the mind", since they are not alike. As Berkeley writes, "...I ask whether those supposed originals or external things, of which our ideas are the pictures or representations, be themselves perceivable or no? if they are, then they are ideas, and we have gained our point; but if you say they are not, I appeal to any one whether it be sense, to assert a colour is like something which is invisible; hard or soft, like something which is intangible; and so of the rest." (1710: Part I, §8).
A similar idealistic philosophy was developed at around the same time as Berkeley by Anglican priest and philosopher Arthur Collier (Clavis Universalis: Or, A New Inquiry after Truth, Being a Demonstration of the Non-Existence, or Impossibility, of an External World, 1713). Collier claimed to have developed his view that all matter depends on mind independently of Berkeley. Paul Brunton, a British philosopher and mystic, also taught a similar type of idealism called "mentalism".
A. A. Luce and John Foster are other subjective idealists. Luce, in Sense without Matter (1954), attempts to bring Berkeley up to date by modernizing his vocabulary and putting the issues he faced in modern terms, and treats the Biblical account of matter and the psychology of perception and nature. Foster's The Case for Idealism argues that the physical world is the logical creation of natural, non-logical constraints on human sense-experience. Foster's latest defense of his views (phenomenalistic idealism) is in his book A World for Us: The Case for Phenomenalistic Idealism.
Criticisms of subjective idealism can be found in Bertrand Russell's popular 1912 book The Problems of Philosophy, as well as in the work of the Australian philosopher David Stove, Alan Musgrave, and John Searle.
Epistemic idealism
Kant's Transcendental idealism
Transcendental idealism was developed by Immanuel Kant (1724–1804), who was the first philosopher to label himself an "idealist". In his Critique of Pure Reason, Kant was careful to distinguish his view (which he also called "critical" and "empirical realism") from Berkeley's idealism and from Descartes' views. Kant's philosophy holds that we only have knowledge of our experiences, which consist jointly of intuitions and concepts. As such, our experiences reflect our cognitive structures, not the intrinsic nature of mind-independent things. This means even time and space are not properties of things in themselves (i.e. the mind-independent reality underlying appearances).
Since it focuses on the mind dependent nature of knowledge and not on metaphysics per se, transcendental idealism is a type of epistemological idealism. Unlike metaphysical forms of idealism, Kant's transcendental idealism does not deny the existence of mind independent things or affirm that they must be mental. He thus accepts that we can conceive of external objects as distinct from our representations of them. However, he argues that we cannot know what external objects are "in themselves". As such, Kant's system can be called idealist in some respects (e.g. regarding space and time) and also realist in that he accepts there must be some mind independent reality (even if we cannot know its ultimate nature and thus must remain agnostic about this). Kant's system also affirms the reality of a free truly existent self and of a God, which he sees as being possible because the non-temporal nature of the thing-in-itself allows for a radical freedom and genuine spontaneity.
Kant's main argument for his idealism, found throughout the Critique of Pure Reason, is based on the key premise that we always represent objects in space and time through our a priori intuitions (knowledge which is independent from any experience). Thus, according to Kant, space and time can never represent any "property at all of any things in themselves nor any relations of them to each other, i.e., no determination of them that attaches to objects themselves and that would remain even if one were to abstract from all subjective conditions of intuition" (CPuR A 26/B 42).
Kant's main point is that since our mental representations have spatio-temporal structure, we have no real grounds for positing that the real objects our mind represents in this way also have spatio-temporal structure in themselves. Kant makes this argument in different parts of the Critique, such as when he asks rhetorically: "If there did not lie in you a faculty for intuiting a priori; if this subjective condition were not at the same time the universal a priori condition under which alone the object of ... intuition is possible; if the object ([e.g.,] the triangle) were something in itself without relation to your subject: then how could you say that what necessarily lies in your subjective conditions for constructing a triangle must also necessarily pertain to the triangle in itself." (A 48/B 65)

Throughout his career, Kant labored to distinguish his philosophy from metaphysical idealism, as some of his critics charged him with being a Berkeleyian idealist. He argued that even if we cannot know how things are in themselves, we do know they must exist, and that we know this "through the representations which their influence on our sensibility provides for us." In the second edition of his Critique, he even inserted a "refutation of idealism". For Kant, "the perception of this persistent thing is possible only through a thing outside me and not through the mere representation of a thing outside me."
Neo-Kantianism
Kant's philosophy was extremely influential on European Enlightenment thinkers (and Counter-Enlightenment ones as well), and his ideas were widely discussed and debated. Transcendental idealism was also defended by later Kantian philosophers who adopted his method, such as Karl Leonhard Reinhold and Jakob Sigismund Beck.
The mid-19th century saw a revival of Kantian philosophy, which became known as Neo-Kantianism, with its rallying cry of "Back to Kant". This movement was especially influential on 19th-century German academic philosophy (and on continental philosophy as a whole). Some important figures include Hermann Cohen (1842–1918), Wilhelm Windelband (1848–1914), Ernst Cassirer, Hermann von Helmholtz, Eduard Zeller, Leonard Nelson, Heinrich Rickert, and Friedrich Albert Lange. A key concern of the Neo-Kantians was to update Kantian epistemology, particularly in order to provide an epistemic basis for the modern sciences (while avoiding ontology altogether, whether idealist or materialist). Neo-Kantianism rejected metaphysical idealism while accepting the basic Kantian premise that "our experience of reality is always structured by the distinctive features of human mentality." Hence, Cassirer defended an epistemic worldview holding that one cannot reduce reality to any independent or substantial object (physical or mental); there are only different ways of describing and organizing experience.
Neo-Kantianism influenced the work of the Vienna Circle and its ambassadors to the Anglophone world, Rudolf Carnap (1891–1970) and Hans Reichenbach. Charles Bernard Renouvier was the first philosopher in France to formulate a system based on Kant's critical idealism, which he termed Neo-criticism (néo-criticisme). It is a transformation rather than a continuation of Kantianism.
German idealism
Several important German thinkers who were deeply influenced by Kant are the German idealists: Johann Gottlieb Fichte (1762–1814), Friedrich Wilhelm Joseph Schelling (1775–1854), and Georg Wilhelm Friedrich Hegel (1770–1831). Though heavily drawing on Kant, these thinkers were not transcendental idealists as such, and they sought to move beyond the idea that things in themselves are unknowable — an idea they saw as opening the door to skepticism and nihilism.
Post-Kantian German idealists thus rejected transcendental idealism by arguing against the opposition of a mind-independent world of being and a subjective world of mental constructs (or the separation between knowledge and what is known, between subject and object, real and ideal). This new German idealism was distinguished by an "inseparability of being and thinking" and "a dynamic conception of self-consciousness" that sees reality as spontaneous conscious activity and its expressions. As such, this kind of metaphysical idealism, focused on dynamic processes and forces, was opposed to older forms of idealism, which were grounded in substance theory (and which the German idealists labeled "dogmatism").
The first thinker to elaborate this type of dynamic idealism was J. G. Fichte (Wissenschaftslehre, or Doctrine of Science, 1810–1813). For Fichte, the primordial act at the ground of being is called "self-positing". Fichte argues that self-consciousness or the I is a spontaneous, unconditioned, self-creating act, which he also called the deed-act (Tathandlung). Fichte argues that positing something unconditioned and independent at the ground of all is the only way to avoid an epistemic infinite regress. According to Fichte, this "I am" or "absolute subject" which "originally posits its own being absolutely" (Doctrine I, 2: 261), "is at the same time the actor and the product of the act; the actor, and that which the activity brings forth; act and deed are one and the same" (Doctrine I, 2: 259). Fichte also argues that this "I" has the capacity to "counter-posit" a "not-I", leading to a subject-object relationship. The I also has a third capacity Fichte calls "divisibility", which allows for the existence of plurality in the world — a plurality that must nevertheless be understood as manifestations of the "I-activity", and as being "within the I".
Fichte's philosophy was adopted by Schelling, who defended this new idealism as a full monistic ontology that tried to account for all of nature, and which he would eventually name "absolute idealism". For Schelling, reality is an "original unity" (ursprüngliche Einheit) or a "primordial totality" (uranfängliche Ganzheit) of opposites. This absolute, which he described as an "eternal act of cognition", is disclosed in subjective and objective modes, the world of ideas and nature.
G. W. F. Hegel also defended a dynamic absolute idealism that sees existence as an all-inclusive whole. However, his system differs from his predecessors' in that it is not grounded on some initial subject, mind, or "I", and it tries to move beyond any bifurcation of subject and object, of the dualism between thinking and being (which for Hegel just leads to various contradictions). As such, Hegel's system is an ontological monism fundamentally based on a unity between being and thought, subject and object, which he saw as being neither materialistic realism nor subjective idealism (which still stands in opposition to materialism and thus remains stuck in the subject-object distinction).
In his Phenomenology of Spirit (1807), Hegel provides an epistemological argument for idealism, focusing on proving the "metaphysical priority of identities over and against their opposed elements". Hegel's argument begins with his conception of knowledge, which he holds is a relation between a claim about a subject and an object that allows for a correspondence between their structural features (and is thus a type of correspondence theory). Hegel argues that if knowledge is possible, real objects must also have a similar structure as thought (without, however, being reduced to thoughts). If not, there could be no correspondence between what the object is and what a subject believes to be true about the object. For Hegel, any system in which the subject that knows and the object which is known are structurally independent would make the relations necessary for knowledge impossible. Hegel also argues that finite qualities and objects depend on other finite things to determine them. An infinite thinking being, on the other hand, would be more self-determining and hence most fully real.
Hegel argued that a careful analysis of the act of knowledge would eventually lead to an understanding of the unity of subjects and objects in a single all-encompassing whole. In this system, experiences are not independent of the thing in itself (as in Kant) but are manifestations grounded in a metaphysical absolute, which is also experiential (and which, since it resists the experiencing subject, can be known through this resistance). Thus, our own experiences can lead us to an insight into the thing in itself. Furthermore, since reality is a unity, all knowledge is ultimately self-knowledge, or as Hegel puts it, it is the subject being "in the other with itself" (im Anderen bei sich selbst sein). Since all things have spirit (Geist), a philosopher can attain what he termed "absolute knowing" (absolutes Wissen), which is the knowledge that all things are ultimately manifestations of an infinite absolute spirit.
Later, in his Science of Logic (1812–1814), Hegel further develops a metaphysics in which the real and objective activity of thinking unfolds itself in numerous ways (as objects and subjects). This ultimate activity of thought, which is not the activity of specific subjects, is an immediate fact, a given (Vorhandenes), which is self-standing and self-organizing. In manifesting the entire world, the absolute enacts a process of self-actualization through a grand structure or master logic, which is what Hegel calls "reason" (Vernunft), and which he understands as a teleological reality.
Hegelianism was deeply influential throughout the 19th century, even as some Hegelians (like Marx) rejected idealism. Later idealist Hegelians include Friedrich Adolf Trendelenburg (1802–72) and Rudolf Hermann Lotze (1817–81).
Schopenhauer's philosophy
The philosophy of Arthur Schopenhauer owes much to the thought of Kant and to that of the German idealists, whom he nevertheless strongly criticizes. Schopenhauer maintains Kant's idealist epistemology, which sees even space, time, and causality as mere mental representations (Vorstellungen) conditioned by the subjective mind. However, he replaces Kant's unknowable thing-in-itself with an absolute reality underlying all ideas that is a single irrational Will, a view that he saw as directly opposed to Hegel's rational Spirit. This philosophy is laid out in The World as Will and Representation (WWR, 1818; 2nd ed. 1844).
Schopenhauer accepts Kant's view that there can be no appearances without there being something which appears. However, unlike Kant, Schopenhauer writes that "we have immediate cognition of the thing in itself when it appears to us as our own body" (WWR §6, pp. 40–1). Schopenhauer argues that, even though we experience our own body through the categories of space, time, and causality, we also experience it in another, more direct and internal way through the experience of willing. This immediate experience reveals that it is will alone which "gives him the key to his own appearance, reveals to him the meaning and shows him the inner workings of his essence, his deeds, his movements" (WWR §18, p. 124). Thus, for Schopenhauer, it is desire, a "dark, dull driving", which is at the root of action, not reason. Furthermore, since this is the only form of insight we have into the inner essence of any reality, we must apply this insight "to [the] appearances in the inorganic [and organic] world as well." Schopenhauer compares willing with many natural forces. As such, Will is "a name signifying the being in itself of every thing in the world and the sole kernel of every appearance" (WWR §23, pp. 142–3).
Because irrational Willing is the most foundational reality, life is filled with frustration, irrationality and disappointment. This is the metaphysical foundation of Schopenhauer's pessimistic philosophy of life. The best we can hope for is to deny and try to escape (however briefly) the incessant force of the Will, through art, aesthetic experience, asceticism, and compassion.
Gentile's actual idealism
Actual idealism is a form of idealism developed by Giovanni Gentile, which argues that reality is the ongoing act of thinking (in Italian, "pensiero pensante") and thus that only thoughts exist. He further argued that our combined thoughts define and produce reality. Gentile also nationalized this idea, holding that the state is a composition of many minds coming together to construct reality. Gentile was a key supporter of fascism, regarded by many as the "philosopher of fascism". His idealist theory argued for the unity of all society under one leader, which allows it to act as one body.
Anglo-American Idealism
Idealism was widespread in Anglo-American philosophy during the nineteenth and twentieth centuries. It was the dominant metaphysics in the English-speaking world during the last decades of the nineteenth century and the beginning of the twentieth century. During this time, the defenders of British idealism made significant contributions to all fields of philosophy. However, other philosophers, like McTaggart, broke from the dominant absolute idealism and instead defended a pluralistic idealism in which the ultimate reality is a plurality of minds.
Many Anglo-American idealists were influenced by Hegelianism, but they also drew on Kant, Plato and Aristotle. Key figures of this transatlantic movement include many of the British idealists, such as T. H. Green (1836–1882), F. H. Bradley (1846–1924), Bernard Bosanquet (1848–1923), J. H. Muirhead (1855–1940), H. H. Joachim (1868–1938), A. E. Taylor (1869–1945), R. G. Collingwood (1889–1943), G. R. G. Mure (1893–1979) and Michael Oakeshott. American idealist philosophers include Josiah Royce (1855–1916) and Brand Blanshard (1892–1987).
British absolute idealism
One of the early influential British idealists was Thomas Hill Green, known for his posthumous Prolegomena to Ethics. Green argues for an idealist metaphysics in this text as a foundation for free will and ethics. In a Kantian fashion, Green first argues that knowledge consists in seeing relations in consciousness, and that any sense of something being "real" or "objective" has no meaning outside of consciousness. He then argues that experience as consciousness of related events "cannot be explained by any natural history, properly so called" and thus "the understanding which presents an order of nature to us is in principle one with an understanding which constitutes that order itself."
Green then further argues that individual human beings are aware of an order of relations which extends beyond the bounds of their individual mind. For Green, this greater order must be in a larger transpersonal intelligence, while the world is "a system of related facts" which is made possible and revealed to individual beings by the larger intelligence. Furthermore, Green also holds that participation in the transpersonal mind is constituted by the apprehension of a portion of the overall order by animal organisms. As such, Green accepts the reality of biological bodies when he writes that "in the process of our learning to know the world, an animal organism, which has its history in time, gradually becomes the vehicle of an eternally complete consciousness."
Another paradigmatic British absolute idealist is Francis Herbert Bradley, who affirms that "the Absolute is not many; there are no independent reals". This absolute reality "is one system, and ... its contents are nothing but sentient experience. It will hence be a single and all-inclusive experience, which embraces every partial diversity in concord." Bradley presents an anti-realist idealism which rejects the ultimate reality of relations, which for him are mere appearance, "a makeshift, a mere practical compromise, most necessary, but in the end most indefensible."
Bradley presented his idealism in his Appearance and Reality (1893) by arguing that the ideas we use to understand reality are contradictory. He deconstructs numerous ideas including primary and secondary qualities, substances and attributes, quality and relation, space, time and causality and the self. Most famously, Bradley argued that any ultimate distinction between qualities and relations is untenable since "qualities are nothing without relations" since "their plurality depends on relation, and, without that relation, they are not distinct. But, if not distinct, then not different, and therefore not qualities." Furthermore, for Bradley, the same thing turns out to be true of relations, and of both taken together, since for a relation to relate to a quality, it would then require a further relation. As such, qualities and relations are appearance, not ultimate truth, since "ultimate reality is such that it does not contradict itself".
Even though all appearances are "not truth", it is still possible to have true knowledge of ultimate reality, which must be a unity beyond contradictions but which still allows for diversity. Bradley thinks that this character of reality as a diverse unity is revealed to us in sentient experience, since our various experiences must be grounded in and caused by some undifferentiated and pre-abstract reality. However, he also admits "our complete inability to understand this concrete unity in detail".
American idealism
Idealism also became popular in the United States with thinkers like Charles Sanders Peirce (1839–1914), who defended an "objective idealism" in which, as he put it, "matter is effete mind, inveterate habits becoming physical laws". Peirce initially defended a type of representationalism alongside his form of Pragmatism, which was metaphysically neutral since it is "no doctrine of metaphysics". However, in later years (after c. 1905), Peirce defended an objective idealism which held that the universe evolved from a state of maximum spontaneous freedom (which he associated with mind) into its present state, where matter is merely "congealed" mind. In arguing for this view, he followed the classic idealist premise that there must be a metaphysical equality (an isomorphism) between thought and being, and as such, "the root of all being is One". A key feature of Peirce's idealism is "Tychism", which he defined as "the doctrine that absolute chance is a factor of the universe." This allows for an element of chance or indeterminism in the universe, which in turn allows for cosmological evolution.
Under the influence of Peirce, it was Josiah Royce (1855–1916) who became the leading American idealist at the turn of the century. Royce's idealism incorporated aspects of Peirce's Pragmatism and is defended in his The Spirit of Modern Philosophy (1892). One of Royce's arguments for idealism is his argument from meaning, which states that the possibility of there being meaning at all requires an identity between what is meant (ordinary objects) and what makes meaning (ordinary subjects).
In his The World and the Individual (2 vols, 1899 and 1901), Royce also links meaning with purpose, seeing the meaning of a term as its intended purpose. Royce was an absolute idealist who held that ultimate reality is a super-self, an absolute mind. Royce argues that for a mind to be able to represent itself and its representations (without falling into a vicious infinite regress), it must be complex and capacious enough, and only an absolute mind has this capacity.
The American philosopher Brand Blanshard (1892–1987) was also a proponent of idealism who accepted a "necessary isomorphism between knowledge and its object". His idealism is most obvious in The Nature of Thought (1939), where he discusses how all perception is infused with concepts. He then argues from a coherence theory of truth that the "character of reality" must also include coherence itself, and thus that knowledge must be similar to what it knows. Not only that, but knowledge must be part of a single system with the world it knows, and causal relations must also involve logical relations. These considerations lead to an idealism which sees the world as a system of relations that cannot be merely physical.
Pluralistic idealism
Pluralistic idealism takes the view that there are many individual minds, monads, or processes that together underlie the existence of the observed world and make possible the existence of the physical universe. Pluralistic idealism does not assume the existence of a single ultimate mind or absolute, as with the total monism of absolute idealism; instead, it affirms an ultimate plurality of ideas or beings.
Personalism
Personalism is the view that the individual minds of persons or selves are the basis for ultimate reality and value, and as such it emphasizes the fundamentality and inherent worth of persons. Modern personalist idealism emerged in reaction against what was seen as the dehumanizing impersonalism of absolute idealism, a reaction led by figures like Hermann Lotze (1817–1881). Personalists affirmed personal freedom against what they saw as a monism that led to totalitarianism by subordinating the individual to the collective.
Some idealistic personalists defended a theistic personalism (often influenced by Aquinas) in which reality is a society of minds ultimately dependent on a supreme person (God). Defenders of a theistic and idealistic personalism include Borden Parker Bowne (1847–1910), Andrew Seth Pringle-Pattison (1856–1931), Edgar S. Brightman and George Holmes Howison (1834–1916). These theistic personalists emphasize the dependence of all individual minds on God.
However, other personalists like British idealist J. M. E. McTaggart and Thomas Davidson merely argued for a community of individual minds or spirits, without positing a supreme personal deity who creates or grounds them. Similarly, James Ward (1843–1925) was inspired by Leibniz to defend a form of pluralistic idealism in which the universe is composed of "psychic monads" of different levels, interacting for mutual self-betterment.
American personalism was particularly associated with idealism and with Boston University, where Bowne (who had studied with Lotze) developed his personalist idealism and published his Personalism (1908). Bowne's students, like Edgar Sheffield Brightman, Albert C. Knudson (1873–1953), Francis J. McConnell (1871–1953), and Ralph T. Flewelling (1871–1960), continued to develop his personal idealism after his death. The "Boston personalism" tradition also influenced the later work of Peter A. Bertocci (1910–1989), as well as the ideas of Martin Luther King Jr., who studied at Boston University with personalist philosophers and was shaped by their worldview.
George Holmes Howison, meanwhile, developed his own brand of "California personalism". Howison argued that both impersonal monistic idealism and materialism run contrary to the experience of moral freedom, while "personal idealism" affirms it. To deny the freedom to pursue truth, beauty, and "benignant love" is to undermine every profound human venture, including science, morality, and philosophy. Howison, in his book The Limits of Evolution and Other Essays Illustrating the Metaphysical Theory of Personal Idealism, developed a democratic idealism that extended all the way to God, who, instead of being a monarch, was seen as the ultimate democrat in eternal relation to other eternal persons.
Another pluralistic idealism was Thomas Davidson's (1840–1900) "apeirotheism", which he defined as "a theory of Gods infinite in number". The theory was indebted to Aristotle's view of the eternal rational soul and the nous. Identifying Aristotle's God with rational thought, Davidson argued, contrary to Aristotle, that just as the soul cannot exist apart from the body, God cannot exist apart from the world.
Another influential British idealist, J. M. E. McTaggart (1866–1925), defended a theory in which reality is a community of individual spirits connected by the relation of love. McTaggart defends ontological idealism through a mereological argument which argues only spirits can be substances, as well as through an argument for the unreality of time (a position he also defends in The Unreality of Time).
In The Nature of Existence (1927), McTaggart's argument relies on the premise that substances are infinitely divisible and cannot have simple parts. Furthermore, each of their infinite parts determines every other part. He then analyzes various characteristics of reality such as time, matter, sensation, and cogitation and attempts to show they cannot be real elements of real substances, but must be mere appearances. For example, the existence of matter cannot be inferred based on sensations, since they cannot be divided to infinity (and thus cannot be substances). Spirits on the other hand are true infinitely divisible substances. They have "the quality of having content, all of which is the content of one or more selves", and know themselves through direct perception as substances persisting through time. For McTaggart, there is a multiplicity of spirits, which are nevertheless related to each other harmoniously through their love for each other.
McTaggart also criticizes Hegel's view of the state in his Studies in Hegelian Cosmology (1901), arguing that metaphysics can give very little guidance to social and political action, just as it gives us very little guidance in other practical matters, like engineering.
Modern critiques
In the Western world, the popularity of idealism as a metaphysical view declined severely in the 20th century, especially in English language analytic philosophy. This was partly due to the criticisms of British philosophers like G. E. Moore and Bertrand Russell and also due to the critiques of the American "new realists" like E.B. Holt, Ralph Barton Perry and Roy Wood Sellars.
Moore famously critiqued idealism and defended realism in The Refutation of Idealism (1903) and A Defence of Common Sense (1925). In the Refutation, Moore argues that arguments for idealism most often rely on the premise that to be is to be perceived (esse est percipi), but that if this is true, "how can we infer that anything whatever, let alone everything is an inseparable aspect of any experience?". Bertrand Russell's popular 1912 book The Problems of Philosophy also contained a similar critique. Their main objection is that idealists falsely presuppose that the mind's relation to any object is a necessary condition for the existence of the object. Russell holds that this fallacy stems from failing to make "the distinction between act and object in our apprehending of things" (1912 [1974: 42]). Guyer et al. write that the success of these arguments might be controversial and that "the charge that they simply conflate knowledge and object hardly seems to do justice to the elaborate arguments of the late nineteenth-century idealists." The objection also relies on a realist epistemology in which knowledge stands "in an immediate relation to an independent individual object".
Regarding positive arguments, Moore's most famous argument for the existence of external matter (found in Proof of an External World, 1939) was an epistemological argument from common sense facts, sometimes known as "Here is one hand". Idealism was also more recently critiqued in the works of Australian philosopher David Stove, and by Alan Musgrave, and John Searle.
Contemporary idealism
Today, idealism remains a minority view in Western analytic circles. In spite of this, the study of the work of the Anglo-American idealists saw a revival in the 21st century with an increase in publications at the turn of the century, and they are now considered to have made important contributions to philosophy.
Several modern figures continue to defend idealism. Recent idealist philosophers include A. A. Luce (Sense without Matter, 1954), Timothy Sprigge (The Vindication of Absolute Idealism, 1983), Leslie Armour, Vittorio Hösle (Objective Idealism, 1998), John Andrew Foster (A World for Us, 2008), John A. Leslie (Infinite Minds: A Philosophical Cosmology, 2002), and Bernardo Kastrup (The Idea of the World, 2018). In 2022, Howard Robinson authored Perception and Idealism.
Both Foster and Sprigge defend idealism through an epistemic argument for the unity of the act of perception with its object. Sprigge also made an argument from grounding, which held that our phenomenal objects presuppose some noumenal ground. As such, for Sprigge, the physical world "consists in innumerable mutually interacting centres of experience, or, what comes to the same, of pulses and flows of experience." Thus, the noumenal ground is the totality of all experiences, which are one "concrete universal" that resembles Bradley's absolute.
Helen Yetter-Chappell has defended nontheistic (quasi-)Berkeleyan idealism.
Idealistic theories based on 20th-century science
Idealist notions took a strong hold among physicists of the early 20th century who were confronted with the paradoxes of quantum physics and the theory of relativity. In the preface to the second edition of The Grammar of Science (1900), Karl Pearson wrote, "There are many signs that a sound idealism is surely replacing, as a basis for natural philosophy, the crude materialism of the older physicists." This book influenced Einstein's regard for the importance of the observer in scientific measurements. In § 5 of that book, Pearson asserted that "...science is in reality a classification and analysis of the contents of the mind..." and that "...the field of science is much more consciousness than an external world."
Arthur Eddington, a British astrophysicist of the early 20th century, wrote in his book The Nature of the Physical World that the stuff of the world is mind-stuff, adding that "The mind-stuff of the world is, of course, something more general than our individual conscious minds." Ian Barbour, in his book Issues in Science and Religion, cites Eddington's The Nature of the Physical World (1928) as a text arguing that the Heisenberg uncertainty principle provides a scientific basis for "the defense of the idea of human freedom", and his Science and the Unseen World (1929) as support for philosophical idealism, "the thesis that reality is basically mental."
The physicist Sir James Jeans wrote: "The stream of knowledge is heading towards a non-mechanical reality; the Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter... we ought rather hail it as the creator and governor of the realm of matter." Jeans, in an interview published in The Observer (London), when asked the question: "Do you believe that life on this planet is the result of some sort of accident, or do you believe that it is a part of some great scheme?" replied, "I incline to the idealistic theory that consciousness is fundamental... In general the universe seems to me to be nearer to a great thought than to a great machine."
The chemist Ernest Lester Smith, a member of the occult movement Theosophy, wrote a book Intelligence Came First (1975) in which he claimed that consciousness is a fact of nature and that the cosmos is grounded in and pervaded by mind and intelligence.
See also
Innatism
Qualia
Hard problem of consciousness
Notes
References
Primary
Berkeley, George. Treatise Concerning the Principles of Human Knowledge, 1710.
Bradley, Francis Herbert, Appearance and Reality: A Metaphysical Essay, Oxford: Clarendon Press, 1893
Fichte, Johann Gottlieb. Foundations of Natural Right (Grundlagen des Naturrechts nach Prinzipien der Wissenschaftslehre), 1797.
Foster, John Andrew. A World for Us: The Case for Phenomenalistic Idealism. Oxford University Press, Oxford, 2008.
Dignāga; Krumroy, Robert E; Sastri, N. Aiyaswami. Ālambanaparīkṣā, and Vṛtti by Diṅnāga, with the Commentary of Dharmapāla, Restored Into Sanskrit from the Tibetan and Chinese Versions and Edited with English Translations and Notes and with Copious Extracts from Vinītadeva's Commentary. Jain Publishing Company, 2007.
Hegel, Georg Wilhelm Friedrich, Phenomenology of the Spirit (Phänomenologie des Geistes), 1807.
Kant, Immanuel. Critique of Pure Reason (Kritik der reinen Vernunft), 1781/87.
Leibniz, Gottfried Wilhelm, La Monadologie (The Monadology), c. 1714.
Leslie, John A. Infinite Minds: A Philosophical Cosmology, Clarendon Press, 2003.
McTaggart, John McTaggart Ellis. The Nature of Existence, 2 volumes, Cambridge University Press. 1921–7.
Radhakrishnan, Sarvepalli. An Idealist View of Life, 1932.
Schelling, Friedrich Wilhelm Joseph. System des transcendentalen Idealismus (System of Transcendental Idealism), 1800.
Schopenhauer, Arthur. Die Welt als Wille und Vorstellung (The World as Will and Presentation), Leipzig, 1819.
Sprigge, T.L.S., The Vindication of Absolute Idealism, Edinburgh University Press, 1983.
Vasubandhu (c. 4th century), Viṃśatikā-vijñaptimātratāsiddhi (Twenty Verses on Consciousness Only) in Gold, Jonathan C. 2015. Paving the Great Way: Vasubandhu’s Unifying Buddhist Philosophy, New York: Columbia University Press.
Vasubandhu, Trisvabhāvanirdeśa (Treatise on the Three Natures), in William Edelglass & Jay Garfield (eds.), Buddhist Philosophy: Essential Readings, New York & Oxford: Oxford University Press, pp. 35–45.
Xuanzang (c. 7th century). Chéng Wéishì Lùn (The Demonstration of Consciousness-only, Ch: 成唯識論).
Other
Dunham, Jeremy; Grant, Iain Hamilton; Watson, Sean. Idealism: The History of a Philosophy, Acumen, 2011.
Goldschmidt, Tyron; Pearce, Kenneth L. (eds.), Idealism: New Essays in Metaphysics, Oxford University Press, 2017.
Guyer, Paul; Horstmann, Rolf-Peter. Idealism in Modern Philosophy, Oxford University Press, 2023.
Neujahr, Philip J., Kant's Idealism, Mercer University Press, 1995
Prabhat Rainjan Sarkar (1984), Human Society. Vols. I and II. (Ananda Marga Publications, Calcutta, India).
Surendranath Dasgupta (1969), Indian Idealism (Cambridge University Press, New York, NY, USA).
Sohail Inayatullah (2001), Understanding P. R. Sarkar: The Indian Episteme, Macrohistory and Transformative Knowledge (Leiden: Brill Publishers).
Watts, Michael. Kierkegaard, Oneworld.
External links
A. C. Grayling-Wittgenstein on Scepticism and Certainty
Idealism and its practical use in physics and psychology
'The Triumph of Idealism', lecture by Professor Keith Ward offering a positive view of Idealism, at Gresham College, 13 March 2008 (available in text, audio and video download)
A new theory of ideomaterialism being a synthesis of idealism and materialism
Metaphysical theories
DOGMA
DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of a research project in progress at Vrije Universiteit Brussel's STARLab, Semantics Technology and Applications Research Laboratory. It is an internally funded project, concerned with the more general aspects of extracting, storing, representing and browsing information.
Methodological Root
DOGMA, as a dialect of the fact-based modeling approach, has its roots in database semantics and model theory. It adheres to the fact-based information management methodology and to the conceptualization and 100% principles of ISO TR9007.
The DOGMA methodological principles include:
Data independence: the meaning of data shall be decoupled from the data itself.
Interpretation independence: unary or binary fact types (i.e. lexons) shall adhere to a formal interpretation in order to store semantics; lexons themselves do not carry semantics.
Multiple views on and uses of stored conceptualization. An ontology shall be scalable and extensible.
Language neutral. An ontology shall meet multilingual needs.
Presentation independence: an ontology in DOGMA shall meet any kind of presentation need its users have. As an FBM dialect, DOGMA supports both graphical notations and textual presentation in a controlled language. Semantic decision tables, for example, are a means to visualize processes in a DOGMA commitment. SDRule-L is used to visualize and publish ontology-based decision support models.
Concepts shall be validated by the stakeholders.
Informal textual definitions shall be provided in case the source of the ontology is missing or incomplete.
Technical introduction
DOGMA is an ontology approach and framework that is not restricted to a particular representation language. Some distinguishing characteristics set it apart from traditional ontology approaches: (i) its grounding in the linguistic representations of knowledge, and (ii) the methodological separation of the domain-versus-application conceptualization, which is called the ontology double articulation principle. The idea is to enhance the potential for re-use and design scalability. Conceptualisations are materialised in terms of lexons. A lexon is a 5-tuple declaring either (in some context G):
taxonomical relationship (genus): e.g., < G, manager, is a, subsumes, person >;
non-taxonomical relationship (differentia): e.g., < G, manager, directs, directed by, company >.
Lexons could be approximately considered as a combination of an RDF/OWL triple and its inverse, or as a conceptual graph style relation (Sowa, 1984). The next section elaborates more on the notions of context.
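As a rough illustration (a sketch, not STARLab's actual implementation), a lexon can be modeled as a plain 5-tuple that is readable in both directions, mirroring the triple-and-inverse reading above:

```python
from typing import NamedTuple

class Lexon(NamedTuple):
    """A DOGMA lexon: <context, term1, role, co-role, term2>."""
    context: str
    term1: str
    role: str
    co_role: str
    term2: str

# The two example lexons from the text:
genus = Lexon("G", "manager", "is a", "subsumes", "person")
differentia = Lexon("G", "manager", "directs", "directed by", "company")

def forward(lexon: Lexon) -> str:
    """Read the lexon in its primary direction, like an RDF triple."""
    return f"{lexon.term1} {lexon.role} {lexon.term2}"

def inverse(lexon: Lexon) -> str:
    """Read the lexon in its inverse direction."""
    return f"{lexon.term2} {lexon.co_role} {lexon.term1}"

print(forward(genus))        # manager is a person
print(inverse(differentia))  # company directed by manager
```

Because both the role and the co-role are stored, one tuple captures what would otherwise take an RDF/OWL triple plus its declared inverse.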
Language versus conceptual level
Another distinguishing characteristic of DOGMA is the explicit duality (orthogonal to double articulation) in interpretation between the language level and conceptual level. The goal of this separation is primarily to disambiguate the lexical representation of terms in a lexon (on the language level) into concept definitions (on the conceptual level), which are word senses taken from lexical resources such as WordNet. The meaning of the terms in a lexon is dependent on the context of elicitation.
For example, consider a term “capital”. If this term was elicited from a typewriter manual, it has a different meaning (read: concept definition) than when elicited from a book on marketing. The intuition that a context provides here is: a context is an abstract identifier that refers to implicit or tacit assumptions in a domain, and that maps a term to its intended meaning (i.e. concept identifier) within these assumptions.
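The intuition of a context as a mapping from lexical terms to intended concept identifiers can be sketched as follows; the context names and sense identifiers here are invented for illustration (DOGMA itself takes word senses from lexical resources such as WordNet):

```python
# Each context maps a lexical term to the concept identifier intended
# under that context's implicit assumptions.
contexts = {
    "typewriter_manual": {"capital": "capital_letter.n.01"},
    "marketing_book": {"capital": "capital.n.01"},  # financial assets
}

def disambiguate(context_id: str, term: str) -> str:
    """Map a term to its intended concept within a context."""
    return contexts[context_id][term]

print(disambiguate("typewriter_manual", "capital"))  # capital_letter.n.01
print(disambiguate("marketing_book", "capital"))     # capital.n.01
```

The same term resolves to different concept definitions depending on the context of elicitation, which is exactly the "capital" example above.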
Ontology evolution
Ontologies naturally co-evolve with their communities of use. De Leenheer (2007) therefore identified a set of primitive operators for changing ontologies. These change primitives are conditional: their applicability depends on pre- and post-conditions, which guarantees that only valid structures can be built.
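A conditional change primitive of this kind can be sketched as follows; the operator name and the data structure are illustrative assumptions, not De Leenheer's actual formalism:

```python
# taxonomy maps each concept to the set of its direct parents.
def ancestors(taxonomy: dict, node: str) -> set:
    """Collect all concepts reachable upward from node."""
    seen, stack = set(), [node]
    while stack:
        for parent in taxonomy.get(stack.pop(), set()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def add_subtype(taxonomy: dict, child: str, parent: str) -> None:
    """Conditional change primitive: add a 'child is-a parent' link."""
    # Pre-conditions: the parent must exist, and the new edge must not
    # introduce a cycle in the taxonomy.
    if parent not in taxonomy:
        raise ValueError("pre-condition failed: unknown parent")
    if child == parent or child in ancestors(taxonomy, parent):
        raise ValueError("pre-condition failed: edge would create a cycle")
    taxonomy.setdefault(child, set()).add(parent)
    # Post-condition: the child is now subsumed by the parent.
    assert parent in taxonomy[child]

taxonomy = {"person": set()}
add_subtype(taxonomy, "manager", "person")   # valid: applied
# add_subtype(taxonomy, "person", "manager") would be rejected (cycle)
```

Guarding every primitive this way means an evolving ontology can only move between valid structures.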
Context dependency types
De Leenheer and de Moor (2005) distinguished four key characteristics of context:
a context packages related knowledge: it defines part of the knowledge of a particular domain,
it disambiguates the lexical representation of concepts and relationships by distinguishing between language level and conceptual level,
it defines context dependencies between different ontological contexts and
contexts can be embedded or linked, in the sense that statements about contexts are themselves in context.
Based on this, they identified three different types of context dependencies within one ontology (intra-ontological) and between different ontologies (inter-ontological): articulation, application, and specialisation. One particular example, in the sense of conceptual graph theory, would be a specialisation dependency for which the dependency constraint is equivalent to the conditions for CG-specialisation.
Context dependencies provide a better understanding of the whereabouts of knowledge elements and their inter-dependencies, and consequently make negotiation and application less vulnerable to ambiguity, hence more practical.
See also
Ontology engineering
Business semantics management
Data governance
Metadata management
References
Further reading
Mustafa Jarrar: "Towards Methodological Principles for Ontology Engineering". PhD Thesis. Vrije Universiteit Brussel. (May 2005)
Mustafa Jarrar: "Towards the notion of gloss, and the adoption of linguistic resources in formal ontology engineering". In proceedings of the 15th International World Wide Web Conference (WWW2006). Edinburgh, Scotland. Pages 497-503. ACM Press. May 2006.
Mustafa Jarrar and Robert Meersman: "Ontology Engineering -The DOGMA Approach". Book Chapter (Chapter 3). In Advances in Web Semantics I. Volume LNCS 4891, Springer. 2008.
Banerjee, J., Kim, W. Kim, H., and Korth., H. (1987) Semantics and implementation of schema evolution in object-oriented databases. Proc. ACM SIGMOD Conf. Management of Data, 16(3), pp. 311–322
De Leenheer P, de Moor A (2005). Context-driven disambiguation in ontology elicitation. In P. Shvaiko and J. Euzenat (eds), Context and Ontologies: Theory, Practice, and Applications. Proc. of the 1st Context and Ontologies Workshop, AAAI/IAAI 2005, Pittsburgh, USA, pp 17–24
De Leenheer P, de Moor A, Meersman R (2007). Context dependency management in ontology engineering: a formal approach. Journal on Data Semantics VIII, LNCS 4380, Springer, pp 26–56
Jarrar, M., Demey, J., Meersman, R. (2003) On reusing conceptual data modeling for ontology engineering. Journal on Data Semantics 1(1):185–207
Spyns P, Meersman R, Jarrar M (2002). Data modeling versus ontology engineering. SIGMOD Record, 31(4), pp 12–17
Peter Spyns, Yan Tang and Robert Meersman, An Ontology Engineering Methodology for DOGMA, Journal of Applied Ontology, special issue on "Ontological Foundations for Conceptual Modeling", Giancarlo Guizzardi and Terry Halpin (eds.), Volume 3, Issue 1-2, p. 13-39 (2008).
Fact-based modeling (FBM) official website: http://www.factbasedmodeling.org/
Ontology (information science)
Gnosiology
Gnosiology ("study of knowledge") is "the philosophy of knowledge and cognition". In Italian, Soviet and post-Soviet philosophy, the word is often used as a synonym for epistemology. The term is also currently used in regard to Eastern Christianity.
Etymology
The term is derived from the Ancient Greek words gnosis ("knowledge", γνῶσις) and logos ("word" or "discourse", λόγος). Linguistically, one might compare it to epistemology, which is derived from the Greek words episteme ("certain knowledge") and logos.
The term "gnosiology" is not well known today, although found in Baldwin's (1906) Dictionary of Psychology and Philosophy. The Encyclopædia Britannica (1911) remarks that "The term Gnosiology has not, however, come into general use."
The term "gnosiology" (Modern Greek: γνωσιολογία) is used more commonly in Modern Greek than in English. As a philosophical concept, gnosiology broadly means the theory of knowledge, which in ancient Greek philosophy was perceived as a combination of sensory perception and intellect, then committed to memory (the mnemonic system). When considered in the context of science, gnosiology takes on a different meaning: the study of knowledge, its origin, processes, and validity. Gnosiology is thus the study of types of knowledge: memory (abstract knowledge derived from experimentation, being "episteme" or teachable knowledge), experience, induction (or empiricism), deduction (or rationalism), scientific abductive reasoning, contemplation (theoria), and metaphysical, instinctual, or intuitive knowledge. Gnosiology is focused on the study of the noesis and noetic components of human ontology.
Within gnosiology, gnosis is derived by noesis. Noesis refers to the experiences or activities of the nous. This makes the study and origin of gnosis, and of gnosiology, the study of the intuitive and/or instinctual.
Philosophy and Western esotericism
In philosophy, gnosology (also known as gnoseology or gnostology) literally means the study of gnosis, meaning knowledge or esoteric knowledge. The study of gnosis itself covers a number of subjects, which include magic, noetics, gnostic logic, and logical gnosticism, among others. Gnosology has also been used, particularly by James Hutchison Stirling, to render Johann Gottlieb Fichte's term for his own version of transcendental idealism, Wissenschaftslehre, meaning "Doctrine of Knowledge".
The so-called "intellectus ectypus" derives its knowledge of objects from intuitions of things-in-themselves without the forms of intuition, while the "intellectus archetypus" creates the objects of its knowledge through the act of thinking them. Emilii Medtner drew on Kant's gnosology along with the Kantian theory of knowledge to respond to Carl Jung's Zofingia Lectures, particularly to criticize the way intuition was conceived as a knowledge organ capable of functioning with validity and independence.
See also
Anosognosia
Aseity
Epistemology
Nikolay Lossky
George Metallinos
References
External links
Faith And Science In Orthodox Gnosiology and Methodology, Rev. Prof. George Metallinos at University of Athens, Greece
Epistemology
Metaethics
In metaphilosophy and ethics, metaethics is the study of the nature, scope, and meaning of moral judgment. It is one of the three branches of ethics generally studied by philosophers, the others being normative ethics (questions of how one ought to be and act) and applied ethics (practical questions of right behavior in given, usually contentious, situations).
While normative ethics addresses such questions as "What should I do?", evaluating specific practices and principles of action, metaethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the assumptions underlying normative theories. Another distinction often made is that normative ethics involves first-order or substantive questions; metaethics involves second-order or formal questions.
Some theorists argue that a metaphysical account of morality is necessary for the proper evaluation of actual moral theories and for making practical moral decisions; others reason from opposite premises and suggest that studying moral judgments about proper actions can guide us to a true account of the nature of morality.
Metaethical questions
According to Richard Garner and Bernard Rosen, there are three kinds of metaethical problems, or three general questions:
What is the meaning of moral terms or judgments? (moral semantics)
Asks about the meanings of such words as 'good', 'bad', 'right' and 'wrong' (see value theory)
What is the nature of moral judgments? (moral ontology)
Asks questions of whether moral judgments are absolute or relative, of one kind or many kinds, etc.
How may moral judgments be supported or defended? (moral epistemology)
Asks such questions as how we can know if something is right or wrong, if at all.
Garner and Rosen say that answers to the three basic questions "are not unrelated, and sometimes an answer to one will strongly suggest, or perhaps even entail, an answer to another." A metaethical theory, unlike a normative ethical theory, does not attempt to evaluate specific choices as being better, worse, good, bad, or evil; although it may have profound implications as to the validity and meaning of normative ethical claims. An answer to any of the three example questions above would not itself be a normative ethical statement.
Moral semantics
Moral semantics attempts to answer the question, "What is the meaning of moral terms or judgments?" Answers may have implications for answers to the other two questions as well.
Cognitivist theories
Cognitivist theories hold that evaluative moral sentences express propositions (i.e., they are 'truth-apt' or 'truth bearers', capable of being true or false), as opposed to non-cognitivism. Most forms of cognitivism hold that some such propositions are true (including moral realism and ethical subjectivism), as opposed to error theory, which asserts that all are erroneous.
Moral realism
Moral realism (in the robust sense; see moral universalism for the minimalist sense) holds that such propositions are about robust or mind-independent facts, that is, not facts about any person or group's subjective opinion, but about objective features of the world. Metaethical theories are commonly categorized as either a form of realism or as one of three forms of "anti-realism" regarding moral facts: ethical subjectivism, error theory, or non-cognitivism. Realism comes in two main varieties:
Ethical naturalism holds that there are objective moral properties and that these properties are reducible or stand in some metaphysical relation (such as supervenience) to entirely non-ethical properties. Most ethical naturalists hold that we have empirical knowledge of moral truths. Ethical naturalism was implicitly assumed by many modern ethical theorists, particularly utilitarians.
Ethical non-naturalism, as put forward by G. E. Moore, holds that there are objective and irreducible moral properties (such as the property of 'goodness'), and that we sometimes have intuitive or otherwise a priori awareness of moral properties or of moral truths. Moore's open question argument against what he considered the naturalistic fallacy was largely responsible for the birth of metaethical research in contemporary analytic philosophy.
Ethical subjectivism
Ethical subjectivism is one form of moral anti-realism. It holds that moral statements are made true or false by the attitudes and/or conventions of people, either those of each society, those of each individual, or those of some particular individual. Most forms of ethical subjectivism are relativist, but there are notable forms that are universalist:
Ideal observer theory holds that what is right is determined by the attitudes that a hypothetical ideal observer would have. An ideal observer is usually characterized as a being who is perfectly rational, imaginative, and informed, among other things. Though a subjectivist theory due to its reference to a particular (albeit hypothetical) subject, Ideal Observer Theory still purports to provide universal answers to moral questions.
Divine command theory holds that for a thing to be right is for a unique being, God, to approve of it, and that what is right for non-God beings is obedience to the divine will. This view was criticized by Plato in the Euthyphro (see the Euthyphro problem) but retains some modern defenders (Robert Adams, Philip Quinn, and others). Like ideal observer theory, divine command theory purports to be universalist despite its subjectivism.
Error theory
Error theory, another form of moral anti-realism, holds that although ethical claims do express propositions, all such propositions are false. Thus, both the statement "Murder is morally wrong" and the statement "Murder is morally permissible" are false, according to error theory. J. L. Mackie is probably the best-known proponent of this view. Since error theory denies that there are moral truths, error theory entails moral nihilism and, thus, moral skepticism; however, neither moral nihilism nor moral skepticism conversely entail error theory.
Non-cognitivist theories
Non-cognitivist theories hold that ethical sentences are neither true nor false because they do not express genuine propositions. Non-cognitivism is another form of moral anti-realism. Most forms of non-cognitivism are also forms of expressivism, however some such as Mark Timmons and Terrence Horgan distinguish the two and allow the possibility of cognitivist forms of expressivism. Non-cognitivism includes:
Emotivism, defended by A. J. Ayer and Charles Stevenson, holds that ethical sentences serve merely to express emotions. Ayer argues that ethical sentences are expressions of approval or disapproval, not assertions. So "Killing is wrong" means something like "Boo on killing!".
Quasi-realism, defended by Simon Blackburn, holds that ethical statements behave linguistically like factual claims and can be appropriately called "true" or "false", even though there are no ethical facts for them to correspond to. Projectivism and moral fictionalism are related theories.
Universal prescriptivism, defended by R. M. Hare, holds that moral statements function like universalized imperative sentences. So "Killing is wrong" means something like "Don't kill!" Hare's version of prescriptivism requires that moral prescriptions be universalizable, and hence actually have objective values, in spite of failing to be indicative statements with truth-values per se.
Centralism and non-centralism
Yet another way of categorizing metaethical theories is to distinguish between centralist and non-centralist moral theories. The debate between centralism and non-centralism revolves around the relationship between the so-called "thin" and "thick" concepts of morality: thin moral concepts are those such as good, bad, right, and wrong; thick moral concepts are those such as courageous, inequitable, just, or dishonest. While both sides agree that the thin concepts are more general and the thick more specific, centralists hold that the thin concepts are antecedent to the thick ones and that the latter are therefore dependent on the former. That is, centralists argue that one must understand words like "right" and "ought" before understanding words like "just" and "unkind." Non-centralism rejects this view, holding that thin and thick concepts are on par with one another and even that the thick concepts are a sufficient starting point for understanding the thin ones.
Non-centralism has been of particular importance to ethical naturalists in the late 20th and early 21st centuries as part of their argument that normativity is a non-excisable aspect of language and that there is no way of analyzing thick moral concepts into a purely descriptive element attached to a thin moral evaluation, thus undermining any fundamental division between facts and norms. Allan Gibbard, R. M. Hare, and Simon Blackburn have argued in favor of the fact/norm distinction, meanwhile, with Gibbard going so far as to argue that, even if conventional English has only mixed normative terms (that is, terms that are neither purely descriptive nor purely normative), we could develop a nominally English metalanguage that still allowed us to maintain the division between factual descriptions and normative evaluations.
Moral ontology
Moral ontology attempts to answer the question, "What is the nature of moral judgments?"
Amongst those who believe there to be some standard(s) of morality (as opposed to moral nihilists), there are two divisions:
universalists, who hold that the same moral facts or principles apply to everyone everywhere; and
relativists, who hold that different moral facts or principles apply to different people or societies.
Moral universalism
Moral universalism (or universal morality) is the metaethical position that some system of ethics, or a universal ethic, applies universally, that is to all intelligent beings regardless of culture, race, sex, religion, nationality, sexuality, or other distinguishing feature. The source or justification of this system may be thought to be, for instance, human nature, shared vulnerability to suffering, the demands of universal reason, what is common among existing moral codes, or the common mandates of religion (although it can be argued that the latter is not in fact moral universalism because it may distinguish between Gods and mortals). Moral universalism is the opposing position to various forms of moral relativism.
Universalist theories are generally forms of moral realism, though exceptions exist, such as the subjectivist ideal observer and divine command theories, and the non-cognitivist universal prescriptivism of R. M. Hare. Forms of moral universalism include:
Value monism is the common form of universalism, which holds that all goods are commensurable on a single value scale.
Value pluralism contends that there are two or more genuine scales of value, knowable as such, yet incommensurable, so that any prioritization of these values is either non-cognitive or subjective. A value pluralist might, for example, contend that both a life as a nun and a life as a mother realize genuine values (in a universalist sense), yet they are incompatible (nuns may not have children), and there is no purely rational way to measure which is preferable. A notable proponent of this view is Isaiah Berlin.
Moral relativism
Moral relativism maintains that all moral judgments have their origins either in societal or in individual standards, and that no single standard exists by which one can objectively assess the truth of a moral proposition. Metaethical relativists, in general, believe that the descriptive properties of terms such as "good", "bad", "right", and "wrong" do not stand subject to universal truth conditions, but only to societal convention and personal preference. Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what one ought to do based on societal or individual norms, and one cannot adjudicate these using some independent standard of evaluation. The latter standard will always be societal or personal and not universal, unlike, for example, the scientific standards for assessing temperature or for determining mathematical truths.
Moral nihilism
Moral nihilism, also known as ethical nihilism, is the metaethical view that nothing has intrinsic moral value. For example, a moral nihilist would say that killing someone, for whatever reason, is intrinsically neither morally right nor morally wrong. Moral nihilism must be distinguished from moral relativism, which does allow for moral statements to be intrinsically true or false in a non-universal sense, but does not assign any static truth-values to moral statements. Insofar as only true statements can be known, moral nihilists are moral skeptics. Most forms of moral nihilism are non-cognitivist and vice versa, though there are notable exceptions such as universal prescriptivism (which is semantically non-cognitive but substantially universal).
Moral epistemology
Moral epistemology is the study of moral knowledge. It attempts to answer such questions as, "How may moral judgments be supported or defended?" and "Is moral knowledge possible?"
If one presupposes a cognitivist interpretation of moral sentences, morality is justified by the moralist's knowledge of moral facts, and the theories to justify moral judgements are epistemological theories. Most moral epistemologies posit that moral knowledge is somehow possible (including empiricism and moral rationalism), as opposed to moral skepticism. Amongst them, there are those who hold that moral knowledge is gained inferentially on the basis of some sort of non-moral epistemic process, as opposed to ethical intuitionism.
Moral knowledge gained by inference
Empiricism
Empiricism is the doctrine that knowledge is gained primarily through observation and experience. Metaethical theories that imply an empirical epistemology include:
ethical naturalism, which holds moral facts to be reducible to non-moral facts and thus knowable in the same ways; and
most common forms of ethical subjectivism, which hold that moral facts reduce to facts about individual opinions or cultural conventions and thus are knowable by observation of those conventions.
There are exceptions within subjectivism however, such as ideal observer theory, which implies that moral facts may be known through a rational process, and individualist ethical subjectivism, which holds that moral facts are merely personal opinions and so may be known only through introspection. Empirical arguments for ethics run into the is-ought problem, which asserts that the way the world is cannot alone instruct people how they ought to act.
Moral rationalism
Moral rationalism, also called ethical rationalism, is the view according to which moral truths (or at least general moral principles) are knowable a priori, by reason alone. Plato and Immanuel Kant, prominent figures in the history of philosophy, defended moral rationalism. David Hume and Friedrich Nietzsche are two figures in the history of philosophy who have rejected moral rationalism.
Recent philosophers who defended moral rationalism include R. M. Hare, Christine Korsgaard, Alan Gewirth, and Michael Smith. A moral rationalist may adhere to any number of different semantic theories as well; moral realism is compatible with rationalism, and the subjectivist ideal observer theory and non-cognitivist universal prescriptivism both entail it.
Ethical intuitionism
Ethical intuitionism is the view according to which some moral truths can be known without inference. That is, the view is at its core a foundationalism about moral beliefs. Such an epistemological view implies that there are moral beliefs with propositional contents; so it implies cognitivism. Ethical intuitionism commonly suggests moral realism, the view that there are objective facts of morality and, to be more specific, ethical non-naturalism, the view that these evaluative facts cannot be reduced to natural fact. However, neither moral realism nor ethical non-naturalism are essential to the view; most ethical intuitionists simply happen to hold those views as well. Ethical intuitionism comes in both a "rationalist" variety, and a more "empiricist" variety known as moral sense theory.
Moral skepticism
Moral skepticism is the class of metaethical theories all members of which entail that no one has any moral knowledge. Many moral skeptics also make the stronger, modal, claim that moral knowledge is impossible. Forms of moral skepticism include, but are not limited to, error theory and most but not all forms of non-cognitivism.
See also
Anthropic principle
Axiology
Deontic logic
Ethical subjectivism
Fact–value distinction
Is–ought problem
Meta-rights
Moral realism
Normative ethics
Principia Ethica
The Right and the Good
References
External links
Metaethics – entry in the Internet Encyclopedia of Philosophy
The Language of Morals (1952) by R. M. Hare
Groundwork of the Metaphysics of Morals by Immanuel Kant
Essays by philosopher Michael Huemer on meta-ethics, especially intuitionism
Relativity theory of ethics by J. J. Mittler
Ethical theories
Metaphilosophy
Universality (philosophy)
In philosophy, universality or absolutism is the idea that universal facts exist and can be progressively discovered, as opposed to relativism, which asserts that all facts are relative to one's perspective. Absolutism and relativism have been explored at length in contemporary analytic philosophy.
See also the Kantian and Platonist notions of "universal", which most philosophers consider to be separate notions.
Universality in ethics
When used in the context of ethics, the meaning of universal refers to that which is true for "all similarly situated individuals". Rights, for example in natural rights, or in the 1789 Declaration of the Rights of Man and of the Citizen, for those heavily influenced by the philosophy of the Enlightenment and its conception of a human nature, could be considered universal. The 1948 Universal Declaration of Human Rights is inspired by such principles.
Universal moralities contrast with moral relativisms, which seek to account for differing ethical positions between people and cultural norms.
Universality about truth
In logic, or the consideration of valid arguments, a proposition is said to have universality if it can be conceived as being true in all possible contexts without creating a contradiction. A universalist conception of truth accepts one or more universals, whereas a relativist conception of truth denies the existence of some or all universals.
Universals in metaphysics
In metaphysics, a universal is a proposed type, property, or relation which can be instantiated by many different particulars. While universals are related to the concept of universality, the concept is importantly distinct; see the main page on universals for a full treatment of the topic.
See also
Natural law
Natural and legal rights
Moral universalism
Universal law
Tianxia
Ubuntu
References
Further reading
Metaphysical properties
History of philosophy
The history of philosophy is the systematic study of the development of philosophical thought. It focuses on philosophy as rational inquiry based on argumentation, but some theorists also include myth, religious traditions, and proverbial lore.
Western philosophy originated with an inquiry into the fundamental nature of the cosmos in Ancient Greece. Subsequent philosophical developments covered a wide range of topics including the nature of reality and the mind, how people should act, and how to arrive at knowledge. The medieval period was focused more on theology. The Renaissance period saw a renewed interest in Ancient Greek philosophy and the emergence of humanism. The modern period was characterized by an increased focus on how philosophical and scientific knowledge is created. Its new ideas were used during the Enlightenment period to challenge traditional authorities. Influential developments in the 19th and 20th centuries included German idealism, pragmatism, positivism, formal logic, linguistic analysis, phenomenology, existentialism, and postmodernism.
Arabic–Persian philosophy was strongly influenced by Ancient Greek philosophers. It had its peak period during the Islamic Golden Age. One of its key topics was the relation between reason and revelation as two compatible ways of arriving at the truth. Avicenna developed a comprehensive philosophical system that synthesized Islamic faith and Greek philosophy. After the Islamic Golden Age, the influence of philosophical inquiry waned, partly due to Al-Ghazali's critique of philosophy. In the 17th century, Mulla Sadra developed a metaphysical system based on mysticism. Islamic modernism emerged in the 19th and 20th centuries as an attempt to reconcile traditional Islamic doctrines with modernity.
Indian philosophy is characterized by its combined interest in the nature of reality, the ways of arriving at knowledge, and the spiritual question of how to reach enlightenment. Its roots are in the religious scriptures known as the Vedas. Subsequent Indian philosophy is often divided into orthodox schools, which are closely associated with the teachings of the Vedas, and heterodox schools, like Buddhism and Jainism. Influential schools based on them include the Hindu schools of Advaita Vedanta and Navya-Nyāya as well as the Buddhist schools of Madhyamaka and Yogācāra. In the modern period, the exchange between Indian and Western thought led various Indian philosophers to develop comprehensive systems. They aimed to unite and harmonize diverse philosophical and religious schools of thought.
Central topics in Chinese philosophy were right social conduct, government, and self-cultivation. In early Chinese philosophy, Confucianism explored moral virtues and how they lead to harmony in society while Daoism focused on the relation between humans and nature. Later developments include the introduction and transformation of Buddhist teachings and the emergence of the schools of Xuanxue and Neo-Confucianism. The modern period in Chinese philosophy was characterized by its encounter with Western philosophy, specifically with Marxism. Other influential traditions in the history of philosophy were Japanese philosophy, Latin American philosophy, and African philosophy.
Definition and related disciplines
The history of philosophy is the field of inquiry that studies the historical development of philosophical thought. It aims to provide a systematic and chronological exposition of philosophical concepts and doctrines, as well as the philosophers who conceived them and the schools of thought to which they belong. It is not merely a collection of theories but attempts to show how these theories are interconnected. For example, some schools of thought build on earlier theories, while others reject them and offer alternative explanations. Purely mystical and religious traditions are often excluded from the history of philosophy if their claims are not based on rational inquiry and argumentation. However, some theorists treat the topic broadly, including the philosophical aspects of traditional worldviews, religious myths, and proverbial lore.
The history of philosophy has both a historical and a philosophical component. The historical component is concerned with how philosophical thought has unfolded throughout the ages. It explores which philosophers held particular views and how they were influenced by their social and cultural contexts. The philosophical component, on the other hand, evaluates the studied theories for their truth and validity. It reflects on the arguments presented for these positions and assesses their hidden assumptions, making the philosophical heritage accessible to a contemporary audience while evaluating its continued relevance. Some historians of philosophy focus primarily on the historical component, viewing the history of philosophy as part of the broader discipline of intellectual history. Others emphasize the philosophical component, arguing that the history of philosophy transcends intellectual history because its interest is not exclusively historical. It is controversial to what extent the history of philosophy can be understood as a discipline distinct from philosophy itself. Some theorists contend that the history of philosophy is an integral part of philosophy. For example, Neo-Kantians like Wilhelm Windelband argue that philosophy is essentially historical and that it is not possible to understand a philosophical position without understanding how it emerged.
Closely related to the history of philosophy is the historiography of philosophy, which examines the methods used by historians of philosophy. It is also interested in how dominant opinions in this field have changed over time. Different methods and approaches are used to study the history of philosophy. Some historians focus primarily on philosophical theories, emphasizing their claims and ongoing relevance rather than their historical evolution. Another approach sees the history of philosophy as an evolutionary process, assuming clear progress from one period to the next, with earlier theories being refined or replaced by more advanced later theories. Other historians seek to understand past philosophical theories as products of their time, focusing on the positions accepted by past philosophers and the reasons behind them, often without concern for their relevance today. These historians study how the historical context and the philosopher's biography influenced their philosophical outlook.
Another important methodological feature is the use of periodization, which involves dividing the history of philosophy into distinct periods, each corresponding to one or several philosophical tendencies prevalent during that historical timeframe. Traditionally, the history of philosophy has focused primarily on Western philosophy. However, in a broader sense, it includes many non-Western traditions such as Arabic–Persian philosophy, Indian philosophy, and Chinese philosophy.
Western
Western philosophy refers to the philosophical traditions and ideas associated with the geographical region and cultural heritage of the Western world. It originated in Ancient Greece and subsequently expanded to the Roman Empire, later spreading to Western Europe and eventually reaching other regions, including North America, Latin America, and Australia. Spanning over 2,500 years, Western philosophy began in the 6th century BCE and continues to evolve today.
Ancient
Western philosophy originated in Ancient Greece in the 6th century BCE. This period is conventionally considered to have ended in 529 CE when the Platonic Academy and other philosophical schools in Athens were closed by order of the Byzantine Emperor Justinian I, who sought to suppress non-Christian teachings.
Presocratic
The first period of Ancient Greek philosophy is known as Presocratic philosophy, which lasted until about the mid-4th century BCE. Studying Presocratic philosophy can be challenging because many of the original texts have only survived in fragments and often have to be reconstructed based on quotations found in later works.
A key innovation of Presocratic philosophy was its attempt to provide rational explanations for the cosmos as a whole. This approach contrasted with the prevailing Greek mythology, which offered theological interpretations, such as the myth of Uranus and Gaia, emphasizing the roles of gods and goddesses who continued to be worshipped even as Greek philosophy evolved. The Presocratic philosophers were among the first to challenge traditional Greek theology, seeking instead to provide naturalistic, empirically grounded theories to explain how the world came into being and why it functions as it does.
Thales (c. 624–545 BCE), often regarded as the first philosopher, sought to describe the cosmos in terms of a first principle, or arche. He identified water as this primal source of all things. Anaximander (c. 610–545 BCE) proposed a more abstract explanation, suggesting that the eternal substance responsible for the world's creation lies beyond human perception. He referred to this arche as the apeiron, meaning "the boundless".
Heraclitus (c. 540–480 BCE) viewed the world as being in a state of constant flux, stating that one cannot step into the same river twice. He also emphasized the role of logos, which he saw as an underlying order governing both the inner self and the external world. In contrast, Parmenides (c. 515–450 BCE) argued that true reality is unchanging, eternal, and indivisible. His student Zeno of Elea (c. 490–430 BCE) formulated several paradoxes to support this idea, asserting that motion and change are illusions, as illustrated by his paradox of Achilles and the Tortoise.
Another significant theory from this period was the atomism of Democritus (c. 460–370 BCE), who posited that reality is composed of indivisible particles called atoms. Other notable Presocratic philosophers include Anaximenes, Pythagoras, Xenophanes, Empedocles, Anaxagoras, Leucippus, and the sophists, such as Protagoras and Gorgias.
Socrates, Plato, and Aristotle
The philosophy of Socrates (469–399 BCE) and Plato (427–347 BCE) built on Presocratic philosophy but also introduced significant changes in focus and methodology. Socrates did not write anything himself, and his influence is largely due to the impact he made on his contemporaries, particularly through his approach to philosophical inquiry. This method, often conducted in the form of Socratic dialogues, begins with simple questions to explore a topic and critically reflect on underlying ideas and assumptions. Unlike the Presocratics, Socrates was less concerned with metaphysical theories and more focused on moral philosophy. Many of his dialogues explore the question of what it means to lead a good life by examining virtues such as justice, courage, and wisdom. Despite being regarded as a great teacher of ethics, Socrates did not advocate specific moral doctrines. Instead, he aimed to prompt his audience to think for themselves and recognize their own ignorance.
Most of what is known about Socrates comes from the writings of his student Plato. Plato's works are presented in the form of dialogues between various philosophers, making it difficult to determine which ideas are Socrates' and which are Plato's own theories. Plato's theory of forms asserts that the true nature of reality is found in abstract and eternal forms or ideas, such as the forms of beauty, justice, and goodness. The physical and changeable world of the senses, according to Plato, is merely an imperfect copy of these forms. The theory of forms has had a lasting influence on subsequent views of metaphysics and epistemology. Plato is also considered a pioneer in the field of psychology. He divided the soul into three faculties: reason, spirit, and desire, each responsible for different mental phenomena and interacting in various ways. Plato also made contributions to ethics and political philosophy. Additionally, Plato founded the Academy, which is often considered the first institution of higher education.
Aristotle (384–322 BCE), who began as a student at Plato's Academy, became a systematic philosopher whose teachings were transcribed into treatises on various subjects, including the philosophy of nature, metaphysics, logic, and ethics. Aristotle introduced many technical terms in these fields that are still used today. While he accepted Plato's distinction between form and matter, he rejected the idea that forms could exist independently of matter, arguing instead that forms and matter are interdependent. This debate became central to the problem of universals, which was discussed by many subsequent philosophers. In metaphysics, Aristotle presented a set of basic categories of being as a framework for classifying and analyzing different aspects of existence. He also introduced the concept of the four causes to explain why change and movement occur in nature. According to his teleological cause, for example, everything in nature has a purpose or goal toward which it moves. Aristotle's ethical theory emphasizes that leading a good life involves cultivating virtues to achieve eudaimonia, or human flourishing. In logic, Aristotle codified rules for correct inferences, laying the foundation for formal logic that would influence philosophy for centuries.
Hellenistic and Roman
After Aristotle, ancient philosophy saw the rise of broader philosophical movements, such as Epicureanism, Stoicism, and Skepticism, which are collectively known as the Hellenistic schools of thought. These movements primarily focused on fields like ethics, physics, logic, and epistemology. This period began with the death of Alexander the Great in 323 BCE and had its main influence until the end of the Roman Republic in 31 BCE.
The Epicureans built upon and refined Democritus's idea that nature is composed of indivisible atoms. In ethics, they viewed pleasure as the highest good but rejected the notion that luxury and indulgence in sensory pleasures lead to long-term happiness. Instead, they advocated a nuanced form of hedonism, where a simple life characterized by tranquillity was the best way to achieve happiness.
The Stoics rejected this hedonistic outlook, arguing that desires and aversions are obstacles to living in accordance with reason and virtue. To overcome these desires, they advocated self-mastery and an attitude of indifference.
The skeptics focused on how judgments and opinions impact well-being. They argued that dogmatic beliefs lead to emotional disturbances and recommended that people suspend judgments on matters where certainty is unattainable. Some skeptics went further, claiming that this suspension of judgment should apply to all beliefs, suggesting that any form of knowledge is impossible.
The school of Neoplatonism, which emerged in the later part of the ancient period, began in the 3rd century CE and reached its peak by the 6th century CE. Neoplatonism inherited many ideas from Plato and Aristotle, transforming them in creative ways. Its central doctrine posits a transcendent and ineffable entity responsible for all existence, referred to as "the One" or "the Good." From the One emerges the Intellect, which contemplates the One, and this, in turn, gives rise to the Soul, which generates the material world. Influential Neoplatonists include Plotinus (204–270 CE) and his student Porphyry (234–305 CE).
Medieval
The medieval period in Western philosophy began between 400 and 500 CE and ended between 1400 and 1500 CE. A key distinction between this period and earlier philosophical traditions was its emphasis on religious thought. The Christian Emperor Justinian ordered the closure of philosophical schools, such as Plato's Academy. As a result, intellectual activity became concentrated within the Church, and diverging from doctrinal orthodoxy was fraught with risks. Due to these developments, some scholars consider this era a "dark age" compared to what preceded and followed it. Central topics during this period included the problem of universals, the nature of God, proofs for the existence of God, and the relationship between reason and faith. The early medieval period was heavily influenced by Plato's philosophy, while Aristotelian ideas became dominant later.
Augustine of Hippo (354–430 CE) was deeply influenced by Platonism and utilized this perspective to interpret and explain key concepts and problems within Christian doctrine. He embraced the Neoplatonist idea that God, or the ultimate source, is both good and incomprehensible. This led him to address the problem of evil—specifically, how evil could exist in a world created by a benevolent, all-knowing, and all-powerful God. Augustine's explanation centered on the concept of free will, asserting that God granted humans the ability to choose between good and evil, along with the responsibility for those choices. Augustine also made significant contributions in other areas, including arguments for the existence of God, his theory of time, and his just war theory.
Boethius (477–524 CE) had a profound interest in Greek philosophy. He translated many of Aristotle's works and sought to integrate and reconcile them with Christian doctrine. Boethius addressed the problem of universals and developed a theory to harmonize Plato's and Aristotle's views. He proposed that universals exist in the mind without matter in one sense, but also exist within material objects in another sense. This idea influenced subsequent medieval debates on the problem of universals, inspiring nominalists to argue that universals exist only in the mind. Boethius also explored the problem of the trinity, addressing the Christian doctrine of how God can exist as three persons—Father, Son, and Holy Spirit—simultaneously.
Scholasticism
The later part of the medieval period was dominated by scholasticism, a philosophical method heavily influenced by Aristotelian philosophy and characterized by systematic and methodological inquiry. The intensified interest in Aristotle during this period was largely due to the Arabic–Persian tradition, which preserved, translated, and interpreted many of Aristotle's works that had been lost in the Western world.
Anselm of Canterbury (1033–1109 CE) is often regarded as the father of scholasticism. He viewed reason and faith as complementary, each depending on the other for a fuller understanding. Anselm is best known for his ontological argument for the existence of God, where he defined God as the greatest conceivable being and argued that such a being must exist outside of the mind. He posited that if God existed only in the mind, He would not be the greatest conceivable being, since a being that exists in reality is greater than one that exists only in thought. Peter Abelard (1079–1142) similarly emphasized the harmony between reason and faith, asserting that both emerge from the same divine source and therefore cannot be in contradiction. Abelard was also known for his nominalism, which claimed that universals exist only as mental constructs.
Thomas Aquinas (1224–1274 CE) is often considered the most influential medieval philosopher. Rooted in Aristotelianism, Aquinas developed a comprehensive system of scholastic philosophy that encompassed areas such as metaphysics, theology, ethics, and political theory. Many of his insights were compiled in his seminal work, the Summa Theologiae. A key goal in Aquinas's writings was to demonstrate how faith and reason work in harmony. He argued that reason supports and reinforces Christian tenets, but faith in God's revelation is still necessary since reason alone cannot comprehend all truths. This is particularly relevant to claims such as the eternality of the world and the intricate relationship between God and His creation. In metaphysics, Aquinas posited that every entity is characterized by two aspects: essence and existence. Understanding a thing involves grasping its essence, which can be done without perceiving whether it exists. However, in the case of God, Aquinas argued that His existence is identical to His essence, making God unique. In ethics, Aquinas held that moral principles are rooted in human nature. He believed that ethics is about pursuing what is good and that humans, as rational beings, have a natural inclination to pursue the Good. In natural theology, Aquinas's famous Five Ways are five arguments for the existence of God.
Duns Scotus (1266–1308 CE) engaged critically with many of Aquinas's ideas. In metaphysics, Scotus rejected Aquinas's claim of a real distinction between essence and existence. Instead, he argued that this distinction is only formal, meaning essence and existence are two aspects of a thing that cannot be separated. Scotus further posited that each individual entity has a unique essence, known as haecceity, which distinguishes it from other entities of the same kind.
William of Ockham (1285–1347 CE) is one of the last scholastic philosophers. He is known for formulating the methodological principle known as Ockham's Razor, which is used to choose between competing explanations of the same phenomenon. Ockham's Razor states that the simplest explanation, the one that assumes the existence of fewer entities, should be preferred. Ockham employed this principle to argue for nominalism and against realism about universals, contending that nominalism is the simpler explanation since it does not require the assumption of the independent existence of universals.
Renaissance
The Renaissance period began in the mid-14th century and lasted until the early 17th century. This cultural and intellectual movement originated in Italy and gradually spread to other regions of Western Europe. Key aspects of the Renaissance included a renewed interest in Ancient Greek philosophy and the emergence of humanism, as well as a shift toward scientific inquiry. This represented a significant departure from the medieval period, which had been primarily focused on religious and scholastic traditions. Another notable change was that intellectual activity was no longer as closely tied to the Church as before; most scholars of this period were not clerics.
An important aspect of the resurgence of Ancient Greek philosophy during the Renaissance was a revived enthusiasm for the teachings of Plato. This Renaissance Platonism was still conducted within the framework of Christian theology and often aimed to demonstrate how Plato's philosophy was compatible with and could be applied to Christian doctrines. For example, Marsilio Ficino (1433–1499) argued that souls form a connection between the realm of Platonic forms and the sensory realm. According to Plato, love can be understood as a ladder leading to higher forms of understanding. Ficino interpreted this concept in an intellectual sense, viewing it as a way to relate to God through the love of knowledge.
The revival of Ancient Greek philosophy during the Renaissance was not limited to Platonism; it also encompassed other schools of thought, such as Skepticism, Epicureanism, and Stoicism. This revival was closely associated with the rise of Renaissance humanism, a human-centered worldview that highly valued the academic disciplines studying human society and culture. This shift in perspective also involved seeing humans as genuine individuals. Although Renaissance humanism was not primarily a philosophical movement, it brought about many social and cultural changes that affected philosophical activity. These changes were also accompanied by an increased interest in political philosophy. Niccolò Machiavelli (1469–1527) argued that a key responsibility of rulers is to ensure stability and security. He believed they should govern effectively to benefit the state as a whole, even if harsh circumstances require the use of force and ruthless actions. In contrast, Thomas More (1478–1535) envisioned an ideal society characterized by communal ownership, egalitarianism, and devotion to public service.
The Renaissance also witnessed various developments in the philosophy of nature and science, which helped lay the groundwork for the scientific revolution. One such development was the emphasis on empirical observation in scientific inquiry. Another was the idea that mathematical explanations should be employed to understand these observations. Francis Bacon (1561–1626 CE) is often seen as a transitional figure between the Renaissance and modernity. He sought to revolutionize logic and scientific inquiry with his work Novum Organum, which was intended to replace Aristotle's influential treatises on logic. Bacon's work discussed, for example, the role of inductive reasoning in empirical inquiry, which involves deriving general laws from numerous individual observations. Another key transitional figure was Galileo Galilei (1564–1642 CE), who played a crucial role in the Copernican Revolution by asserting that the Sun, rather than the Earth, is at the center of the Solar System.
Early modern
Early modern philosophy encompasses the 17th and 18th centuries. The philosophers of this period are traditionally divided into empiricists and rationalists. However, contemporary historians argue that this division is not a strict dichotomy but rather a matter of varying degrees. These schools share a common goal of establishing a clear, rigorous, and systematic method of inquiry. This philosophical emphasis on method mirrored the advances occurring simultaneously during the scientific revolution.
Empiricism and rationalism differ concerning the type of method they advocate. Empiricism focuses on sensory experience as the foundation of knowledge. In contrast, rationalism emphasizes reason—particularly the principles of non-contradiction and sufficient reason—and the belief in innate knowledge. While the emphasis on method was already foreshadowed in Renaissance thought, it only came to full prominence during the early modern period.
The second half of this period saw the emergence of the Enlightenment movement, which used these philosophical advances to challenge traditional authorities while promoting progress, individual freedom, and human rights.
Empiricism
Empiricism in the early modern period was mainly associated with British philosophy. John Locke (1632–1704) is often considered the father of empiricism. In his book An Essay Concerning Human Understanding, he rejected the notion of innate knowledge and argued that all knowledge is derived from experience. He asserted that the mind is a blank slate at birth, relying entirely on sensory experience to acquire ideas. Locke distinguished between primary qualities, which he believed are inherent in external objects and exist independently of any observer, and secondary qualities, which are the powers of objects to produce sensations in observers.
George Berkeley (1685–1753) was strongly influenced by Locke but proposed a more radical form of empiricism. He developed a form of idealism, giving primacy to perceptions and ideas over material objects. Berkeley argued that objects only exist insofar as they are perceived by the mind, leading to the conclusion that there is no reality independent of perception.
David Hume (1711–1776) also upheld the empiricist principle that knowledge is derived from sensory experience. However, he took this idea further by arguing that it is impossible to know with certainty that one event causes another. Hume's reasoning was that the connection between cause and effect is not directly perceivable. Instead, the mind observes consistent patterns between events and develops a habit of expecting certain outcomes based on prior experiences.
The empiricism promoted by Hume and other philosophers had a significant impact on the development of the scientific method, particularly in its emphasis on observation, experimentation, and rigorous testing.
Rationalism
Another dominant school of thought in this period was rationalism. René Descartes (1596–1650) played a pivotal role in its development. He sought to establish absolutely certain knowledge and employed methodological doubt, questioning all his beliefs to find an indubitable foundation for knowledge. He discovered this foundation in the statement "I think, therefore I am." Descartes used various rationalist principles, particularly the focus on deductive reasoning, to build a comprehensive philosophical system upon this foundation. His philosophy is rooted in substance dualism, positing that the mind and body are distinct, independent entities that coexist.
The rationalist philosophy of Baruch Spinoza (1632–1677) placed even greater emphasis on deductive reasoning. He developed and employed the so-called geometrical method to construct his philosophical system. This method begins with a small set of self-evident axioms and proceeds to derive a comprehensive philosophical system through deductive reasoning. Unlike Descartes, Spinoza arrived at a metaphysical monism, asserting that there is only one substance in the universe. Another influential rationalist was Gottfried Wilhelm Leibniz (1646–1716). His principle of sufficient reason posits that everything has a reason or explanation. Leibniz used this principle to develop his metaphysical system known as monadology.
Enlightenment and other late modern philosophy
The latter half of the modern period saw the emergence of the cultural and intellectual movement known as the Enlightenment. This movement drew on both empiricism and rationalism to challenge traditional authorities and promote the pursuit of knowledge. It advocated for individual freedom and held an optimistic view of progress and the potential for societal improvement. Immanuel Kant (1724–1804) was one of the central thinkers of the Enlightenment. He emphasized the role of reason in understanding the world and used it to critique dogmatism and blind obedience to authority. Kant sought to synthesize both empiricism and rationalism within a comprehensive philosophical system. His transcendental idealism explored how the mind, through its pre-established categories, shapes human experience of reality. In ethics, he developed a deontological moral system based on the categorical imperative, which defines universal moral duties. Other important Enlightenment philosophers included Voltaire (1694–1778), Montesquieu (1689–1755), and Jean-Jacques Rousseau (1712–1778).
Political philosophy during this period was shaped by Thomas Hobbes's (1588–1679) work, particularly his book Leviathan. Hobbes had a pessimistic view of the natural state of humans, arguing that it involves a war of all against all. According to Hobbes, the purpose of civil society is to avoid this state of chaos. This is achieved through a social contract in which individuals cede some of their rights to a central and immensely powerful authority in exchange for protection from external threats. Jean-Jacques Rousseau also theorized political life using the concept of a social contract, but his political outlook differed significantly due to his more positive assessment of human nature. Rousseau's views led him to advocate for democracy.
19th century
The 19th century was a rich and diverse period in philosophy, during which the term "philosophy" acquired the distinctive meaning it holds today: a discipline distinct from the empirical sciences and mathematics. A rough division between two types of philosophical approaches in this period can be drawn. Some philosophers, like those associated with German and British idealism, sought to provide comprehensive and all-encompassing systems. In contrast, other thinkers, such as Bentham, Mill, and the American pragmatists, focused on more specific questions related to particular fields, such as ethics and epistemology.
Among the most influential philosophical schools of this period was German idealism, a tradition inaugurated by Immanuel Kant, who argued that the conceptual activity of the subject is always partially constitutive of experience and knowledge. Subsequent German idealists critiqued what they saw as theoretical problems with Kant's dualisms and the contradictory status of the thing-in-itself. They sought a single unifying principle as the foundation of all reality. Johann Gottlieb Fichte (1762–1814) identified this principle as the activity of the subject or transcendental ego, which posits both itself and its opposite. Friedrich Wilhelm Joseph Schelling (1775–1854) rejected this focus on the ego, instead proposing a more abstract principle, referred to as the absolute or the world-soul, as the foundation of both consciousness and nature.
The philosophy of Georg Wilhelm Friedrich Hegel (1770–1831) is often described as the culmination of this tradition. Hegel reconstructed a philosophical history in which the measure of progress is the actualization of freedom. He applied this not only to political life but also to philosophy, which he claimed aims for self-knowledge characterized by the identity of subject and object. His term for this is "the absolute" because such knowledge—achieved through art, religion, and philosophy—is entirely self-conditioned.
Further influential currents of thought in this period included historicism and neo-Kantianism. Historicists such as Johann Gottfried Herder emphasized the validity and unique nature of historical knowledge of individual events, contrasting this with the universal knowledge of eternal truths. Neo-Kantianism was a diverse philosophical movement that revived and reinterpreted Kant's ideas.
British idealism developed later in the 19th century and was strongly influenced by Hegel. For example, Francis Herbert Bradley (1846–1924) argued that reality is an all-inclusive totality of being, identified with absolute spirit. He is also famous for claiming that external relations do not exist.
Karl Marx (1818–1883) was another philosopher inspired by Hegel's ideas. He applied them to the historical development of society based on class struggle. However, he rejected the idealistic outlook in favor of dialectical materialism, which posits that economics rather than spirit is the basic force behind historical development.
Arthur Schopenhauer (1788–1860) proposed that the underlying principle of all reality is the will, which he saw as an irrational and blind force. Influenced by Indian philosophy, he developed a pessimistic outlook, concluding that the expressions of the will ultimately lead to suffering. He had a profound influence on Friedrich Nietzsche, who saw the will to power as a fundamental driving force in nature. Nietzsche used this concept to critique many religious and philosophical ideas, arguing that they were disguised attempts to wield power rather than expressions of pure spiritual achievement.
In the field of ethics, Jeremy Bentham (1748–1832) developed the philosophy of utilitarianism. He argued that whether an action is right depends on its utility, i.e., on the pleasure and pain it produces. The goal of actions, according to Bentham, is to maximize happiness or to produce "the greatest good for the greatest number." His student John Stuart Mill (1806–1873) became one of the foremost proponents of utilitarianism, further refining the theory by asserting that what matters is not just the quantity of pleasure and pain, but also their quality.
Toward the end of the 19th century, the philosophy of pragmatism emerged in the United States. Pragmatists evaluate philosophical ideas based on their usefulness and effectiveness in guiding action. Charles Sanders Peirce (1839–1914) is usually considered the founder of pragmatism. He held that the meaning of ideas and theories lies in their practical and observable consequences. For example, to say that an object is hard means that, on a practical level, it is difficult to break, pierce, or scratch. Peirce argued that a true belief is one that proves stable and reliable in practice, while allowing that any belief may need revision in the future. His pragmatist philosophy gained wider popularity through his lifelong friend William James (1842–1910), who applied Peirce's ideas to psychology. James argued that the meaning of an idea consists of its experiential consequences and rejected the notion that experiences are isolated events, instead proposing the concept of a stream of consciousness.
20th century
Philosophy in the 20th century is usually divided into two main traditions: analytic philosophy and continental philosophy. Analytic philosophy was dominant in English-speaking countries and emphasized clarity and precise language. It often employed tools like formal logic and linguistic analysis to examine traditional philosophical problems in fields such as metaphysics, epistemology, science, and ethics. Continental philosophy was more prominent in European countries, particularly in Germany and France. It is an umbrella term without a precisely established meaning and covers philosophical movements like phenomenology, hermeneutics, existentialism, deconstruction, critical theory, and psychoanalytic theory.
Interest in academic philosophy increased rapidly in the 20th century, as evidenced by the growing number of philosophical publications and the increasing number of philosophers working at academic institutions. Another change during this period was the increased presence of female philosophers. However, despite this progress, women remained underrepresented in the field.
Some schools of thought in 20th-century philosophy do not clearly fall into either analytic or continental traditions. Pragmatism evolved from its 19th-century roots through scholars like Richard Rorty (1931–2007) and Hilary Putnam (1926–2016). It was applied to new fields of inquiry, such as epistemology, politics, education, and the social sciences.
The 20th century also saw the rise of feminism in philosophy, which studies and critiques traditional assumptions and power structures that disadvantage women. Prominent feminist philosophers include Simone de Beauvoir (1908–1986), Martha Nussbaum (born 1947), and Judith Butler (born 1956).
Analytic
George Edward Moore (1873–1958) was one of the founding figures of analytic philosophy. He emphasized the importance of common sense and used it to argue against radical forms of philosophical skepticism. Moore was particularly influential in the field of ethics, where he claimed that our actions should promote the good. He argued that the concept of "good" cannot be defined in terms of other concepts and that whether something is good can be known through intuition.
Gottlob Frege (1848–1925) was another pioneer of the analytic tradition. His development of modern symbolic logic had a significant impact on subsequent philosophers, even outside the field of logic. Frege employed these advances in his attempt to prove that arithmetic can be reduced to logic, a thesis known as logicism. The logicist project of Bertrand Russell (1872–1970) was even more ambitious since it included not only arithmetic but also geometry and analysis. Although their attempts were very fruitful, they did not fully succeed, as additional axioms beyond those of logic are required. In the philosophy of language, Russell's theory of definite descriptions was influential. It explains how to make sense of paradoxical expressions like "the present King of France," which do not refer to any existing entity. Russell also developed the theory of logical atomism, which was further refined by his student Ludwig Wittgenstein (1889–1951). According to Wittgenstein's early philosophy, as presented in the Tractatus Logico-Philosophicus, the world is made up of a multitude of atomic facts. The world and language have the same logical structure, making it possible to represent these facts using propositions. Despite the influence of this theory, Wittgenstein came to reject it in his later philosophy. He argued instead that language consists of a variety of games, each with its own rules and conventions. According to this view, meaning is determined by usage and not by referring to facts.
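Russell's analysis of definite descriptions can be sketched in modern logical notation. On this account, "The present King of France is bald" does not name a nonexistent king but asserts three conjoined claims: that something is a present King of France (existence), that nothing else is (uniqueness), and that it is bald (predication). Writing K for "is a present King of France" and B for "is bald":

```latex
% "The present King of France is bald", analyzed à la Russell
\exists x \, \big( Kx \;\land\; \forall y \, ( Ky \rightarrow y = x ) \;\land\; Bx \big)
```

Because nothing satisfies Kx, the whole conjunction comes out false rather than meaningless, which is how the theory dissolves the puzzle of sentences containing non-referring expressions.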
Logical positivism developed in parallel to these ideas and was strongly influenced by empiricism. It is primarily associated with the Vienna Circle and focused on logical analysis and empirical verification. One of its prominent members was Rudolf Carnap (1891–1970), who defended the verification principle. This principle claims that a statement is meaningless if it cannot be verified through sensory experience or the laws of logic. Carnap used this principle to reject the discipline of metaphysics in general. However, this principle was later criticized by Carnap's student Willard Van Orman Quine (1908–2000) as one of the dogmas of empiricism. A core idea of Quine's philosophy was naturalism, which he understood as the claim that the natural sciences provide the most reliable framework for understanding the world. He used this outlook to argue that mathematical entities have real existence because they are indispensable to science.
Wittgenstein's later philosophy formed part of ordinary language philosophy, which analyzed everyday language to understand philosophical concepts and problems. The theory of speech acts by John Langshaw Austin (1911–1960) was an influential early contribution to this field. Other prominent figures in this tradition include Gilbert Ryle (1900–1976) and Sir Peter Frederick Strawson (1919–2006). This shift in emphasis toward the role of language is known as the linguistic turn.
Richard Mervyn Hare (1919–2002) and John Leslie Mackie (1917–1981) were influential ethical philosophers in the analytic tradition, while John Rawls (1921–2002) and Robert Nozick (1938–2002) made significant contributions to political philosophy.
Continental
Phenomenology was an important early movement in the tradition of continental philosophy. It aimed to provide an unprejudiced description of human experience from a subjective perspective, using this description as a method to analyze and evaluate philosophical problems across various fields such as epistemology, ontology, philosophy of mind, and ethics. The founder of phenomenology was Edmund Husserl (1859–1938), who emphasized the importance of suspending all antecedent beliefs to achieve a pure and unbiased description of experience as it unfolds. His student, Martin Heidegger (1889–1976), adapted this method into an approach he termed fundamental ontology. Heidegger explored how human pre-understanding of reality shapes the experience of and engagement with the world. He argued that pure description alone is insufficient for phenomenology and should be accompanied by interpretation to uncover and avoid possible misunderstandings. This line of thought was further developed by his student Hans-Georg Gadamer (1900–2002), who held that human pre-understanding is dynamic and evolves through the process of interpretation. Gadamer explained this process as a fusion of horizons, which involves an interplay between the interpreter's current horizon and the horizon of the object being interpreted.
Another influential aspect of Heidegger's philosophy is his focus on how humans care about the world. He explored how this concern is related to phenomena such as anxiety and authenticity. These ideas influenced Jean-Paul Sartre (1905–1980), who developed the philosophy of existentialism. Existentialists hold that humans are fundamentally free and responsible for their choices. They also assert that life lacks a predetermined purpose, and the act of choosing one's path without such a guiding purpose can lead to anxiety. The idea that the universe is inherently meaningless was especially emphasized by absurdist thinkers like Albert Camus (1913–1960).
Critical Theory emerged in the first half of the 20th century within the Frankfurt School of philosophy. It is a form of social philosophy that aims to provide a reflective assessment and critique of society and culture. Unlike traditional theory, its goal is not only to understand and explain but also to bring about practical change, particularly to emancipate people and liberate them from domination and oppression. Key themes of Critical Theory include power, inequality, social justice, and the role of ideology. Notable figures include Theodor Adorno (1903–1969), Max Horkheimer (1895–1973), and Herbert Marcuse (1898–1979).
The second half of 20th-century continental philosophy was marked by a critical attitude toward many traditional philosophical concepts and assumptions, such as truth, objectivity, universal explanations, reason, and progress. This outlook is sometimes labeled postmodernism. Michel Foucault (1926–1984) examined the relationship between knowledge and power, arguing that knowledge is always shaped by power. Jacques Derrida (1930–2004) developed the philosophy of deconstruction, which aims to expose hidden contradictions within philosophical texts by subverting the oppositions they rely on, such as the opposition between presence and absence or between subject and object. Gilles Deleuze (1925–1995) drew on psychoanalytic theory to critique and reimagine traditional concepts like desire, subjectivity, identity, and knowledge.
Arabic–Persian
Arabic–Persian philosophy refers to the philosophical tradition associated with the intellectual and cultural heritage of Arabic- and Persian-speaking regions. This tradition is also commonly referred to as Islamic philosophy or philosophy in the Islamic world.
The classical period of Arabic–Persian philosophy began in the early 9th century CE, roughly 200 years after the death of Muhammad. It continued until the late 12th century CE and was an integral part of the Islamic Golden Age. The early classical period, prior to the work of Avicenna, focused particularly on the translation and interpretation of Ancient Greek philosophy. The late classical period, following Avicenna, was shaped by the engagement with his comprehensive philosophical system.
Arabic–Persian philosophy had a profound influence on Western philosophy. During the early medieval period, many of the Greek texts were unavailable in Western Europe. They became accessible in the later medieval period largely due to their preservation and transmission by the Arabic–Persian intellectual tradition.
Kalam and early classical
The early Arabic intellectual tradition before the classical period was characterized by various theological discussions, primarily focused on understanding the correct interpretation of Islamic revelation. Some historians view this as part of Arabic–Persian philosophy, while others draw a sharper distinction between theology (kalam) and philosophy proper (falsafa). Theologians, who implicitly accepted the truth of revelation, restricted their inquiries to religious topics, such as proofs of the existence of God. Philosophers, on the other hand, investigated a broader range of topics, including those not directly covered by the scriptures.
Early classical Arabic–Persian philosophy was strongly influenced by Ancient Greek philosophy, particularly the works of Aristotle, but also other philosophers such as Plato. This influence came through both translations and comprehensive commentaries. A key motivation for this process was to integrate and reconcile Greek philosophy with Islamic thought. Islamic philosophers emphasized the role of rational inquiry and examined how to harmonize reason and revelation.
Al-Kindi (801–873) is often considered the first philosopher of this tradition, in contrast to the more theological works of his predecessors. He followed Aristotle in regarding metaphysics as the first philosophy and the highest science. From his theological perspective, metaphysics studies the essence and attributes of God. He drew on Plotinus's doctrine of the One to argue for the oneness and perfection of God. For Al-Kindi, God emanates the universe by "bringing being to be from non-being." In the field of psychology, he argued for a dualism that strictly distinguishes the immortal soul from the mortal body. Al-Kindi was a prolific author, producing around 270 treatises during his lifetime.
Al-Farabi (c. 872–950), strongly influenced by Al-Kindi, accepted his emanationist theory of creation. Al-Farabi claimed that philosophy, rather than theology, is the best pathway to truth. His interest in logic earned him the title "the Second Master" after Aristotle. He concluded that logic is universal and forms the foundation of all language and thought, a view that contrasts with certain passages in the Quran that assign this role to Arabic grammar. In his political philosophy, Al-Farabi endorsed Plato's idea that a philosopher-king would be the best ruler. He discussed the virtues such a ruler should possess, the tasks they should undertake, and why this ideal is rarely realized. Al-Farabi also provided an influential classification of the different sciences and fields of inquiry.
Later classical
Avicenna (980–1037) drew on the philosophies of the Ancient Greeks and Al-Farabi to develop a comprehensive philosophical system aimed at providing a holistic and rational understanding of reality that encompasses science, religion, and mysticism. He regarded logic as the foundation of rational inquiry. In the field of metaphysics, Avicenna argued that substances can exist independently, while accidents always depend on something else to exist. For example, color is an accident that requires a body to manifest. Avicenna distinguished between two forms of existence: contingent existence and necessary existence. He posited that God has necessary existence, meaning that God's existence is inherent and not dependent on anything else. In contrast, everything else in the world is contingent, meaning that it was caused by God and depends on Him for its existence. In psychology, Avicenna viewed souls as substances that give life to beings. He categorized souls into different levels: plants possess the simplest form of souls, while the souls of animals and humans have additional faculties, such as the ability to move, sense, and think rationally. In ethics, Avicenna advocated for the pursuit of moral perfection, which can be achieved by adhering to the teachings of the Quran. His philosophical system profoundly influenced both Islamic and Western philosophy.
Al-Ghazali (1058–1111) was highly critical of Avicenna's rationalist approach and his adoption of Greek philosophy. He was skeptical of reason's ability to arrive at a true understanding of reality, God, and religion. Al-Ghazali viewed the philosophy of other Islamic philosophers as problematic, describing it as an illness. In his influential work, The Incoherence of the Philosophers, he argued that many philosophical teachings were riddled with contradictions and incompatible with Islamic faith. However, Al-Ghazali did not completely reject philosophy; he acknowledged its value but believed it should be subordinate to a form of mystical intuition. This intuition, according to Al-Ghazali, relied on direct personal experience and spiritual insight, which he considered essential for attaining a deeper understanding of reality.
Averroes (1126–1198) rejected Al-Ghazali's skeptical outlook and sought to demonstrate the harmony between the philosophical pursuit of knowledge and the spiritual dimensions of faith. Averroes' philosophy was heavily influenced by Aristotle, and he frequently criticized Avicenna for diverging too much from Aristotle's teachings. In the field of psychology, Averroes proposed that there is only one universal intellect shared by all humans. Although Averroes' work did not have a significant impact on subsequent Islamic scholarship, it had a considerable influence on European philosophy.
Post-classical
Averroes is often considered the last major philosopher of the classical era of Islamic philosophy. The traditional view holds that the post-classical period was marked by a decline on several levels. This decline is understood both in terms of the global influence of Islam and in the realm of scientific and philosophical inquiry within the Islamic world. Al-Ghazali's skepticism regarding the power of reason and the role of philosophy played a significant part in this development, leading to a shift in focus towards theology and religious doctrine. However, some contemporary scholars have questioned the extent of this so-called decline. They argue that it is better understood as a shift in philosophical interest rather than an outright decline. According to this view, philosophy did not disappear but was instead integrated into and continued within the framework of theology.
Mulla Sadra (1571–1636) is often regarded as the most influential philosopher of the post-classical era. He was a prominent figure in the philosophical and mystical school known as illuminationism. Mulla Sadra saw philosophy as a spiritual practice aimed at fostering wisdom and transforming oneself into a sage. His metaphysical theory of existence was particularly influential. He rejected the traditional Aristotelian notion that reality is composed of static substances with fixed essences. Instead, he advocated a process philosophy that emphasized continuous change and novelty. According to this view, the creation of the world is not a singular event in the past but an ongoing process. Mulla Sadra synthesized monism and pluralism by claiming that there is a transcendent unity of being that encompasses all individual entities. He also defended panpsychism, arguing that all entities possess consciousness to varying degrees.
The movement of Islamic modernism emerged in the 19th and 20th centuries in response to the cultural changes brought about by modernity and the increasing influence of Western thought. Islamic modernists aimed to reassess the role of traditional Islamic doctrines and practices in the modern world. They sought to reinterpret and adapt Islamic teachings to demonstrate how the core tenets of Islam are compatible with modern principles, particularly in areas such as democracy, human rights, science, and the response to colonialism.
Indian
Indian philosophy is the philosophical tradition that originated on the Indian subcontinent. It can be divided into three main periods: the ancient period, which lasted until the end of the 2nd century BCE, the classical and medieval period, which lasted until the end of the 18th century CE, and the modern period that followed. Indian philosophy is characterized by a deep interest in the nature of ultimate reality, often relating this topic to spirituality and asking questions about how to connect with the divine and reach a state of enlightenment. In this regard, Indian philosophers frequently served as gurus, guiding spiritual seekers.
Indian philosophy is traditionally divided into orthodox and heterodox schools of thought, referred to as āstikas and nāstikas. The exact definitions of these terms are disputed. Orthodox schools typically accept the authority of the Vedas, the religious scriptures of Hinduism, and tend to acknowledge the existence of the self (Atman) and ultimate reality (Brahman). There are six orthodox schools: Nyāya, Vaiśeṣika, Sāṃkhya, Yoga, Mīmāṃsā, and Vedānta. The heterodox schools are defined negatively, as those that do not adhere to the orthodox views. The main heterodox schools are Buddhism and Jainism.
Ancient
The ancient period of Indian philosophy began around 900 BCE and lasted until 200 BCE. During this time, the Vedas were composed. These religious texts form the foundation of much of Indian philosophy, covering a wide range of topics, including hymns and rituals. Of particular philosophical interest are the Upanishads, which are late Vedic texts that discuss profound philosophical topics. Some scholars consider the Vedas as part of philosophy proper, while others view them as a form of proto-philosophy. This period also saw the emergence of non-Vedic movements, such as Buddhism and Jainism.
The Upanishads introduce key concepts in Indian philosophy, such as Atman and Brahman. Atman refers to the self, regarded as the eternal soul that constitutes the essence of every conscious being. Brahman represents the ultimate reality and the highest principle governing the universe. The Upanishads explore the relationship between Atman and Brahman, with a key idea being that understanding their connection is a crucial step on the spiritual path toward liberation. Some Upanishads advocate an ascetic lifestyle, emphasizing withdrawal from the world to achieve self-realization. Others emphasize active engagement with the world, rooted in the belief that individuals have social duties to their families and communities. These duties are prescribed by the concept of dharma, which varies according to one's social class and stage of life. Another influential idea from this period is the concept of rebirth, where individual souls are caught in a cycle of reincarnation. According to this belief, a person's actions in previous lives determine their circumstances in future lives, a principle known as the law of karma.
While the Vedas had a broad influence, not all Indian philosophical traditions originated from them. For example, the non-Vedic movements of Buddhism and Jainism emerged in the 6th century BCE. These movements agreed with certain Vedic teachings about the cycle of rebirth and the importance of seeking liberation but rejected many of the rituals and the social hierarchy described in the Vedas. Buddhism was founded by Siddhartha Gautama (563–483 BCE), who challenged the Vedic concept of Atman by arguing that there is no permanent, stable self. He taught that the belief in a permanent self leads to suffering and that liberation can be attained by realizing the absence of a permanent self.
Jainism was founded by Mahavira (599–527 BCE). Jainism emphasizes respect for all forms of life, a principle expressed in its commitment to non-violence. This principle prohibits harming or killing any living being, whether in action or thought. Another central tenet of Jainism is the doctrine of non-absolutism, which posits that reality is complex and multifaceted, and thus cannot be fully captured by any single perspective or expressed adequately in language. The third pillar of Jainism is the practice of asceticism or non-attachment, which involves detaching oneself from worldly possessions and desires to avoid emotional entanglement with them.
Classical and medieval
The classical and medieval periods in Indian philosophy span roughly from 200 BCE to 1800 CE. Some scholars refer to this entire duration as the "classical period," while others divide it into two distinct periods: the classical period up until 1300 CE, and the medieval period afterward. During the first half of this era, the orthodox schools of Indian philosophy, known as the darśanas, developed. Their foundational scriptures usually take the form of sūtras, which are aphoristic or concise texts that explain key philosophical ideas. The latter half of this period was characterized by detailed commentaries on these sūtras, aimed at providing comprehensive explanations and interpretations.
Samkhya is the oldest of the darśanas. It is a dualistic philosophy that asserts that reality is composed of two fundamental principles: Purusha, or pure consciousness, and Prakriti, or matter. Samkhya teaches that Prakriti is characterized by three qualities known as gunas. Sattva represents calmness and harmony, Rajas corresponds to passion and activity, and Tamas involves ignorance and inertia. The Yoga school initially formed a part of Samkhya and later became an independent school. It is based on the Yoga Sutras of Patanjali and emphasizes the practice of physical postures and various forms of meditation.
Nyaya and Vaisheshika are two other significant orthodox schools. In epistemology, Nyaya posits that there are four sources of knowledge: perception, inference, analogical reasoning, and testimony. Nyaya is particularly known for its theory of logic, which emphasizes that inference depends on prior perception and aims to generate new knowledge, such as understanding the cause of an observed phenomenon. Vaisheshika, on the other hand, is renowned for its atomistic metaphysics. Although Nyaya and Vaisheshika were originally distinct schools, they later became intertwined and were often treated as a single tradition.
The schools of Vedānta and Mīmāṃsā focus primarily on interpreting the Vedic scriptures. Vedānta is concerned mainly with the Upanishads, discussing metaphysical theories and exploring the possibilities of knowledge and liberation. In contrast, Mīmāṃsā is more focused on the ritualistic practices outlined in the Vedas.
Buddhist philosophy also flourished during this period, leading to the development of four main schools of Indian Buddhism: Sarvāstivāda, Sautrāntika, Madhyamaka, and Yogācāra. While these schools agree on the core teachings of Gautama Buddha, they differ on certain key points. The Sarvāstivāda school holds that "all exists," including past, present, and future entities. This view is rejected by the Sautrāntika school, which argues that only the present exists. The Madhyamaka school, founded by Nagarjuna (c. 150–250 CE), asserts that all phenomena are inherently empty, meaning that nothing possesses a permanent essence or independent existence. The Yogācāra school is traditionally interpreted as a form of idealism, arguing that the external world is an illusion created by the mind.
The latter half of the classical period saw further developments in both the orthodox and heterodox schools of Indian philosophy, often through detailed commentaries on foundational sutras. The Vedanta school gained significant influence during this time, particularly with the rise of the Advaita Vedanta school under Adi Shankara (c. 700–750 CE). Shankara advocated for a radical form of monism, asserting that Atman and Brahman are identical, and that the apparent multiplicity of the universe is merely an illusion, or Maya.
This view was modified by Ramanuja (1017–1137 CE), who developed the Vishishtadvaita Vedanta school. Ramanuja agreed that Brahman is the ultimate reality, but he argued that individual entities, such as qualities, persons, and objects, are also real as parts of the underlying unity of Brahman. He emphasized the importance of Bhakti, or devotion to the divine, as a spiritual path and was instrumental in popularizing the Bhakti movement, which continued until the 17th to 18th centuries.
Another significant development in this period was the emergence of the Navya-Nyāya movement within the Nyaya school, which introduced a more sophisticated framework of logic with a particular focus on linguistic analysis.
Modern
The modern period in Indian philosophy began around 1800 CE, during a time of social and cultural changes, particularly due to British rule and the introduction of English education. These changes had various effects on Indian philosophers. Whereas philosophy had previously been conducted predominantly in Sanskrit, many philosophers of this period began to write in English. An example of this shift is the influential multi-volume work A History of Indian Philosophy by Surendranath Dasgupta (1887–1952). Philosophers during this period were influenced both by their own traditions and by new ideas from Western philosophy.
During this period, various philosophers attempted to create comprehensive systems that would unite and harmonize the diverse philosophical and religious schools of thought in India. For example, Swami Vivekananda (1863–1902) emphasized the validity and universality of all religions. He used the principles of Advaita Vedanta to argue that different religious traditions are merely different paths leading to the same spiritual truth. According to Advaita Vedanta, there is only one ultimate reality, without any distinctions or divisions. This school of thought considers the diversity and multiplicity in the world as an illusion that obscures the underlying divine oneness. Vivekananda believed that different religions represent various ways of realizing this divine oneness.
A similar project was pursued by Sri Aurobindo in his integral philosophy. His complex philosophical system seeks to demonstrate how different historical and philosophical movements are part of a global evolution of consciousness. Other contributions to modern Indian philosophy were made by spiritual teachers like Sri Ramakrishna, Ramana Maharshi, and Jiddu Krishnamurti.
Chinese
Chinese philosophy encompasses the philosophical thought associated with the intellectual and cultural heritage of China. Various periodizations of this tradition exist. One common periodization divides Chinese philosophy into four main eras: an early period before the Qin dynasty, a period up to the emergence of the Song dynasty, a period lasting until the end of the Qing dynasty, and a modern era that follows. The three main schools of Chinese philosophy are Confucianism, Daoism, and Buddhism. Other influential schools include Mohism and Legalism.
In traditional Chinese thought, philosophy was not distinctly separated from religious thought and other types of inquiry. It was primarily concerned with ethics and societal matters, often placing less emphasis on metaphysics compared to other traditions. Philosophical practice in China tended to focus on practical wisdom, with philosophers often serving as sages or thoughtful advisors.
Pre-Qin
The first period in Chinese philosophy began in the 6th century BCE and lasted until the rise of the Qin dynasty in 221 BCE. The concept of Dao, often translated as "the Way," played a central role during this period, with different schools of thought interpreting it in various ways. Early Chinese philosophy was heavily influenced by the teachings of Confucius (551–479 BCE). Confucius emphasized that a good life is one that aligns with the Dao, which he understood primarily in terms of moral conduct and virtuous behavior. He argued for the importance of filial piety, the respect for one's elders, and advocated for universal altruism. In Confucian thought, the family is fundamental, with each member fulfilling their role to ensure the family's overall flourishing. Confucius extended this idea to society, viewing the state as a large family where harmony is essential.
Laozi (6th century BCE) is traditionally regarded as the founder of Daoism. Like Confucius, he believed that living a good life involves being in harmony with the Dao. However, unlike Confucius, Laozi focused not only on society but also on the relationship between humans and nature. His concept of wu wei, often translated as "effortless action," was particularly influential. It refers to acting in a natural, spontaneous way that is in accordance with the Dao, which Laozi saw as an ideal state of being characterized by ease and spontaneity.
The Daoist philosopher Zhuangzi (399–295 BCE) employed parables and allegories to express his ideas. To illustrate the concept of wu wei in daily life, he used the example of a butcher who, after years of practice, could cut an ox effortlessly, with his knife naturally following the optimal path without any conscious effort. Zhuangzi is also famous for his story of the butterfly dream, which explores the nature of subjective experience. In this story, Zhuangzi dreams of being a butterfly and, upon waking, questions whether he is a man who dreamt of being a butterfly or a butterfly dreaming of being a man.
The school of Mohism was founded by Mozi (c. 470–391 BCE). Central to Mozi's philosophy is the concept of jian ai, which advocates for universal love or impartial caring. Based on this concept, he promoted an early form of consequentialism, arguing that political actions should be evaluated based on how they contribute to the welfare of the people.
Qin to pre-Song dynasties
The next period in Chinese philosophy began with the establishment of the Qin dynasty in 221 BCE and lasted until the rise of the Song dynasty in 960 CE. This period was influenced by Xuanxue philosophy, legalist philosophy, and the spread of Buddhism. Xuanxue, also known as Neo-Daoism, sought to synthesize Confucianism and Daoism while developing a metaphysical framework for these schools of thought. It posited that the Dao is the root of ultimate reality, leading to debates about whether this root should be understood as being or non-being. Philosophers such as He Yan (c. 195–249 CE) and Wang Bi (226–249 CE) argued that the Dao is a formless non-being that acts as the source of all things and phenomena. This view was contested by Pei Wei (267–300 CE), who claimed that non-being could not give rise to being; instead, he argued that being gives rise to itself.
In the realm of ethics and politics, the school of Legalism became particularly influential. Legalists rejected the Mohist idea that politics should aim to promote general welfare. Instead, they argued that statecraft is about wielding power and establishing order. They also dismissed the Confucian emphasis on virtues and moral conduct as the foundation of a harmonious society. In contrast, Legalists believed that the best way to achieve order was through the establishment of strict laws and the enforcement of punishments for those who violated them.
Buddhism, which arrived in China from India in the 1st century CE, initially focused on the translation of original Sanskrit texts into Chinese. Over time, however, new and distinctive forms of Chinese Buddhism emerged. For instance, Tiantai Buddhism, founded in the 6th century CE, introduced the doctrine of the Threefold Truth, which sought to reconcile two opposing views. The first truth, conventional realism, affirms the existence of ordinary things. The second truth posits that all phenomena are illusory or empty. The third truth attempts to reconcile these positions by claiming that the mundane world is both real and empty at the same time. This period also witnessed the rise of Chan Buddhism, which later gave rise to Zen Buddhism in Japan. In epistemology, Chan Buddhists advocated for a form of immediate acquaintance with reality, asserting that it transcends the distortions of linguistic distinctions and leads to direct knowledge of ultimate reality.
Song to Qing dynasties and modern
The next period in Chinese philosophy began with the emergence of the Song dynasty in 960 CE. Some scholars consider this period to end with the Opium Wars in 1840, while others extend it to the establishment of the Republic of China in 1912. During this era, neo-Confucianism became particularly influential. Unlike earlier forms of Confucianism, neo-Confucianism placed greater emphasis on metaphysics, largely in response to similar developments in Daoism and Buddhism. It rejected the Daoist and Buddhist focus on non-being and emptiness, instead centering on the concept of li as the positive foundation of metaphysics. Li is understood as the rational principle that underlies being and governs all entities. It also forms the basis of human nature and is the source of virtues. Li is often contrasted with qi, which is seen as a material and vital force.
The later part of the Qing dynasty and the subsequent modern period were marked by an encounter with Western philosophy, including the ideas of philosophers like Plato, Kant, and Mill, as well as movements like pragmatism. However, Marx's ideas of class struggle, socialism, and communism were particularly significant. His critique of capitalism and his vision of a classless society led to the development of Chinese Marxism. In this context, Mao Zedong (1893–1976) played a dual role as both a philosopher who expounded these ideas and a revolutionary leader committed to their practical implementation. Chinese Marxism diverged from classical Marxism in several ways. For instance, while classical Marxism assigns the industrial proletariat the leading role in the socialist revolution that follows the rise of the capitalist economy, Mao's Marxism assigns this role to the peasantry under the guidance of the Communist Party.
Traditional Chinese thought also remained influential during the modern period. This is exemplified in the philosophy of Liang Shuming (1893–1988), who was influenced by Confucianism, Buddhism, and Western philosophy. Liang is often regarded as a founder of New Confucianism. He advocated for a balanced life characterized by harmony between humanity and nature as the path to true happiness. Liang criticized the modern European attitude for its excessive focus on exploiting nature to satisfy desires, and he viewed the Indian approach, with its focus on the divine and renunciation of desires, as an extreme in the opposite direction.
Others
Various philosophical traditions developed their own distinctive ideas. In some cases, these developments occurred independently, while in others, they were influenced by the major philosophical traditions.
Japanese
Japanese philosophy is characterized by its engagement with various traditions, including Chinese, Indian, and Western schools of thought. Ancient Japanese philosophy was shaped by Shinto, the indigenous religion of Japan, which included a form of animism that saw natural phenomena and objects as spirits, known as kami. The arrival of Confucianism and Buddhism in the 5th and 6th centuries CE transformed the intellectual landscape and led to various subsequent developments. Confucianism influenced political and social philosophy and was further developed into different strands of neo-Confucianism. Japanese Buddhist thought evolved particularly within the traditions of Pure Land Buddhism and Zen Buddhism.
In the 19th and 20th centuries, interaction with Western thinkers had a major influence on Japanese philosophy, particularly through the schools of existentialism and phenomenology. This period saw the foundation of the Kyoto School, established by Kitaro Nishida (1870–1945). Nishida criticized Western philosophy, particularly Kantianism, for its reliance on the distinction between subject and object. He sought to overcome this dichotomy by developing the concept of basho, which is usually translated as "place" and may be understood as an experiential domain that transcends the subject-object distinction. Other influential members of the Kyoto School include Tanabe Hajime (1885–1962) and Nishitani Keiji (1900–1990).
Latin American
Philosophy in Latin America is often considered part of Western philosophy. However, in a more specific sense, it represents a distinct tradition with its own unique characteristics, despite strong Western influence. Philosophical ideas concerning the nature of reality and the role of humans within it can be found in the region's indigenous civilizations, such as the Aztecs, the Maya, and the Inca. These ideas developed independently of European influence. However, most discussions typically focus on the colonial and post-colonial periods, as very few texts from the pre-colonial period have survived.
The colonial period was dominated by religious philosophy, particularly in the form of scholasticism. In the 18th and 19th centuries, the emphasis shifted to Enlightenment philosophy and the adoption of a scientific outlook, particularly through positivism. An influential current in the later part of the 20th century was the philosophy of liberation, which was inspired by Marxism and focused on themes such as political liberation, intellectual independence, and education.
African
In the broadest sense, African philosophy encompasses philosophical ideas that originated across the entire African continent. However, the term is often understood more narrowly to refer primarily to the philosophical traditions of Western and sub-Saharan Africa. The philosophical tradition in Africa draws from both ancient Egypt and scholarly texts from medieval Africa. While early African intellectual history primarily focused on folklore, wise sayings, and religious ideas, it also included philosophical concepts, such as the idea of Ubuntu. Ubuntu is usually translated as "humanity" or "humanness" and emphasizes the deep moral connections between people, advocating for kindness and compassion.
African philosophy before the 20th century was primarily conducted and transmitted orally by philosophers whose names have been lost to history. This changed in the 1920s with the emergence of systematic African philosophy. A significant movement during this period was excavationism, which sought to reconstruct traditional African worldviews, often with the goal of rediscovering a lost African identity. However, this approach was contested by Afro-deconstructionists, who questioned the existence of a singular African identity. Other influential strands and topics in modern African thought include ethnophilosophy, négritude, pan-Africanism, Marxism, postcolonialism, and critiques of Eurocentrism.
Declarative knowledge
Declarative knowledge is an awareness of facts that can be expressed using declarative sentences. It is also called theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that. It is not restricted to one specific use or purpose and can be stored in books or on computers.
Epistemology is the main discipline studying declarative knowledge. Among other things, it studies the essential components of declarative knowledge. According to a traditionally influential view, it has three elements: it is a belief that is true and justified. As a belief, it is a subjective commitment to the accuracy of the believed claim while truth is an objective aspect. To be justified, a belief has to be rational by being based on good reasons. This means that mere guesses do not amount to knowledge even if they are true. In contemporary epistemology, additional or alternative components have been suggested. One proposal is that no contradicting evidence is present. Other suggestions are that the belief was caused by a reliable cognitive process and that the belief is infallible.
Types of declarative knowledge can be distinguished based on the source of knowledge, the type of claim that is known, and how certain the knowledge is. A central contrast is between a posteriori knowledge, which arises from experience, and a priori knowledge, which is grounded in pure rational reflection. Other classifications include domain-specific knowledge and general knowledge, knowledge of facts, concepts, and principles as well as explicit and implicit knowledge.
Declarative knowledge is often contrasted with practical knowledge and knowledge by acquaintance. Practical knowledge consists of skills, like knowing how to ride a horse. It is a form of non-intellectual knowledge since it does not need to involve true beliefs. Knowledge by acquaintance is a familiarity with something based on first-hand experience, like knowing the taste of chocolate. This familiarity can be present even if the person does not possess any factual information about the object. Some theorists also contrast declarative knowledge with conditional knowledge, prescriptive knowledge, structural knowledge, case knowledge, and strategic knowledge.
Declarative knowledge is required for various activities, such as labeling phenomena as well as describing and explaining them. It can guide the processes of problem-solving and decision-making. In many cases, its value is based on its usefulness in achieving one's goals. However, its usefulness is not always obvious and not all instances of declarative knowledge are valuable. A lot of knowledge taught at school is declarative knowledge. It is said to be stored as explicit memory and can be learned through rote memorization of isolated, singular facts. But in many cases, it is advantageous to foster a deeper understanding that integrates the new information into wider structures and connects it to pre-existing knowledge. Sources of declarative knowledge are perception, introspection, memory, reasoning, and testimony.
Definition and semantic field
Declarative knowledge is an awareness or understanding of facts. It can be expressed through spoken and written language using declarative sentences and can thus be acquired through verbal communication. Examples of declarative knowledge are knowing "that Princess Diana died in 1997" or "that Goethe was 83 when he finished writing Faust". Declarative knowledge involves mental representations in the form of concepts, ideas, theories, and general rules. Through these representations, the person stands in a relationship to a particular aspect of reality by depicting what it is like. Declarative knowledge tends to be context-independent: it is not tied to any specific use and may be employed for many tasks. It includes a wide range of phenomena and encompasses both knowledge of individual facts and general laws. An example of an individual fact is knowing that the atomic mass of gold is 196.97 u. Knowing that the color of leaves of some trees changes in autumn, on the other hand, belongs to general laws. Due to its verbal nature, declarative knowledge can be stored in media like books and hard disks. It may also be processed using computers and plays a key role in various forms of artificial intelligence, for example, in the knowledge base of expert systems.
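The role of declarative knowledge in the knowledge base of an expert system can be illustrated with a minimal sketch. The (subject, attribute) layout and the known helper are illustrative assumptions rather than features of any particular system; the facts themselves are taken from the examples in the text.

```python
# Illustrative sketch: declarative facts stored as (subject, attribute) pairs,
# the way a simple knowledge base might hold context-independent statements.
facts = {
    ("gold", "atomic_mass_u"): 196.97,
    ("Princess Diana", "year_of_death"): 1997,
}

def known(subject, attribute):
    """Return the stored fact, or None when the knowledge base lacks it."""
    return facts.get((subject, attribute))

print(known("gold", "atomic_mass_u"))    # prints 196.97
print(known("gold", "melting_point_c"))  # prints None
```

Because the facts are stored declaratively, the same entries can serve many different tasks, such as lookup, explanation, or inference, without being tied to one specific procedure.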
Terms like theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that are used as synonyms of declarative knowledge and express its different aspects. Theoretical knowledge is knowledge of what is the case, in the past, present, or future, independent of a practical outlook concerning how to achieve a specific goal. Descriptive knowledge is knowledge that involves descriptions of actual or speculative objects, events, or concepts. Propositional knowledge asserts that a proposition or claim about the world is true. This is often expressed using a that-clause, as in "knowing that kangaroos hop" or "knowing that 2 + 2 = 4". For this reason, it is also referred to as knowledge-that. Declarative knowledge contrasts with non-declarative knowledge, which does not concern the explicit comprehension of factual information regarding the world. In this regard, practical knowledge in the form of skills and knowledge by acquaintance as a type of experiential familiarity are not forms of declarative knowledge. The main discipline investigating declarative knowledge is called epistemology. It tries to determine its nature, how it arises, what value it has, and what its limits are.
Components
A central issue in epistemology is to determine the components or essential features of declarative knowledge. This field of inquiry is called the analysis of knowledge. It aims to provide the conditions that are individually necessary and jointly sufficient for a state to amount to declarative knowledge. In this regard, it is similar to how a chemist breaks down a sample by identifying all the chemical elements composing it.
A traditionally influential view states that declarative knowledge has three essential features: it is (1) a belief that is (2) true and (3) justified. This position is referred to as the justified-true-belief theory of knowledge and is often seen as the standard view. This view faced significant criticism following a series of counterexamples given by Edmund Gettier in the latter half of the 20th century. In response, various alternative theories of the elements of declarative knowledge have been suggested. Some see justified true belief as a necessary condition that is not sufficient by itself and discuss additional components that are needed. Another response is to deny that justification is needed and seek a different component to replace it. Some theorists, like Timothy Williamson, reject the idea that declarative knowledge can be deconstructed into various constituent parts. They argue instead that it is a basic and unanalyzable epistemological state.
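The justified-true-belief analysis described above can be stated schematically. The predicate symbols K, B, T, and J are illustrative notation introduced here, not part of the source.

```latex
K(S, p) \iff
\underbrace{B(S, p)}_{S \text{ believes } p}
\;\land\;
\underbrace{T(p)}_{p \text{ is true}}
\;\land\;
\underbrace{J(S, p)}_{S \text{ is justified in believing } p}
```

Gettier's counterexamples target the right-to-left direction of this equivalence: they describe cases in which all three conditions on the right hold while knowledge is nonetheless absent.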
Belief
One commonly accepted component of knowledge is belief. In this sense, whoever knows that whales are animals automatically also believes that whales are animals. A belief is a mental state that affirms that something is the case. As an attitude toward a proposition, it belongs to the subjective side of knowledge. Some theorists, like Luis Villoro, distinguish between weak and strong beliefs. Having a weak belief implies that the person merely presumes that something is the case. They guess that the claim is probably correct while acknowledging at the same time that they might very well be mistaken about it. This contrasts with strong belief, which implies a substantial commitment to the believed claim. It involves certainty in the form of being sure about it. For declarative knowledge, this stronger sense of belief is relevant.
A few epistemologists, like Katalin Farkas, claim that, at least in some cases, knowledge is not a form of belief but a different type of mental state. One argument for this position is based on statements like "I don't believe it, I know it", which may be used to express that the person is very certain and has good reason to affirm this claim. However, this argument is not generally accepted since knowing something does not imply that the person disbelieves the claim. A further explanation is to hold that this statement is a linguistic tool to emphasize that the person is well-informed. In this regard, it only denies that a weak belief exists without rejecting that a stronger form of belief is involved.
Truth
Beliefs are either true or false depending on whether they accurately represent reality. Truth is usually seen as one of the essential components of knowledge. This means that it is impossible to know a claim that is false. For example, it is possible to believe that Hillary Clinton won the 2016 US Presidential election but nobody can know it because this event did not occur. That a proposition is true does not imply that it is common knowledge, that an irrefutable proof exists, or that someone is thinking about it. Instead, it only means that it presents things as they are. For example, when flipping a coin, it may be true that it will land heads even if it is not possible to predict this with certainty. Truth is an objective factor of knowledge that goes beyond the mental sphere of belief since it usually depends on what the world outside the person's mind is like.
Some epistemologists hold that there are at least some forms of knowledge that do not require truth. For example, Joseph Thomas Tolliver argues that some mental states amount to knowledge only because of the causes and effects they have. This is the case even if they do not represent anything and are therefore neither true nor false. A different outlook is found in the field of the anthropology of knowledge, which studies how knowledge is acquired, stored, retrieved, and communicated. In this discipline, knowledge is often understood in a very wide sense that is roughly equivalent to understanding and culture. In this regard, the main interest is usually about how people ascribe truth values to meaning-contents, like when affirming an assertion, independent of whether this assertion is true or false. Despite these positions, it is widely accepted in epistemology that truth is an essential component of declarative knowledge.
Justification
In epistemology, justification means that a claim is supported by evidence or that a person has good reasons for believing it. This implies some form of appraisal in relation to an evaluative standard of rationality. For example, a person who just checked their bank account and saw that their balance is 500 dollars has a good reason to believe that they have 500 dollars in their bank account. However, justification by itself does not imply that a belief is true. For example, if someone reads the time from their clock they may form a justified belief about the current time even if the clock stopped a while ago and shows a false time now. If a person has a justified belief then they are often able to articulate what this belief is and to provide arguments stating the reasons supporting it. However, this ability to articulate one's reasons is not an essential requirement of justification.
Justification is usually included as a component of knowledge to exclude lucky guesses. For example, a compulsive gambler flipping a coin may be certain that it will land heads this time without a good reason for this belief. In this case, the belief does not amount to knowledge even if it turns out that it was true. This observation can be easily explained by including justification as an essential component. This implies that the gambler's belief does not amount to knowledge because it lacks justification. In this regard, mere true opinion is not enough to establish knowledge. A central issue in epistemology concerns the standards of justification, i.e., what conditions have to be fulfilled for a belief to be justified. Internalists understand justification as a purely subjective component, akin to belief. They claim that a belief is justified if it stands in the right relation to other mental states of the believer. For example, perceptual experiences can justify beliefs about the perceived object. This contrasts with externalists, who claim that justification involves objective factors that are external to the person's mind. Such factors can include causal relations with the object of the belief or that reliable cognitive processes are responsible for the formation of the belief.
A closely related issue concerns the question of how the different mental states have to be related to each other to be justified. For example, one belief may be supported by another belief. However, it is questionable whether this is sufficient for justification if the second belief is itself not justified. For example, a person may believe that Ford cars are cheaper than BMWs because they heard this from a friend. However, this belief may not be justified if there is no good reason to think that the friend is a reliable source of information. This can lead to an infinite regress since whatever reason is provided for the friend's reliability may itself lack justification. Three popular responses to this problem are foundationalism, coherentism, and infinitism. According to foundationalists, some reasons are foundational and do not depend on other reasons for their justification. Coherentists also reject the idea that an infinite chain of reasons is needed and argue that different beliefs can mutually support each other without one being more basic than the others. Infinitists, on the other hand, accept the idea that an infinite chain of reasons is required.
Many debates concerning the nature of declarative knowledge focus on the role of justification, specifically whether it is needed at all and what else might be needed to complement it. Influential in this regard was a series of thought experiments by Edmund Gettier. They present concrete cases of justified true beliefs that fail to amount to knowledge. The reason for their failure is a type of epistemic luck. This means that the justification is not relevant to whether the belief is true. In one thought experiment, Smith and Jones apply for a job and before officially declaring the result, the company president tells Smith that Jones will get the job. Smith saw that Jones has 10 coins in his pocket so he comes to form the justified belief that the successful candidate has 10 coins in his pocket. In the end, it turns out that Smith gets the job after all. By lucky coincidence, Smith also has 10 coins in his pocket. Gettier claims that, because of this coincidence, Smith's belief that the successful candidate has 10 coins in his pocket does not amount to knowledge. The belief is justified and true but the justification is not relevant to the truth.
Others
In response to Gettier's thought experiments, various further components of declarative knowledge have been suggested. Some of them are intended as additional elements besides belief, truth, and justification while others are understood as replacements for justification.
According to defeasibility theory, an additional factor besides having evidence in favor of the belief is that no defeating evidence is present. Defeating evidence for a belief is evidence that undermines the justification of the belief. For example, if a person looks outside the window and sees a rainbow then this impression justifies their belief that there is a rainbow. However, if the person just ate a psychedelic drug then this is defeating evidence since it undermines the reliability of their experiences. Defeasibility theorists claim that, in this case, the belief does not amount to knowledge because defeating evidence is present. As an additional component of knowledge, they require that the person possess no evidence defeating the belief. Some theorists demand the stronger requirement that there is no true proposition that would defeat the belief, independent of whether the person is aware of this proposition or not. A closely related theory holds that beliefs can only amount to knowledge if they are not inferred from a falsehood.
A further theory is based on the idea that knowledge states should be responsive to what the world is like. One suggested component in this regard is that the belief is safe or sensitive. This means that the person has the belief because it is true but that they would not hold the belief if it was false. In this regard, the person's belief tracks the state of the world.
Some theories do not try to provide additional requirements but instead propose replacing justification with alternative components. For example, according to some forms of reliabilism, a true belief amounts to knowledge if it was formed through a reliable cognitive process. A cognitive process is reliable if it produces mostly true beliefs in actual situations and would also do so in counterfactual situations.
Examples of reliable processes are perception and reasoning. An outcome of reliabilism is that knowledge is not restricted to humans. The reason is that reliable belief-formation processes may also be present in other animals, like dogs, apes, or rats, even if they do not possess justification for their beliefs. Virtue epistemology is a closely related approach that understands knowledge as the manifestation of epistemic virtues. It agrees with regular forms of reliabilism that knowledge is not a matter of luck but puts additional emphasis on the evaluative aspect of knowledge and the underlying skills responsible for it.
According to causal theories of knowledge, a necessary element of knowing a fact is that this fact somehow caused the knowledge of it. This is the case, for example, if a belief about the color of a house is based on a perceptual experience, which causally connects the house to the belief. This causal connection does not have to be direct and can be mediated through steps like activating memories and drawing inferences.
In many cases, the goal of suggesting additional components is to avoid cases of epistemic luck. In this regard, some theorists have argued that the additional component would have to ensure that the belief is true. This approach is reflected in the idea that knowledge implies a form of certainty. But it sets the standards of knowledge very high and may require that a belief has to be infallible to amount to knowledge. This means that the justification ensures that the belief is true. For example, Richard Kirkham argues that the justification required for knowledge must be based on self-evident premises that deductively entail the held belief. Such a position leads to a form of skepticism about knowledge since the great majority of regular beliefs do not live up to these requirements. It would imply that people know very little and that most who claim to know a certain fact are mistaken. However, a more common view among epistemologists is that knowledge does not require infallibility and that many knowledge claims in everyday life are true.
Types
Declarative knowledge arises in many forms. It is possible to distinguish between them based on the type of content of what is known. For example, empirical knowledge is knowledge of observable facts while conceptual knowledge is an understanding of general categorizations and theories as well as the relations between them. Other examples are ethical, religious, scientific, mathematical, and logical knowledge as well as self-knowledge. A further distinction focuses on the mode of how something is known. On a causal level, different sources of knowledge correspond to different types of declarative knowledge. Examples are knowledge through perception, introspection, memory, reasoning, and testimony.
On a logical level, forms of knowledge can be distinguished based on how a knowledge claim is supported by its premises. This classification corresponds to the different forms of logical reasoning, such as deductive and inductive reasoning. A closely related categorization focuses on the strength of the source of the justification. It distinguishes between probabilistic and apodictic knowledge. The distinction between a priori and a posteriori knowledge, on the other hand, focuses on the type of the source. These classifications overlap with each other at various points. For example, a priori knowledge is closely connected to apodictic, conceptual, deductive, and logical knowledge. A posteriori knowledge, on the other hand, is linked to probabilistic, empirical, inductive, and scientific knowledge. Self-knowledge may be identified with introspective knowledge.
The distinction between a priori and a posteriori knowledge is determined by the role of experience and matches the contrast between empirical and non-empirical knowledge. A posteriori knowledge is knowledge from experience. This means that experience, like regular perception, is responsible for its formation and justification. Knowing that the door of one's house is green is one example of a posteriori knowledge since some form of sensory observation is required. For a priori knowledge, on the other hand, no experience is required. It is based on pure rational reflection and can neither be verified nor falsified through experience. Examples are knowing that 7 + 5 = 12 or that whatever is red everywhere is not blue everywhere. In this context, experience means primarily sensory observation but can also include related processes, like introspection and memory. However, it does not include all conscious phenomena. For example, having a rational insight into the solution of a mathematical problem does not mean that the resulting knowledge is a posteriori. And knowing that 7 + 5 = 12 is a priori knowledge even though some form of consciousness is involved in learning what symbols like "7" and "+" mean and in becoming aware of the associated concepts.
One classification distinguishes between knowledge of facts, concepts, and principles. Knowledge of facts pertains to the association of concrete information, for example, that the red color on a traffic light means stop or that Christopher Columbus sailed in 1492 from Spain to America. Knowledge of concepts applies to more abstract and general ideas that group together many individual phenomena. For example, knowledge of the concept of jogging implies knowing how it differs from walking and running as well as being able to apply this concept to concrete cases. Knowledge of principles is an awareness of general patterns of cause and effect, including rules of thumb. It is a form of understanding how things work and being aware of the explanation of why something happened the way it did. Examples are that if there is lightning then there will be thunder or if a person robs a bank then they may go to jail. Similar classifications distinguish between declarative knowledge of persons, events, principles, maxims, and norms.
Declarative knowledge is traditionally identified with explicit knowledge and contrasted with tacit or implicit knowledge. Explicit knowledge is knowledge of which the person is aware and which can be articulated. It is stored in explicit memory. Implicit knowledge, on the other hand, is a form of embodied knowledge that the person cannot articulate. The traditional association of declarative knowledge with explicit knowledge is not always accepted in the contemporary literature. Some theorists argue that there are forms of implicit declarative knowledge. A putative example is a person who has learned a concept and is now able to correctly classify objects according to this concept even though they are not able to provide a verbal rationale for their decision.
A further contrast is between domain-specific and general knowledge. Domain-specific knowledge applies to a narrow subject or a particular task but is useless outside this focus. General knowledge, on the other hand, concerns wide topics or has general applications. For example, declarative knowledge of the rules of grammar belongs to general knowledge while having memorized the lines of the poem The Raven is domain-specific knowledge. This distinction is based on a continuum of cases that are more or less general without a clear-cut line between the types. According to Paul Kurtz, there are six types of descriptive knowledge: knowledge of available means, of consequences, of particular facts, of general causal laws, of established values, and of basic needs. Another classification distinguishes between structural knowledge and perceptual knowledge.
Contrast with other forms of knowledge
Declarative knowledge is often contrasted with other types of knowledge. A common classification in epistemology distinguishes it from practical knowledge and knowledge by acquaintance. All of them can be expressed with the verb "to know" but their differences are reflected in the grammatical structures used to articulate them. Declarative knowledge is usually expressed with a that-clause, as in "Ann knows that koalas sleep most of the time". For practical knowledge, a how-clause is used instead, for example, "Dave knows how to read the time on a clock". Knowledge by acquaintance can be articulated using a direct object without a preposition, as in "Emily knows Obama personally".
Practical knowledge consists of skills. Knowing how to ride a horse or how to play the guitar are forms of practical knowledge. The terms "procedural knowledge" and "knowledge-how" are often used as synonyms. It differs from declarative knowledge in various respects. It is usually imprecise and cannot be proven by deducing it from premises. It is non-propositional and, for the most part, cannot be taught in the abstract without concrete exercise. In this regard, it is a form of non-intellectual knowledge. It is tied to a specific goal and its value lies not in being true, but rather in how effective it is at accomplishing its goal. Practical knowledge can be present without any beliefs and may even involve false beliefs. For example, an experienced ball player may know how to catch a ball despite having false beliefs. They may believe that their eyes continuously track the ball. But, in truth, their eyes perform a series of abrupt movements that anticipate the ball's trajectory rather than following it. Another difference is that declarative knowledge is commonly only ascribed to animals with highly developed minds, like humans. Practical knowledge, on the other hand, is more prevalent in the animal kingdom. For example, ants know how to walk through the kitchen despite presumably lacking the mental capacity for the declarative knowledge that they are walking through the kitchen.
Declarative knowledge is also different from knowledge by acquaintance, which is also known as objectual knowledge or knowledge-of. Knowledge by acquaintance is a form of familiarity or direct awareness that a person has with another person, a thing, or a place. For example, a person who has tasted the flavor of chocolate knows chocolate in this sense, just like a person who visited Lake Taupō knows Lake Taupō. Knowledge by acquaintance does not imply that the person can provide factual information about the object. It is a form of non-inferential knowledge that depends on first-hand experience. For example, a person who has never left their home country may acquire a lot of declarative knowledge about other countries by reading books without any knowledge by acquaintance.
Knowledge by acquaintance plays a central role in the epistemology of Bertrand Russell. He holds that it is more basic than other forms of knowledge since to understand a proposition, one has to be acquainted with its constituents. According to Russell, knowledge by acquaintance covers a wide range of phenomena, such as thoughts, feelings, desires, memory, introspection, and sense data. It can happen in relation to particular things and universals. Knowledge of physical objects, on the other hand, belongs to declarative knowledge, which he calls knowledge by description. It also has a central role to play since it extends the realm of knowledge to things that lie beyond the personal sphere of experience.
Some theorists, like Anita Woolfolk et al., contrast declarative knowledge and procedural knowledge with conditional knowledge. According to this view, conditional knowledge is about knowing when and why to use declarative and procedural knowledge. For many issues, like solving math problems and learning a foreign language, it is not sufficient to know facts and general procedures if the person does not know under which situations to use them. To master a language, for example, it is not enough to acquire declarative knowledge of verb forms if one lacks conditional knowledge of when it is appropriate to use them. Some theorists understand conditional knowledge as one type of declarative knowledge and not as a distinct category.
A further distinction is between declarative or descriptive knowledge in contrast to prescriptive knowledge. Descriptive knowledge represents what the world is like. It describes and classifies what phenomena are there and in what relations they stand toward each other. It is interested in what is true independently of what people want. Prescriptive knowledge is not about what things actually are like but what they should be like. This concerns specifically the question of what purposes people should follow and how they should act. It guides action by showing what people should do to fulfill their needs and desires. In this regard, it has a more subjective component since it depends on what people want. Some theorists equate prescriptive knowledge with procedural knowledge. But others distinguish them based on the claim that prescriptive knowledge is about what should be done while procedural knowledge is about how to do it. Other classifications contrast declarative knowledge with structural knowledge, meta knowledge, heuristic knowledge, control knowledge, case knowledge, and strategic knowledge.
Some theorists argue that one type of knowledge is more basic than others. For example, Robert E. Haskell claims that declarative knowledge is the basic form of knowledge since it constitutes a general framework of understanding. According to him, it is a precondition for acquiring other forms of knowledge. However, this position is not generally accepted and philosophers like Gilbert Ryle defend the opposing thesis that declarative knowledge presupposes procedural knowledge.
Value
Declarative knowledge plays a central role in human understanding of the world. It underlies activities such as labeling phenomena, describing them, explaining them, and communicating with others about them. The value of declarative knowledge depends in part on its usefulness in helping people achieve their objectives. For example, to treat a disease, knowledge of its symptoms and possible cures is beneficial. Or if a person has applied for a new job then knowing where and when the interview takes place is important. Due to its context-independence, declarative knowledge can be used for a great variety of tasks and because of its compact nature, it can be easily stored and retrieved. Declarative knowledge can be useful for procedural knowledge, for example, by knowing the list of steps needed to execute a skill. It also has a key role in understanding and solving problems and can guide the process of decision-making. A related issue in the field of epistemology concerns the question of whether declarative knowledge is more valuable than true belief. This is not obvious since, for many purposes, true belief is as useful as knowledge to achieve one's goals.
Declarative knowledge is primarily desired in cases where it is immediately useful. But not all forms of knowledge are useful. For example, indiscriminately memorizing phone numbers found in a foreign phone book is unlikely to result in useful declarative knowledge. However, it is often difficult to assess the value of knowledge if one does not foresee a situation where it would be useful. In this regard, it can happen that the value of apparently useless knowledge is only discovered much later. For example, Maxwell's equations linking magnetism to electricity were considered useless at the time of discovery until experimental scientists discovered how to detect electromagnetic waves. Occasionally, knowledge may have a negative value, for example, when it hinders someone from doing what is needed because their knowledge of the associated dangers paralyzes them.
Learning
The value of knowledge is specifically relevant in the field of education. It is needed to decide which parts of the vast amount of knowledge should become part of the curriculum to be passed on to students. Many types of learning at school involve the acquisition of declarative knowledge. One form of declarative knowledge learning is so-called rote learning. It is a memorization technique in which the claim to be learned is repeated again and again until it is fully memorized. Other forms of declarative knowledge learning focus more on developing an understanding of the subject. This means that the learner should not only be able to repeat the claim but also to explain, describe, and summarize it. For declarative knowledge to be useful, it is often advantageous if it is embedded in a meaningful structure. For example, learning about new concepts and ideas involves developing an understanding of how they are related to each other and to what is already known.
According to Ellen Gagné, learning declarative knowledge happens in four steps. In the first step, the learner comes into contact with the material to be learned and apprehends it. Next, they translate this information into propositions. Following that, the learner's memory triggers and activates related propositions. As the last step, new connections are established and inferences are drawn. A similar process is described by John V. Dempsey, who stresses that the new information must be organized, divided, and linked to existing knowledge. He distinguishes between learning that involves recalling information in contrast to learning that only requires being able to recognize patterns. A related theory is defended by Anthony J. Rhem. He holds that the process of learning declarative knowledge involves organizing new information into groups. Next, links between the groups are drawn and the new information is connected to pre-existing knowledge.
Some theorists, like Robert Gagné and Leslie Briggs, distinguish between types of declarative knowledge learning based on the cognitive processes involved: learning of labels and names, of facts and lists, and of organized discourse. Learning labels and names requires forming a mental connection between two elements. Examples include memorizing foreign vocabulary and learning the capital city of each state. Learning facts involves relationships between concepts, for example, that "Ann Richards was the governor of Texas in 1991". This process is usually easier if the person is not dealing with isolated facts but possesses a network of information into which the new fact is integrated. The case for learning lists is similar since it involves the association of many items. Learning organized discourse encompasses not discrete facts or items but a wider comprehension of the meaning present in an extensive body of information.
Various sources of declarative knowledge are discussed in epistemology. They include perception, introspection, memory, reasoning, and testimony. Perception is usually understood as the main source of empirical knowledge. It is based on the senses, like seeing that it is raining when looking out the window. Introspection is similar to perception but provides knowledge of the internal sphere and not of external objects. An example is directing one's attention to a pain in one's toe to assess whether it has intensified.
Memory differs from perception and introspection in that it does not produce new knowledge but merely stores and retrieves pre-existing knowledge. As such, it depends on other sources. It is similar to reasoning in this regard, which starts from a known fact and arrives at new knowledge by drawing inferences from it. Empiricists hold that this is the only way reason can arrive at knowledge while rationalists contend that some claims can be known by pure reason independent of additional sources. Testimony is different from the other sources since it does not have its own cognitive faculty. Rather, it is grounded in the notion that people can acquire knowledge through communication with others, for example, by speaking to someone or by reading a newspaper. Some religious philosophers include religious experiences (through the so-called sensus divinitatis) as a source of knowledge of the divine. However, such claims are controversial.
References
Citations
Sources
Concepts in epistemology
Psychological concepts
Intelligence
Mental content
Definitions of knowledge | 0.781984 | 0.997322 | 0.779891 |
Contextualism | Contextualism, also known as epistemic contextualism, is a family of views in philosophy which emphasize the context in which an action, utterance, or expression occurs. Proponents of contextualism argue that, in some important respect, the action, utterance, or expression can only be understood relative to that context. Contextualist views hold that philosophically controversial concepts, such as "meaning P", "knowing that P", "having a reason to A", and possibly even "being true" or "being right" only have meaning relative to a specified context. Other philosophers contend that context-dependence leads to complete relativism.
In ethics, "contextualist" views are often closely associated with situational ethics, or with moral relativism.
Contextualism in architecture is a theory of design where modern building types are harmonized with urban forms usual to a traditional city.
In epistemology, contextualism is the treatment of the word 'knows' as context-sensitive. Context-sensitive expressions are ones that "express different propositions relative to different contexts of use". For example, some terms generally considered context-sensitive are indexicals, such as 'I', 'here', and 'now'; while 'I' has a constant linguistic meaning in all contexts of use, whom it refers to varies with context. Similarly, epistemic contextualists argue that the word 'knows' is context sensitive, expressing different relations in some different contexts.
Overview
Contextualism was introduced, in part, to undermine skeptical arguments that have this basic structure:
I don't know that I am not in a skeptical scenario H (e.g., I'm not a brain in a vat)
If I don't know that H is not the case, then I don't know an ordinary proposition O (e.g., I have hands)
Conclusion: Therefore, I don't know O
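With K read as the knowledge operator ("the subject knows that ..."), the skeptical argument can be sketched in epistemic-logic notation; this rendering is a schematic gloss showing that the argument is an instance of modus ponens:

```latex
\begin{align*}
&(1)\quad \neg K(\neg H) && \text{I don't know that the skeptical scenario is false} \\
&(2)\quad \neg K(\neg H) \rightarrow \neg K(O) && \text{if so, I don't know the ordinary proposition} \\
&(3)\quad \therefore\; \neg K(O) && \text{so I don't know the ordinary proposition}
\end{align*}
```

Since this form is valid, the contextualist strategy targets not the inference itself but the context-sensitivity of the knowledge claims that appear in it.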
The contextualist solution is not to deny any premise, nor to say that the argument does not follow, but to link the truth value of (3) to the context, and to say that we can reject (3) in contexts—such as everyday conversational contexts—where different requirements apply to saying we know.
The main tenet of contextualist epistemology is that knowledge attributions are context-sensitive: the truth values of sentences containing "know" depend on the context in which they are used. In a skeptical context, a statement like 'I know that I have hands' would be false. The same proposition uttered in an ordinary context—e.g., in a cafe with friends—would be true, and its negation would be false. When we participate in philosophical discourses of the skeptical sort, we seem to lose our knowledge; once we leave the skeptical context, we can truthfully say we have knowledge.
That is, when we attribute knowledge to someone, the context in which we use the term 'knowledge' determines the standards relative to which "knowledge" is being attributed (or denied). If we use it in everyday conversational contexts, the contextualist maintains, most of our claims to "know" things are true, despite skeptical attempts to show we know little or nothing. But if the term 'knowledge' is used when skeptical hypotheses are being discussed, we count as "knowing" very little, if anything. Contextualists use this to explain why skeptical arguments can be persuasive, while at the same time protecting the correctness of our ordinary claims to "know" things. This theory does not entail that a person gains and loses knowledge from one moment to the next, which would be an unsatisfying epistemological result. What contextualism entails is that in one context an utterance of a knowledge attribution can be true, and in a context with higher standards for knowledge, the same statement can be false. This happens in the same way that 'I' can correctly be used (by different people) to refer to different people at the same time.
What varies with context is how well-positioned a subject must be with respect to a proposition to count as "knowing" it. Contextualism in epistemology then is a semantic thesis about how 'knows' works in English, not a theory of what knowledge, justification, or strength of epistemic position consists in. However, epistemologists combine contextualism with views about what knowledge is to address epistemological puzzles and issues, such as skepticism, the Gettier problem, and the Lottery paradox.
Contextualist accounts of knowledge became increasingly popular toward the end of the 20th century, particularly as responses to the problem of skepticism. Contemporary contextualists include Michael Blome-Tillmann, Michael Williams, Stewart Cohen, Keith DeRose, David Lewis, Gail Stine, and George Mattey.
The standards for attributing knowledge to someone, the contextualist claims, vary from one user's context to the next. Thus, if I say "John knows that his car is in front of him", the utterance is true if and only if (1) John believes that his car is in front of him, (2) the car is in fact in front of him, and (3) John meets the epistemic standards that my (the speaker's) context selects. This is a loose contextualist account of knowledge, and there are many significantly different theories of knowledge that can fit this contextualist template and thereby come in a contextualist form.
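The three conditions of this loose account can be put schematically (a sketch with assumed notation: $B_S(p)$ abbreviates "S believes that p" and $E_C(S, p)$ abbreviates "S meets the epistemic standards selected by the attributor's context C"):

```latex
\text{``}S\text{ knows that }p\text{'' is true in context }C
\;\iff\; B_S(p) \,\wedge\, p \,\wedge\, E_C(S, p)
```

On this rendering, only the third conjunct varies with context: raising the conversational standards strengthens $E_C$, while the belief and truth conditions stay fixed.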
For instance, an evidentialist account of knowledge can be an instance of contextualism if it's held that strength of justification is a contextually varying matter. And one who accepts a relevant-alternatives account of knowledge can be a contextualist by holding that what range of alternatives are relevant is sensitive to conversational context. DeRose adopts a type of modal or "safety" (as it has since come to be known) account on which knowledge is a matter of one's belief as to whether or not p is the case matching the fact of the matter, not only in the actual world, but also in the sufficiently close possible worlds: Knowledge amounts to there being no "nearby" worlds in which one goes wrong with respect to p. But how close is sufficiently close? It's here that DeRose takes the modal account of knowledge in a contextualist direction, for the range of "epistemically relevant worlds" is what varies with context: In high standards contexts one's belief must match the fact of the matter through a much wider range of worlds than is relevant to low standards contexts.
It has also been claimed that neurophilosophy has contextualization as one of its goals.
Contextualist epistemology has been criticized by several philosophers. Contextualism is opposed to any general form of Invariantism, which claims that knowledge is not context-sensitive (i.e. it is invariant). More recent criticism has been in the form of rival theories, including Subject-Sensitive Invariantism (SSI), mainly due to the work of John Hawthorne (2004), and Interest-Relative Invariantism (IRI), due to Jason Stanley (2005). SSI claims that it is the context of the subject of the knowledge attribution that determines the epistemic standards, whereas Contextualism maintains it is the attributor. IRI, on the other hand, argues that it is the context of the practical interests of the subject of the knowledge attribution that determines the epistemic standards. Stanley writes that bare IRI is "simply the claim that whether or not someone knows that p may be determined in part by practical facts about the subject's environment." ("Contextualism" is a misnomer for either form of Invariantism, since "Contextualism" among epistemologists is considered to be restricted to a claim about the context-sensitivity of knowledge attributions (or the word "knows"). Thus, any view which maintains that something other than knowledge attributions are context-sensitive is not, strictly speaking, a form of Contextualism.)
An alternative to contextualism called contrastivism has been proposed by Jonathan Schaffer. Contrastivism, like contextualism, uses semantic approaches to tackle the problem of skepticism.
Recent work in experimental philosophy has taken an empirical approach to testing the claims of contextualism and related views. This research has proceeded by conducting experiments in which ordinary non-philosophers are presented with vignettes, then asked to report on the status of the knowledge ascription. The studies address contextualism by varying the context of the knowledge ascription, e.g. how important it is that the agent in the vignette has accurate knowledge.
In the studies completed up to 2010, no support for contextualism has been found: stakes have no impact on evidence. More specifically, non-philosophical intuitions about knowledge attributions are not affected by the importance to the potential knower of the accuracy of that knowledge.
See also
Anekantavada
Degrees of truth
Exclusive disjunction
False dilemma
Fuzzy logic
Logical disjunction
Logical value
Multi-valued logic
Perspectivism
Principle of bivalence
Propositional attitude
Propositional logic
Relativism
Rhizome (philosophy)
Semiotic anthropology
Truth
Footnotes
References
Annis, David. 1978. "A Contextualist Theory of Epistemic Justification", in American Philosophical Quarterly, 15: 213–219.
Cappelen, H. & Lepore, E. 2005. Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism, Blackwell Publishing.
Cohen, Stewart. 1998. "Contextualist Solutions to Epistemological Problems: Scepticism, Gettier, and the Lottery." Australasian Journal of Philosophy, 76: 289–306.
Cohen, Stewart. 1999. "Contextualism, Skepticism, and Reasons", in Tomberlin 1999.
DeRose, Keith. 1992. "Contextualism and Knowledge Attributions", Philosophy and Phenomenological Research 52: 913–929.
DeRose, Keith. 1995. "Solving the Skeptical Problem," Philosophical Review 104: 1-52.
DeRose, Keith. 1999. "Contextualism: An Explanation and Defense", in Greco and Sosa 1999.
DeRose, Keith. 2002. "Assertion, Knowledge, and Context," Philosophical Review 111: 167–203.
DeRose, Keith. 2009. The Case for Contextualism: Knowledge, Skepticism and Context, Vol. 1, Oxford: Oxford University Press.
Feldman, Richard. 1999. "Contextualism and Skepticism", in Tomberlin 1999.
Greco, J. & Sosa, E. 1999. Blackwell Guide to Epistemology, Blackwell Publishing.
Hawthorne, John. 2004. Knowledge and Lotteries, Oxford: Oxford University Press.
Mackie, J. L. 1977. Ethics: Inventing Right and Wrong, Viking Press.
May, Joshua, Sinnott-Armstrong, Walter, Hull, Jay G. & Zimmerman, Aaron. 2010. "Practical Interests, Relevant Alternatives, and Knowledge Attributions: An Empirical Study", Review of Philosophy and Psychology (formerly European Review of Philosophy), special issue on Psychology and Experimental Philosophy ed. by Edouard Machery, Tania Lombrozo, & Joshua Knobe, Vol. 1, No. 2, pp. 265–273.
Price, A. W. 2008. Contextuality in Practical Reason, Oxford University Press.
Schaffer, Jonathan. 2004. "From Contextualism to Contrastivism," Philosophical Studies 119: 73–103.
Schiffer, Stephen. 1996. "Contextualist Solutions to Scepticism", Proceedings of the Aristotelian Society, 96:317-33.
Stanley, Jason. 2005. Knowledge and Practical Interests. New York: Oxford University Press.
Timmons, Mark. 1998. Morality Without Foundations: A Defense of Ethical Contextualism, Oxford University Press US.
Tomberlin, James (ed.). 1999. Philosophical Perspectives 13, Epistemology, Blackwell Publishing.
External links
A Brief History of Contextualism - DeRose on the history of contextualism in epistemology.
Contextualism in Epistemology - an article by Tim Black on the Internet Encyclopedia of Philosophy.
Consensus reality
Metaethics
Metatheory
Relativism
Skepticism
Systemic functional linguistics
Ethical theories
Theories of justification | 0.792146 | 0.98426 | 0.779677 |
Existentialism | Existentialism is a family of views and forms of philosophical inquiry that explore the existence of the human individual and conclude that, despite the absurdity or incomprehensibility of the universe, individuals must still embrace responsibility for their actions and strive to lead authentic lives. In examining meaning, purpose, and value, existentialist thought often includes concepts such as existential crises, angst, courage, and freedom.
Existentialism is associated with several 19th- and 20th-century European philosophers who shared an emphasis on the human subject, despite often profound differences in thought. Among the earliest figures associated with existentialism are philosophers Søren Kierkegaard, Friedrich Nietzsche and novelist Fyodor Dostoevsky, all of whom critiqued rationalism and concerned themselves with the problem of meaning. In the 20th century, prominent existentialist thinkers included Jean-Paul Sartre, Albert Camus, Martin Heidegger, Simone de Beauvoir, Karl Jaspers, Gabriel Marcel, and Paul Tillich.
Many existentialists considered traditional systematic or academic philosophies, in style and content, to be too abstract and removed from concrete human experience. A primary virtue in existentialist thought is authenticity. Existentialism would influence many disciplines outside of philosophy, including theology, drama, art, literature, and psychology.
Existentialist philosophy encompasses a range of perspectives, but it shares certain underlying concepts. Among these, a central tenet of existentialism is that personal freedom, individual responsibility, and deliberate choice are essential to the pursuit of self-discovery and the determination of life's meaning.
Etymology
The term existentialism was coined by the French Catholic philosopher Gabriel Marcel in the mid-1940s. When Marcel first applied the term to Jean-Paul Sartre, at a colloquium in 1945, Sartre rejected it. Sartre subsequently changed his mind and, on October 29, 1945, publicly adopted the existentialist label in a lecture to the Club Maintenant in Paris, published as L'existentialisme est un humanisme (Existentialism Is a Humanism), a short book that helped popularize existentialist thought. Marcel later came to reject the label himself in favour of Neo-Socratic, in honor of Kierkegaard's essay "On the Concept of Irony".
Some scholars argue that the term should be used to refer only to the cultural movement in Europe in the 1940s and 1950s associated with the works of the philosophers Sartre, Simone de Beauvoir, Maurice Merleau-Ponty, and Albert Camus. Others extend the term to Kierkegaard, and yet others extend it as far back as Socrates. However, it is often identified with the philosophical views of Sartre.
Definitional issues and background
The labels existentialism and existentialist are often seen as historical conveniences in as much as they were first applied to many philosophers long after they had died. While existentialism is generally considered to have originated with Kierkegaard, the first prominent existentialist philosopher to adopt the term as a self-description was Sartre. Sartre posits the idea that "what all existentialists have in common is the fundamental doctrine that existence precedes essence", as the philosopher Frederick Copleston explains. According to philosopher Steven Crowell, defining existentialism has been relatively difficult, and he argues that it is better understood as a general approach used to reject certain systematic philosophies rather than as a systematic philosophy itself. In a lecture delivered in 1945, Sartre described existentialism as "the attempt to draw all the consequences from a position of consistent atheism". For others, existentialism need not involve the rejection of God, but rather "examines mortal man's search for meaning in a meaningless universe", considering less "What is the good life?" (to feel, be, or do, good), instead asking "What is life good for?".
Although many outside Scandinavia consider the term existentialism to have originated from Kierkegaard, it is more likely that Kierkegaard adopted this term (or at least the term "existential" as a description of his philosophy) from the Norwegian poet and literary critic Johan Sebastian Cammermeyer Welhaven. This assertion comes from two sources:
The Norwegian philosopher Erik Lundestad refers to the Danish philosopher Fredrik Christian Sibbern. Sibbern is supposed to have had two conversations in 1841, the first with Welhaven and the second with Kierkegaard. It is in the first conversation that it is believed that Welhaven came up with "a word that he said covered a certain thinking, which had a close and positive attitude to life, a relationship he described as existential". This was then brought to Kierkegaard by Sibbern.
The second claim comes from the Norwegian historian Rune Slagstad, who claimed to prove that Kierkegaard himself said the term existential was borrowed from the poet. He strongly believes that it was Kierkegaard himself who said that "Hegelians do not study philosophy 'existentially;' to use a phrase by Welhaven from one time when I spoke with him about philosophy."
Concepts
Existence precedes essence
Sartre argued that a central proposition of existentialism is that existence precedes essence, which is to say that individuals shape themselves by existing and cannot be perceived through preconceived and a priori categories, an "essence". The actual life of the individual is what constitutes what could be called their "true essence", instead of an arbitrarily attributed essence that others use to define them. Human beings, through their own consciousness, create their own values and determine a meaning to their life. This view contradicts Aristotle and Aquinas, who taught that essence precedes individual existence. Although it was Sartre who explicitly coined the phrase, similar notions can be found in the thought of existentialist philosophers such as Heidegger and Kierkegaard.
Some interpret the imperative to define oneself as meaning that anyone can wish to be anything. However, an existentialist philosopher would say such a wish constitutes an inauthentic existence – what Sartre would call "bad faith". Instead, the phrase should be taken to say that people are defined only insofar as they act and that they are responsible for their actions. Someone who acts cruelly towards other people is, by that act, defined as a cruel person. Such persons are themselves responsible for their new identity (cruel persons). This is opposed to their genes, or human nature, bearing the blame.
As Sartre said in his lecture Existentialism is a Humanism: "Man first of all exists, encounters himself, surges up in the world—and defines himself afterwards." The more positive, therapeutic aspect of this is also implied: a person can choose to act in a different way, and to be a good person instead of a cruel person.
Jonathan Webber interprets Sartre's usage of the term essence not in a modal fashion, i.e. as necessary features, but in a teleological fashion: "an essence is the relational property of having a set of parts ordered in such a way as to collectively perform some activity". For example, it belongs to the essence of a house to keep the bad weather out, which is why it has walls and a roof. Humans are different from houses because—unlike houses—they do not have an inbuilt purpose: they are free to choose their own purpose and thereby shape their essence; thus, their existence precedes their essence.
Sartre is committed to a radical conception of freedom: nothing fixes our purpose but we ourselves; our projects have no weight or inertia except for our endorsement of them. Simone de Beauvoir, on the other hand, holds that there are various factors, grouped together under the term sedimentation, that offer resistance to attempts to change our direction in life. Sedimentations are themselves products of past choices and can be changed by choosing differently in the present, but such changes happen slowly. They are a force of inertia that shapes the agent's evaluative outlook on the world until the transition is complete.
Sartre's definition of existentialism was based on Heidegger's magnum opus Being and Time (1927). In the correspondence with Jean Beaufret later published as the Letter on Humanism, Heidegger implied that Sartre misunderstood him for his own purposes of subjectivism, and that he did not mean that actions take precedence over being so long as those actions were not reflected upon. Heidegger commented that "the reversal of a metaphysical statement remains a metaphysical statement", meaning that he thought Sartre had simply switched the roles traditionally attributed to essence and existence without interrogating these concepts and their history.
The absurd
The notion of the absurd contains the idea that there is no meaning in the world beyond what meaning we give it. This meaninglessness also encompasses the amorality or "unfairness" of the world. This can be highlighted in the way it opposes the traditional Abrahamic religious perspective, which establishes that life's purpose is the fulfillment of God's commandments; this is what gives meaning to people's lives. To live the life of the absurd means rejecting a life that finds or pursues specific meaning for man's existence, since there is nothing to be discovered. According to Albert Camus, neither the world nor the human being is in itself absurd. The concept only emerges through the juxtaposition of the two; life becomes absurd due to the incompatibility between human beings and the world they inhabit. This view constitutes one of the two interpretations of the absurd in existentialist literature. The second view, first elaborated by Søren Kierkegaard, holds that absurdity is limited to the actions and choices of human beings. These are considered absurd since they issue from human freedom, undermining any foundation outside of themselves.
The absurd contrasts with the claim that "bad things don't happen to good people"; to the world, metaphorically speaking, there is no such thing as a good person or a bad person; what happens happens, and it may just as well happen to a "good" person as to a "bad" person. Because of the world's absurdity, anything can happen to anyone at any time and a tragic event could plummet someone into direct confrontation with the absurd. Many of the literary works of Kierkegaard, Beckett, Kafka, Dostoevsky, Ionesco, Miguel de Unamuno, Luigi Pirandello, Sartre, Joseph Heller, and Camus contain descriptions of people who encounter the absurdity of the world.
It is because of the devastating awareness of meaninglessness that Camus claimed in The Myth of Sisyphus that "There is only one truly serious philosophical problem, and that is suicide." Although "prescriptions" against the possible deleterious consequences of these kinds of encounters vary, from Kierkegaard's religious "stage" to Camus' insistence on persevering in spite of absurdity, the concern with helping people avoid living their lives in ways that put them in the perpetual danger of having everything meaningful break down is common to most existentialist philosophers. The possibility of having everything meaningful break down poses a threat of quietism, which existentialist philosophy inherently opposes. It has been said that the possibility of suicide makes all humans existentialists. The ultimate hero of absurdism lives without meaning and faces suicide without succumbing to it.
Facticity
Facticity is defined by Sartre in Being and Nothingness (1943) as the in-itself, which for humans takes the form of being and not being. It is the facts of one's personal life and as per Heidegger, it is "the way in which we are thrown into the world." This can be more easily understood when considering facticity in relation to the temporal dimension of our past: one's past is what one is, meaning that it is what has formed the person who exists in the present. However, to say that one is only one's past would ignore the change a person undergoes in the present and future, while saying that one's past is only what one was, would entirely detach it from the present self. A denial of one's concrete past constitutes an inauthentic lifestyle, and also applies to other kinds of facticity (having a human body—e.g., one that does not allow a person to run faster than the speed of sound—identity, values, etc.).
Facticity is a limitation and a condition of freedom. It is a limitation in that a large part of one's facticity consists of things one did not choose (birthplace, etc.), but a condition of freedom in the sense that one's values most likely depend on it. However, even though one's facticity is "set in stone" (as being past, for instance), it cannot determine a person: the value ascribed to one's facticity is still ascribed to it freely by that person. As an example, consider two men, one of whom has no memory of his past and the other who remembers everything. Both have committed many crimes, but the first man, remembering nothing, leads a rather normal life while the second man, feeling trapped by his own past, continues a life of crime, blaming his own past for "trapping" him in this life. There is nothing essential about his committing crimes, but he ascribes this meaning to his past.
However, to disregard one's facticity during the continual process of self-making, projecting oneself into the future, would be to put oneself in denial of the conditions shaping the present self and would be inauthentic. The origin of one's projection must still be one's facticity, though in the mode of not being it (essentially). An example of one focusing solely on possible projects without reflecting on one's current facticity would be someone who continually thinks about future possibilities related to being rich (e.g. a better car, bigger house, better quality of life, etc.) without acknowledging the facticity of not currently having the financial means to do so. In this example, considering both facticity and transcendence, an authentic mode of being would be considering future projects that might improve one's current finances (e.g. putting in extra hours, or investing savings) in order to arrive at a future-facticity of a modest pay rise, further leading to purchase of an affordable car.
Another aspect of facticity is that it entails angst. Freedom "produces" angst when limited by facticity; the impossibility of having facticity "step in" and take responsibility for something one has done also produces angst.
Another aspect of existential freedom is that one can change one's values. One is responsible for one's values, regardless of society's values. The focus on freedom in existentialism is related to the limits of responsibility one bears, as a result of one's freedom. The relationship between freedom and responsibility is one of interdependency and a clarification of freedom also clarifies that for which one is responsible.
Authenticity
Many noted existentialists consider the theme of authentic existence important. Authenticity involves the idea that one has to "create oneself" and live in accordance with this self. For an authentic existence, one should act as oneself, not as "one's acts" or as "one's genes" or as any other essence requires. The authentic act is one in accordance with one's freedom. A component of freedom is facticity, but not to the degree that this facticity determines one's transcendent choices (one could then blame one's background for making the choice one made [chosen project, from one's transcendence]). Facticity, in relation to authenticity, involves acting on one's actual values when making a choice (instead of, like Kierkegaard's Aesthete, "choosing" randomly), so that one takes responsibility for the act instead of choosing either-or without allowing the options to have different values.
In contrast, the inauthentic is the denial to live in accordance with one's freedom. This can take many forms, from pretending choices are meaningless or random, convincing oneself that some form of determinism is true, or "mimicry" where one acts as "one should".
How one "should" act is often determined by an image one has, of how one in such a role (bank manager, lion tamer, sex worker, etc.) acts. In Being and Nothingness, Sartre uses the example of a waiter in "bad faith". He merely takes part in the "act" of being a typical waiter, albeit very convincingly. This image usually corresponds to a social norm, but this does not mean that all acting in accordance with social norms is inauthentic. The main point is the attitude one takes to one's own freedom and responsibility and the extent to which one acts in accordance with this freedom.
The Other and the Look
The Other (written with a capital "O") is a concept more properly belonging to phenomenology and its account of intersubjectivity. However, it has seen widespread use in existentialist writings, and the conclusions drawn differ slightly from the phenomenological accounts. The Other is the experience of another free subject who inhabits the same world as a person does. In its most basic form, it is this experience of the Other that constitutes intersubjectivity and objectivity. To clarify, when one experiences someone else, and this Other person experiences the world (the same world that a person experiences)—only from "over there"—the world is constituted as objective in that it is something that is "there" as identical for both of the subjects; a person experiences the other person as experiencing the same things. This experience of the Other's look is what is termed the Look (sometimes the Gaze).
While this experience, in its basic phenomenological sense, constitutes the world as objective and oneself as objectively existing subjectivity (one experiences oneself as seen in the Other's Look in precisely the same way that one experiences the Other as seen by him, as subjectivity), in existentialism, it also acts as a kind of limitation of freedom. This is because the Look tends to objectify what it sees. When one experiences oneself in the Look, one does not experience oneself as nothing (no thing), but as something (some thing). In Sartre's example of a man peeping at someone through a keyhole, the man is entirely caught up in the situation he is in. He is in a pre-reflexive state where his entire consciousness is directed at what goes on in the room. Suddenly, he hears a creaking floorboard behind him and he becomes aware of himself as seen by the Other. He is then filled with shame, for he perceives himself as he would perceive someone else doing what he was doing—as a Peeping Tom. For Sartre, this phenomenological experience of shame establishes proof for the existence of other minds and defeats the problem of solipsism. For the conscious state of shame to be experienced, one has to become aware of oneself as an object of another look, proving, a priori, that other minds exist. The Look is then co-constitutive of one's facticity.
Another characteristic feature of the Look is that no Other really needs to have been there: it is possible that the creaking floorboard was simply the movement of an old house; the Look is not some kind of mystical telepathic experience of the actual way the Other sees one (there may have been someone there, but he could have failed to notice that person). It is only one's perception of the way another might perceive him.
Angst and dread
"Existential angst", sometimes called existential dread, anxiety, or anguish, is a term common to many existentialist thinkers. It is generally held to be a negative feeling arising from the experience of human freedom and responsibility. The archetypal example is the experience one has when standing on a cliff where one not only fears falling off it, but also dreads the possibility of throwing oneself off. In this experience that "nothing is holding me back", one senses the lack of anything that predetermines one to either throw oneself off or to stand still, and one experiences one's own freedom.
It can also be seen, in relation to the previous point, how angst is before nothing, and this is what sets it apart from fear, which has an object. While one can take measures to remove an object of fear, for angst no such "constructive" measures are possible. The use of the word "nothing" in this context relates to the inherent insecurity about the consequences of one's actions and to the fact that, in experiencing freedom as angst, one also realizes that one is fully responsible for these consequences. There is nothing in people (genetically, for instance) that acts in their stead—that they can blame if something goes wrong. Therefore, not every choice is perceived as having dreadful possible consequences (and, it can be claimed, human lives would be unbearable if every choice facilitated dread). However, this does not change the fact that freedom remains a condition of every action.
Despair
Despair is generally defined as a loss of hope. In existentialism, it is more specifically a loss of hope in reaction to a breakdown in one or more of the defining qualities of one's self or identity. If a person is invested in being a particular thing, such as a bus driver or an upstanding citizen, and then finds their being-thing compromised, they would normally be found in a state of despair—a hopeless state. For example, a singer who loses the ability to sing may despair if they have nothing else to fall back on—nothing to rely on for their identity. They find themselves unable to be what defined their being.
What sets the existentialist notion of despair apart from the conventional definition is that existentialist despair is a state one is in even when they are not overtly in despair. So long as a person's identity depends on qualities that can crumble, they are in perpetual despair—and as there is, in Sartrean terms, no human essence found in conventional reality on which to constitute the individual's sense of identity, despair is a universal human condition. As Kierkegaard defines it in Either/Or: "Let each one learn what he can; both of us can learn that a person's unhappiness never lies in his lack of control over external conditions, since this would only make him completely unhappy." In Works of Love, he says:
Opposition to positivism and rationalism
Existentialists oppose defining human beings as primarily rational, and, therefore, oppose both positivism and rationalism. Existentialism asserts that people make decisions based on subjective meaning rather than pure rationality.
The rejection of reason as the source of meaning is a common theme of existentialist thought, as is the focus on the anxiety and dread that we feel in the face of our own radical free will and our awareness of death. Kierkegaard advocated rationality as a means to interact with the objective world (e.g., in the natural sciences), but when it comes to existential problems, reason is insufficient: "Human reason has boundaries".
Like Kierkegaard, Sartre saw problems with rationality, calling it a form of "bad faith", an attempt by the self to impose structure on a world of phenomena—"the Other"—that is fundamentally irrational and random. According to Sartre, rationality and other forms of bad faith hinder people from finding meaning in freedom. To try to suppress feelings of anxiety and dread, people confine themselves within everyday experience, Sartre asserted, thereby relinquishing their freedom and acquiescing to being possessed in one form or another by "the Look" of "the Other" (i.e., possessed by another person—or at least one's idea of that other person).
Religion
An existentialist reading of the Bible would demand that the reader recognize that they are an existing subject studying the words more as a recollection of events. This is in contrast to looking at a collection of "truths" that are outside and unrelated to the reader, but may develop a sense of reality/God. Such a reader is not obligated to follow the commandments as if an external agent is forcing these commandments upon them, but as though they are inside them and guiding them from inside. This is the task Kierkegaard takes up when he asks: "Who has the more difficult task: the teacher who lectures on earnest things a meteor's distance from everyday life—or the learner who should put it to use?" Philosophers such as Hans Jonas and Rudolf Bultmann introduced the concept of existentialist demythologization into the fields of Early Christianity and Christian theology, respectively.
Confusion with nihilism
Although nihilism and existentialism are distinct philosophies, they are often confused with one another since both are rooted in the human experience of anguish and confusion that stems from the apparent meaninglessness of a world in which humans are compelled to find or create meaning. A primary cause of confusion is that Friedrich Nietzsche was an important philosopher in both fields.
Existentialist philosophers often stress the importance of angst as signifying the absolute lack of any objective ground for action, a move that is often reduced to moral or existential nihilism. A pervasive theme in existentialist philosophy, however, is to persist through encounters with the absurd, as seen in Albert Camus's philosophical essay The Myth of Sisyphus (1942): "One must imagine Sisyphus happy." Only very rarely do existentialist philosophers dismiss morality or one's self-created meaning: Søren Kierkegaard regained a sort of morality in the religious (although he would not agree that it was ethical; the religious suspends the ethical), and Jean-Paul Sartre's final words in Being and Nothingness (1943) were: "All these questions, which refer us to a pure and not an accessory (or impure) reflection, can find their reply only on the ethical plane. We shall devote to them a future work."
History
Precursors
Some have argued that existentialism has long been an element of European religious thought, even before the term came into use. William Barrett identified Blaise Pascal and Søren Kierkegaard as two specific examples. Jean Wahl also identified William Shakespeare's Prince Hamlet ("To be, or not to be"), Jules Lequier, Thomas Carlyle, and William James as existentialists. According to Wahl, "the origins of most great philosophies, like those of Plato, Descartes, and Kant, are to be found in existential reflections." Precursors to existentialism can also be identified in the works of the Iranian Muslim philosopher Mulla Sadra (c. 1571–1635), who would posit that "existence precedes essence", becoming the principal expositor of the School of Isfahan, which is described as "alive and active".
19th century
Kierkegaard and Nietzsche
Kierkegaard is generally considered to have been the first existentialist philosopher. He proposed that each individual—not reason, society, or religious orthodoxy—is solely tasked with giving meaning to life and living it sincerely, or "authentically".
Kierkegaard and Nietzsche were two of the first philosophers considered fundamental to the existentialist movement, though neither used the term "existentialism" and it is unclear whether they would have supported the existentialism of the 20th century. They focused on subjective human experience rather than the objective truths of mathematics and science, which they believed were too detached or observational to truly get at the human experience. Like Pascal, they were interested in people's quiet struggle with the apparent meaninglessness of life and the use of diversion to escape from boredom. Unlike Pascal, Kierkegaard and Nietzsche also considered the role of making free choices, particularly regarding fundamental values and beliefs, and how such choices change the nature and identity of the chooser. Kierkegaard's knight of faith and Nietzsche's Übermensch are representative of people who exhibit freedom, in that they define the nature of their own existence. Nietzsche's idealized individual invents their own values and creates the very terms under which they excel. By contrast, Kierkegaard, opposed to the level of abstraction in Hegel, and, unlike Nietzsche, not hostile to Christianity but actually welcoming toward it, argues through a pseudonym that the objective certainty of religious truths (specifically Christian) is not only impossible, but even founded on logical paradoxes. Yet he continues to imply that a leap of faith is a possible means for an individual to reach a higher stage of existence that transcends and contains both an aesthetic and ethical value of life. Kierkegaard and Nietzsche were also precursors to other intellectual movements, including postmodernism and various strands of psychotherapy. However, Kierkegaard believed that individuals should live in accordance with their thinking.
In Twilight of the Idols, Nietzsche's sentiments echo the idea of "existence precedes essence." He writes, "no one gives man his qualities—neither God, nor society, nor his parents and ancestors, nor he himself...No one is responsible for man's being there at all, for his being such-and-such, or for his being in these circumstances or in this environment...Man is not the effect of some special purpose, of a will, an end..." Within this view, Nietzsche ties in his rejection of the existence of God, which he sees as a means to "redeem the world." By rejecting the existence of God, Nietzsche also rejects beliefs that claim humans have a predestined purpose according to what God has instructed.
Dostoyevsky
The first major literary figure important to existentialism was the Russian novelist Dostoyevsky. Dostoyevsky's Notes from Underground portrays a man unable to fit into society and unhappy with the identities he creates for himself. Sartre, in his book on existentialism Existentialism is a Humanism, quoted Dostoyevsky's The Brothers Karamazov as an example of existential crisis. Other Dostoyevsky novels covered issues raised in existentialist philosophy while presenting story lines divergent from secular existentialism: for example, in Crime and Punishment, the protagonist Raskolnikov experiences an existential crisis and then moves toward a Christian Orthodox worldview similar to that advocated by Dostoyevsky himself.
Early 20th century
In the first decades of the 20th century, a number of philosophers and writers explored existentialist ideas. The Spanish philosopher Miguel de Unamuno y Jugo, in his 1913 book The Tragic Sense of Life in Men and Nations, emphasized the life of "flesh and bone" as opposed to that of abstract rationalism. Unamuno rejected systematic philosophy in favor of the individual's quest for faith. He retained a sense of the tragic, even absurd nature of the quest, symbolized by his enduring interest in the eponymous character from the Miguel de Cervantes novel Don Quixote. A novelist, poet and dramatist as well as philosophy professor at the University of Salamanca, Unamuno wrote a short story about a priest's crisis of faith, Saint Manuel the Good, Martyr, which has been collected in anthologies of existentialist fiction. Another Spanish thinker, José Ortega y Gasset, writing in 1914, held that human existence must always be defined as the individual person combined with the concrete circumstances of his life: "Yo soy yo y mi circunstancia" ("I am myself and my circumstances"). Sartre likewise believed that human existence is not an abstract matter, but is always situated ("en situation").
Although Martin Buber wrote his major philosophical works in German, and studied and taught at the Universities of Berlin and Frankfurt, he stands apart from the mainstream of German philosophy. Born into a Jewish family in Vienna in 1878, he was also a scholar of Jewish culture and involved at various times in Zionism and Hasidism. In 1938, he moved permanently to Jerusalem. His best-known philosophical work was the short book I and Thou, published in 1922. For Buber, the fundamental fact of human existence, too readily overlooked by scientific rationalism and abstract philosophical thought, is "man with man", a dialogue that takes place in the so-called "sphere of between" ("das Zwischenmenschliche").
Two Russian philosophers, Lev Shestov and Nikolai Berdyaev, became well known as existentialist thinkers during their post-Revolutionary exiles in Paris. Shestov had launched an attack on rationalism and systematization in philosophy as early as 1905 in his book of aphorisms All Things Are Possible. Berdyaev drew a radical distinction between the world of spirit and the everyday world of objects. Human freedom, for Berdyaev, is rooted in the realm of spirit, a realm independent of scientific notions of causation. To the extent the individual human being lives in the objective world, he is estranged from authentic spiritual freedom. "Man" is not to be interpreted naturalistically, but as a being created in God's image, an originator of free, creative acts. He published a major work on these themes, The Destiny of Man, in 1931.
Gabriel Marcel, long before coining the term "existentialism", introduced important existentialist themes to a French audience in his early essay "Existence and Objectivity" (1925) and in his Metaphysical Journal (1927). A dramatist as well as a philosopher, Marcel found his philosophical starting point in a condition of metaphysical alienation: the human individual searching for harmony in a transient life. Harmony, for Marcel, was to be sought through "secondary reflection", a "dialogical" rather than "dialectical" approach to the world, characterized by "wonder and astonishment" and open to the "presence" of other people and of God rather than merely to "information" about them. For Marcel, such presence implied more than simply being there (as one thing might be in the presence of another thing); it connoted "extravagant" availability, and the willingness to put oneself at the disposal of the other.
Marcel contrasted secondary reflection with abstract, scientific-technical primary reflection, which he associated with the activity of the abstract Cartesian ego. For Marcel, philosophy was a concrete activity undertaken by a sensing, feeling human being incarnate—embodied—in a concrete world. Although Sartre adopted the term "existentialism" for his own philosophy in the 1940s, Marcel's thought has been described as "almost diametrically opposed" to that of Sartre. Unlike Sartre, Marcel was a Christian, and became a Catholic convert in 1929.
In Germany, the psychiatrist and philosopher Karl Jaspers—who later described existentialism as a "phantom" created by the public—called his own thought, heavily influenced by Kierkegaard and Nietzsche, Existenzphilosophie. For Jaspers, "Existenz-philosophy is the way of thought by means of which man seeks to become himself...This way of thought does not cognize objects, but elucidates and makes actual the being of the thinker".
Jaspers, a professor at the University of Heidelberg, was acquainted with Heidegger, who held a professorship at Marburg before acceding to Husserl's chair at Freiburg in 1928. They held many philosophical discussions, but later became estranged over Heidegger's support of National Socialism. They shared an admiration for Kierkegaard, and in the 1930s, Heidegger lectured extensively on Nietzsche. Nevertheless, the extent to which Heidegger should be considered an existentialist is debatable. In Being and Time he presented a method of rooting philosophical explanations in human existence (Dasein) to be analysed in terms of existential categories (existentiale); and this has led many commentators to treat him as an important figure in the existentialist movement.
After the Second World War
Following the Second World War, existentialism became a well-known and significant philosophical and cultural movement, mainly through the public prominence of two French writers, Jean-Paul Sartre and Albert Camus, who wrote best-selling novels, plays and widely read journalism as well as theoretical texts. These years also saw the growing reputation of Being and Time outside Germany.
Sartre dealt with existentialist themes in his 1938 novel Nausea and the short stories in his 1939 collection The Wall, and had published his treatise on existentialism, Being and Nothingness, in 1943, but it was in the two years following the liberation of Paris from the German occupying forces that he and his close associates—Camus, Simone de Beauvoir, Maurice Merleau-Ponty, and others—became internationally famous as the leading figures of a movement known as existentialism. In a very short period of time, Camus and Sartre in particular became the leading public intellectuals of post-war France, achieving by the end of 1945 "a fame that reached across all audiences." Camus was an editor of the most popular leftist (former French Resistance) newspaper Combat; Sartre launched his journal of leftist thought, Les Temps Modernes, and two weeks later gave the widely reported lecture on existentialism and secular humanism to a packed meeting of the Club Maintenant. Beauvoir wrote that "not a week passed without the newspapers discussing us"; existentialism became "the first media craze of the postwar era."
By the end of 1947, Camus' earlier fiction and plays had been reprinted, his new play Caligula had been performed and his novel The Plague published; the first two novels of Sartre's The Roads to Freedom trilogy had appeared, as had Beauvoir's novel The Blood of Others. Works by Camus and Sartre were already appearing in foreign editions. The Paris-based existentialists had become famous.
Sartre had traveled to Germany in 1930 to study the phenomenology of Edmund Husserl and Martin Heidegger, and he included critical comments on their work in his major treatise Being and Nothingness. Heidegger's thought had also become known in French philosophical circles through its use by Alexandre Kojève in explicating Hegel in a series of lectures given in Paris in the 1930s. The lectures were highly influential; members of the audience included not only Sartre and Merleau-Ponty, but Raymond Queneau, Georges Bataille, Louis Althusser, André Breton, and Jacques Lacan. A selection from Being and Time was published in French in 1938, and his essays began to appear in French philosophy journals.
Heidegger read Sartre's work and was initially impressed, commenting: "Here for the first time I encountered an independent thinker who, from the foundations up, has experienced the area out of which I think. Your work shows such an immediate comprehension of my philosophy as I have never before encountered." Later, however, in response to a question posed by his French follower Jean Beaufret, Heidegger distanced himself from Sartre's position and existentialism in general in his Letter on Humanism. Heidegger's reputation continued to grow in France during the 1950s and 1960s. In the 1960s, Sartre attempted to reconcile existentialism and Marxism in his work Critique of Dialectical Reason. A major theme throughout his writings was freedom and responsibility.
Camus was a friend of Sartre, until their falling-out, and wrote several works with existential themes including The Rebel, Summer in Algiers, The Myth of Sisyphus, and The Stranger, the latter being "considered—to what would have been Camus's irritation—the exemplary existentialist novel." Camus, like many others, rejected the existentialist label, and considered his works concerned with facing the absurd. In The Myth of Sisyphus, Camus uses the analogy of the Greek myth of Sisyphus to demonstrate the futility of existence. In the myth, Sisyphus is condemned for eternity to roll a rock up a hill, but when he reaches the summit, the rock will roll to the bottom again. Camus believes that this existence is pointless but that Sisyphus ultimately finds meaning and purpose in his task, simply by continually applying himself to it. The first half of the book contains an extended rebuttal of what Camus took to be existentialist philosophy in the works of Kierkegaard, Shestov, Heidegger, and Jaspers.
Simone de Beauvoir, an important existentialist who spent much of her life as Sartre's partner, wrote about feminist and existentialist ethics in her works, including The Second Sex and The Ethics of Ambiguity. Although often overlooked due to her relationship with Sartre, de Beauvoir integrated existentialism with other forms of thinking such as feminism, unheard of at the time, resulting in alienation from fellow writers such as Camus.
Paul Tillich, an important existentialist theologian following Kierkegaard and Karl Barth, applied existentialist concepts to Christian theology, and helped introduce existential theology to the general public. His seminal work The Courage to Be follows Kierkegaard's analysis of anxiety and life's absurdity, but puts forward the thesis that modern humans must, via God, achieve selfhood in spite of life's absurdity. Rudolf Bultmann used Kierkegaard's and Heidegger's philosophy of existence to demythologize Christianity by interpreting Christian mythical concepts into existentialist concepts.
Maurice Merleau-Ponty, an existential phenomenologist, was for a time a companion of Sartre. Merleau-Ponty's Phenomenology of Perception (1945) was recognized as a major statement of French existentialism. It has been said that Merleau-Ponty's work Humanism and Terror greatly influenced Sartre. However, in later years they were to disagree irreparably, dividing many existentialists such as de Beauvoir, who sided with Sartre.
Colin Wilson, an English writer, published his study The Outsider in 1956, initially to critical acclaim. In this book and others (e.g. Introduction to the New Existentialism), he attempted to reinvigorate what he perceived as a pessimistic philosophy and bring it to a wider audience. He was not, however, academically trained, and his work was attacked by professional philosophers for lack of rigor and critical standards.
Influence outside philosophy
Art
Film and television
Stanley Kubrick's 1957 anti-war film Paths of Glory "illustrates, and even illuminates...existentialism" by examining the "necessary absurdity of the human condition" and the "horror of war". The film tells the story of a fictional World War I French army regiment ordered to attack an impregnable German stronghold; when the attack fails, three soldiers are chosen at random, court-martialed by a "kangaroo court", and executed by firing squad. The film examines existentialist ethics, such as the issue of whether objectivity is possible and the "problem of authenticity". Orson Welles's 1962 film The Trial, based upon Franz Kafka's book of the same name (Der Prozeß), is characteristic of both existentialist and absurdist themes in its depiction of a man (Joseph K.) arrested for a crime for which the charges are revealed neither to him nor to the reader.
Neon Genesis Evangelion is a Japanese science fiction animation series created by the anime studio Gainax and both directed and written by Hideaki Anno. Existential themes of individuality, consciousness, freedom, choice, and responsibility are heavily relied upon throughout the entire series, particularly through the philosophies of Jean-Paul Sartre and Søren Kierkegaard. Episode 16's title is a reference to Kierkegaard's book The Sickness Unto Death.
Some contemporary films dealing with existentialist issues include Melancholia, Fight Club, I Heart Huckabees, Waking Life, The Matrix, Ordinary People, Life in a Day, Barbie, and Everything Everywhere All at Once. Likewise, films throughout the 20th century such as The Seventh Seal, Ikiru, Taxi Driver, the Toy Story films, The Great Silence, Ghost in the Shell, Harold and Maude, High Noon, Easy Rider, One Flew Over the Cuckoo's Nest, A Clockwork Orange, Groundhog Day, Apocalypse Now, Badlands, and Blade Runner also have existentialist qualities.
Notable directors known for their existentialist films include Ingmar Bergman, Bela Tarr, Robert Bresson, Jean-Pierre Melville, François Truffaut, Jean-Luc Godard, Michelangelo Antonioni, Akira Kurosawa, Terrence Malick, Stanley Kubrick, Andrei Tarkovsky, Éric Rohmer, Wes Anderson, Woody Allen, and Christopher Nolan. Charlie Kaufman's Synecdoche, New York focuses on the protagonist's desire to find existential meaning. Similarly, in Kurosawa's Red Beard, the protagonist's experiences as an intern in a rural health clinic in Japan lead him to an existential crisis whereby he questions his reason for being. This, in turn, leads him to a better understanding of humanity. The French film, Mood Indigo (directed by Michel Gondry) embraced various elements of existentialism. The film The Shawshank Redemption, released in 1994, depicts life in a prison in Maine, United States to explore several existentialist concepts.
Literature
Existential perspectives are also found in modern literature to varying degrees, especially since the 1920s. Louis-Ferdinand Céline's Journey to the End of the Night (Voyage au bout de la nuit, 1932), celebrated by both Sartre and Beauvoir, contained many of the themes that would be found in later existential literature, and is in some ways the proto-existential novel. Jean-Paul Sartre's 1938 novel Nausea was "steeped in Existential ideas", and is considered an accessible way of grasping his philosophical stance. Between 1900 and 1960, other authors such as Albert Camus, Franz Kafka, Rainer Maria Rilke, T. S. Eliot, Yukio Mishima, Hermann Hesse, Luigi Pirandello, Ralph Ellison, and Jack Kerouac composed literature or poetry that contained, to varying degrees, elements of existential or proto-existential thought. The philosophy's influence even reached pulp literature shortly after the turn of the 20th century, as seen in the existential disparity witnessed in Man's lack of control of his fate in the works of H. P. Lovecraft.
Theatre
Sartre wrote No Exit in 1944, an existentialist play originally published in French as Huis Clos (meaning In Camera or "behind closed doors"), which is the source of the popular quote, "Hell is other people." (In French, "L'enfer, c'est les autres"). The play begins with a Valet leading a man into a room that the audience soon realizes is in hell. Eventually he is joined by two women. After their entry, the Valet leaves and the door is shut and locked. All three expect to be tortured, but no torturer arrives. Instead, they realize they are there to torture each other, which they do effectively by probing each other's sins, desires, and unpleasant memories.
Existentialist themes are displayed in the Theatre of the Absurd, notably in Samuel Beckett's Waiting for Godot, in which two men divert themselves while they wait expectantly for someone (or something) named Godot who never arrives. They claim Godot is an acquaintance, but in fact, hardly know him, admitting they would not recognize him if they saw him. Samuel Beckett, once asked who or what Godot is, replied, "If I knew, I would have said so in the play." To occupy themselves, the men eat, sleep, talk, argue, sing, play games, exercise, swap hats, and contemplate suicide—anything "to hold the terrible silence at bay". The play "exploits several archetypal forms and situations, all of which lend themselves to both comedy and pathos." The play also illustrates an attitude toward human experience on earth: the poignancy, oppression, camaraderie, hope, corruption, and bewilderment of human experience that can be reconciled only in the mind and art of the absurdist. The play examines questions such as death, the meaning of human existence and the place of God in human existence.
Tom Stoppard's Rosencrantz & Guildenstern Are Dead is an absurdist tragicomedy first staged at the Edinburgh Festival Fringe in 1966. The play expands upon the exploits of two minor characters from Shakespeare's Hamlet. Comparisons have also been drawn to Samuel Beckett's Waiting for Godot, for the presence of two central characters who appear almost as two halves of a single character. Many plot features are similar as well: the characters pass time by playing Questions, impersonating other characters, and interrupting each other or remaining silent for long periods of time. The two characters are portrayed as two clowns or fools in a world beyond their understanding. They stumble through philosophical arguments while not realizing the implications, and muse on the irrationality and randomness of the world.
Jean Anouilh's Antigone also presents arguments founded on existentialist ideas. It is a tragedy inspired by Greek mythology and the play of the same name (Antigone, by Sophocles) from the fifth century BC. In English, it is often distinguished from its antecedent by being pronounced in its original French form, approximately "Ante-GŌN." The play was first performed in Paris on 6 February 1944, during the Nazi occupation of France. Produced under Nazi censorship, the play is purposefully ambiguous with regards to the rejection of authority (represented by Antigone) and the acceptance of it (represented by Creon). The parallels to the French Resistance and the Nazi occupation have been drawn. Antigone rejects life as desperately meaningless but without affirmatively choosing a noble death. The crux of the play is the lengthy dialogue concerning the nature of power, fate, and choice, during which Antigone says that she is, "... disgusted with [the]...promise of a humdrum happiness." She states that she would rather die than live a mediocre existence.
Critic Martin Esslin in his book Theatre of the Absurd pointed out how many contemporary playwrights such as Samuel Beckett, Eugène Ionesco, Jean Genet, and Arthur Adamov wove into their plays the existentialist belief that we are absurd beings loose in a universe empty of real meaning. Esslin noted that many of these playwrights demonstrated the philosophy better than did the plays by Sartre and Camus. Though most of such playwrights, subsequently labeled "Absurdist" (based on Esslin's book), denied affiliations with existentialism and were often staunchly anti-philosophical (for example Ionesco often claimed he identified more with 'Pataphysics or with Surrealism than with existentialism), the playwrights are often linked to existentialism based on Esslin's observation.
Activism
Black existentialism explores the existence and experiences of Black people in the world. Classical and contemporary thinkers include C.L.R James, Frederick Douglass, W.E.B DuBois, Frantz Fanon, Angela Davis, Cornel West, Naomi Zack, bell hooks, Stuart Hall, Lewis Gordon, and Audre Lorde.
Psychoanalysis and psychotherapy
A major offshoot of existentialism as a philosophy is existentialist psychology and psychoanalysis, which first crystallized in the work of Otto Rank, Freud's closest associate for 20 years. Without awareness of the writings of Rank, Ludwig Binswanger was influenced by Freud, Edmund Husserl, Heidegger, and Sartre. A later figure was Viktor Frankl, who briefly met Freud as a young man. His logotherapy can be regarded as a form of existentialist therapy. The existentialists would also influence social psychology, antipositivist micro-sociology, symbolic interactionism, and post-structuralism, with the work of thinkers such as Georg Simmel and Michel Foucault. Foucault was a great reader of Kierkegaard even though he almost never refers to this author, who nonetheless had for him an importance as secret as it was decisive.
An early contributor to existentialist psychology in the United States was Rollo May, who was strongly influenced by Kierkegaard and Otto Rank. One of the most prolific writers on techniques and theory of existentialist psychology in the US is Irvin D. Yalom. Yalom states that
A more recent contributor to the development of a European version of existentialist psychotherapy is the British-based Emmy van Deurzen.
Anxiety's importance in existentialism makes it a popular topic in psychotherapy. Therapists often offer existentialist philosophy as an explanation for anxiety. The assertion is that anxiety is a manifestation of an individual's complete freedom to decide, and complete responsibility for the outcome of such decisions. Psychotherapists using an existentialist approach believe that a patient can harness his anxiety and use it constructively. Instead of suppressing anxiety, patients are advised to use it as grounds for change. By embracing anxiety as inevitable, a person can use it to achieve his full potential in life. Humanistic psychology also had major impetus from existentialist psychology and shares many of the fundamental tenets. Terror management theory, based on the writings of Ernest Becker and Otto Rank, is a developing area of study within the academic study of psychology. It looks at what researchers claim are implicit emotional reactions of people confronted with the knowledge that they will eventually die.
Also, Gerd B. Achenbach has refreshed the Socratic tradition with his own blend of philosophical counseling; as did Michel Weber with his Chromatiques Center in Belgium.
Criticisms
General criticisms
Walter Kaufmann criticized "the profoundly unsound methods and the dangerous contempt for reason that have been so prominent in existentialism." Logical positivist philosophers, such as Rudolf Carnap and A. J. Ayer, assert that existentialists are often confused about the verb "to be" in their analyses of "being". Specifically, they argue that the verb "is" is transitive and pre-fixed to a predicate (e.g., an apple is red) (without a predicate, the word "is" is meaningless), and that existentialists frequently misuse the term in this manner. Wilson has stated in his book The Angry Years that existentialism has created many of its own difficulties: "We can see how this question of freedom of the will has been vitiated by post-romantic philosophy, with its inbuilt tendency to laziness and boredom, we can also see how it came about that existentialism found itself in a hole of its own digging, and how the philosophical developments since then have amounted to walking in circles round that hole."
Sartre's philosophy
Many critics argue Sartre's philosophy is contradictory. For example, see Magda Stroe's arguments. Specifically, they argue that Sartre makes metaphysical arguments despite his claiming that his philosophical views ignore metaphysics. Herbert Marcuse criticized Being and Nothingness for projecting anxiety and meaninglessness onto the nature of existence itself: "Insofar as Existentialism is a philosophical doctrine, it remains an idealistic doctrine: it hypostatizes specific historical conditions of human existence into ontological and metaphysical characteristics. Existentialism thus becomes part of the very ideology which it attacks, and its radicalism is illusory."
In Letter on Humanism, Heidegger criticized Sartre's existentialism:
See also
Abandonment (existentialism)
Disenchantment
Existential phenomenology
Existential risk
Existentiell
List of existentialists
Meaning (existential)
Meaning-making
Philosophical pessimism
Self-reflection
References
Citations
Sources
Bibliography
Albert Camus: Lyrical and Critical Essays. Edited by Philip Thody (interview with Jeanie Delpech, in Les Nouvelles littéraires, November 15, 1945). p. 345.
Further reading
Fallico, Arthuro B. (1962). Art & Existentialism. Englewood Cliffs, N.J.: Prentice-Hall.
External links
"Existentialism is a Humanism", a lecture given by Jean-Paul Sartre
The Existential Primer
Buddhists, Existentialists and Situationists: Waking up in Waking Life
Journals and articles
Stirrings Still: The International Journal of Existential Literature
Existential Analysis published by The Society for Existential Analysis
19th century in philosophy
20th century in philosophy
1940s neologisms
Criticism of rationalism
Individualism
Metaphysical theories
Modernism
Philosophical schools and traditions
Philosophy of life
Social theories
Teleology
A priori and a posteriori

A priori ('from the earlier') and a posteriori ('from the later') are Latin phrases used in philosophy to distinguish types of knowledge, justification, or argument by their reliance on experience. A priori knowledge is independent from any experience. Examples include mathematics, tautologies and deduction from pure reason. A posteriori knowledge depends on empirical evidence. Examples include most fields of science and aspects of personal knowledge.
The terms originate from the analytic methods found in the Organon, a collection of works by Aristotle. The Prior Analytics concerns deductive logic, which proceeds from definitions and first principles. The Posterior Analytics concerns inductive logic, which proceeds from observational evidence.
Both terms appear in Euclid's Elements and were popularized by Immanuel Kant's Critique of Pure Reason, an influential work in the history of philosophy. Both terms are primarily used as modifiers to the noun knowledge (e.g., a priori knowledge). A priori can also be used to modify other nouns such as truth. Philosophers may use apriority, apriorist and aprioricity as nouns referring to the quality of being a priori.
Examples
A priori
Consider the proposition: "If George V reigned at least four days, then he reigned more than three days." This is something that one knows a priori because it expresses a statement that one can derive by reason alone.
A posteriori
Consider the proposition: "George V reigned from 1910 to 1936." This is something that (if true) one must come to know a posteriori because it expresses an empirical fact unknowable by reason alone.
Aprioricity, analyticity and necessity
Relation to the analytic–synthetic distinction
Several philosophers, in reaction to Immanuel Kant, sought to explain a priori knowledge without appealing to what Paul Boghossian describes as "a special faculty [intuition]... that has never been described in satisfactory terms." One theory, popular among the logical positivists of the early 20th century, is what Boghossian calls the "analytic explanation of the a priori." The distinction between analytic and synthetic propositions was first introduced by Kant. While his original distinction was primarily drawn in terms of conceptual containment, the contemporary version of such distinction primarily involves, as American philosopher W. V. O. Quine put it, the notions of "true by virtue of meanings and independently of fact."
Analytic propositions are considered true by virtue of their meaning alone, while a posteriori propositions are true by virtue of their meaning and of certain facts about the world. According to the analytic explanation of the a priori, all a priori knowledge is analytic; so a priori knowledge need not require a special faculty of pure intuition, since it can be accounted for simply by one's ability to understand the meaning of the proposition in question. More simply, proponents of this explanation claimed to have reduced a dubious metaphysical faculty of pure reason to a legitimate linguistic notion of analyticity.
The analytic explanation of a priori knowledge has undergone several criticisms. Most notably, Quine argues that the analytic–synthetic distinction is illegitimate:But for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith.
Although the soundness of Quine's proposition remains uncertain, it had a powerful effect on the project of explaining the a priori in terms of the analytic.
Relation to the necessary truths and contingent truths
The metaphysical distinction between necessary and contingent truths has also been related to a priori and a posteriori knowledge.
A proposition that is necessarily true is one in which its negation is self-contradictory; it is true in every possible world. For example, considering the proposition "all bachelors are unmarried": its negation (i.e. the proposition that some bachelors are married) is incoherent due to the concept of being unmarried (or the meaning of the word "unmarried") being tied to part of the concept of being a bachelor (or part of the definition of the word "bachelor"). To the extent that contradictions are impossible, self-contradictory propositions are necessarily false as it is impossible for them to be true. The negation of a self-contradictory proposition is, therefore, supposed to be necessarily true.
By contrast, a proposition that is contingently true is one in which its negation is not self-contradictory. Thus, it is said not to be true in every possible world. As Jason Baehr suggests, it seems plausible that all necessary propositions are known a priori, because "[s]ense experience can tell us only about the actual world and hence about what is the case; it can say nothing about what must or must not be the case."
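The "true in every possible world" gloss lends itself to a small computational illustration. The toy worlds and proposition names below are invented for this sketch, not drawn from the text:

```python
# Toy possible-worlds model (illustrative only; worlds and propositions invented).
# A proposition is "necessary" if it holds in every world, and "contingent"
# if it holds in some worlds but not in others.

worlds = [
    {"all_bachelors_unmarried": True, "george_v_reigned_1910_1936": True},
    {"all_bachelors_unmarried": True, "george_v_reigned_1910_1936": False},
]

def necessary(prop: str) -> bool:
    # True in every possible world
    return all(w[prop] for w in worlds)

def contingent(prop: str) -> bool:
    # True in at least one world and false in at least one other
    return any(w[prop] for w in worlds) and not necessary(prop)

print(necessary("all_bachelors_unmarried"))      # the analytic truth holds in every world
print(contingent("george_v_reigned_1910_1936"))  # the historical fact could have been otherwise
```

The sketch only models the metaphysical side of the distinction; how a proposition is known (a priori or a posteriori) is a separate, epistemological question, which is the point the surrounding text goes on to make.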
Following Kant, some philosophers have considered the relationship between aprioricity, analyticity and necessity to be extremely close. According to Jerry Fodor, "positivism, in particular, took it for granted that a priori truths must be necessary." Since Kant, the distinction between analytic and synthetic propositions has slightly changed. Analytic propositions were largely taken to be "true by virtue of meanings and independently of fact", while synthetic propositions were not—one must conduct some sort of empirical investigation, looking to the world, to determine the truth-value of synthetic propositions.
Separation
Aprioricity, analyticity and necessity have since been more clearly separated from each other. American philosopher Saul Kripke (1972), for example, provides strong arguments against this position, whereby he contends that there are necessary a posteriori truths. For example, consider the proposition that water is H2O (if it is true): according to Kripke, this statement is both necessarily true, because water and H2O are the same thing, identical in every possible world, and truths of identity are logically necessary; and a posteriori, because it is known only through empirical investigation. Following such considerations of Kripke and others (see Hilary Putnam), philosophers tend to distinguish the notion of aprioricity more clearly from that of necessity and analyticity.
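Kripke's route to necessary a posteriori truths rests on the necessity of identity. A standard modal reconstruction of that step (an outline, not quoted from the source) runs:

```latex
% Necessity of identity: a standard modal reconstruction
\begin{align*}
&\text{(1)}\quad \forall x\,\Box(x = x)
  && \text{everything is necessarily self-identical}\\
&\text{(2)}\quad a = b \;\rightarrow\; \bigl(\Box(a = a) \rightarrow \Box(a = b)\bigr)
  && \text{Leibniz's law (substitutivity of identicals)}\\
&\text{(3)}\quad a = b \;\rightarrow\; \Box(a = b)
  && \text{from (1) and (2)}
\end{align*}
```

Instantiating $a$ with water and $b$ with $\mathrm{H_2O}$ yields the claim in the text: the identity, once discovered empirically, holds in every possible world.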
Kripke's definitions of these terms diverge in subtle ways from Kant's. Taking these differences into account, Kripke's controversial analysis of naming as contingent and a priori would, according to Stephen Palmquist, best fit into Kant's epistemological framework by calling it "analytic a posteriori." Aaron Sloman presented a brief defence of Kant's three distinctions (analytic/synthetic, apriori/empirical and necessary/contingent), in that it did not assume "possible world semantics" for the third distinction, merely that some part of this world might have been different.
The relationship between aprioricity, necessity and analyticity is not easy to discern. Most philosophers at least seem to agree that while the various distinctions may overlap, the notions are clearly not identical: the a priori/a posteriori distinction is epistemological; the analytic/synthetic distinction is linguistic; and the necessary/contingent distinction is metaphysical.
History
Early uses
The term a priori is Latin for 'from what comes before' (or, less literally, 'from first principles, before experience'). In contrast, the term a posteriori is Latin for 'from what comes later' (or 'after experience').
They appear in Latin translations of Euclid's Elements, a work widely considered during the early European modern period as the model for precise thinking.
An early philosophical use of what might be considered a notion of a priori knowledge (though not called by that name) is Plato's theory of recollection, related in the dialogue Meno, according to which something like a priori knowledge is knowledge inherent, intrinsic in the human mind.
Albert of Saxony, a 14th-century logician, wrote on both a priori and a posteriori.
The early modern Thomistic philosopher John Sergeant differentiates the terms by the direction of inference regarding proper causes and effects. To demonstrate something a priori is to "Demonstrate Proper Effects from Proper Efficient Causes" and likewise to demonstrate a posteriori is to demonstrate "Proper Efficient Causes from Proper Effects", according to his 1696 work The Method to Science Book III, Lesson IV, Section 7.
G. W. Leibniz introduced a distinction between a priori and a posteriori criteria for the possibility of a notion in his 1684 short treatise "Meditations on Knowledge, Truth, and Ideas". A priori and a posteriori arguments for the existence of God appear in his Monadology (1714).
George Berkeley outlined the distinction in his 1710 work A Treatise Concerning the Principles of Human Knowledge (para. XXI).
Immanuel Kant
The 18th-century German philosopher Immanuel Kant (1781) advocated a blend of rationalist and empiricist theories. Kant says, "Although all our cognition begins with experience, it does not follow that it arises from [is caused by] experience." According to Kant, a priori cognition is transcendental, or based on the form of all possible experience, while a posteriori cognition is empirical, based on the content of experience: "It is quite possible that our empirical knowledge is a compound of that which we receive through impressions, and that which the faculty of cognition supplies from itself (sensuous impressions [sense data] giving merely the occasion [opportunity for a cause to produce its effect])." Contrary to contemporary usages of the term, Kant believes that a priori knowledge is not entirely independent of the content of experience. Unlike the rationalists, Kant thinks that a priori cognition, in its pure form, that is, without the admixture of any empirical content, is limited to the deduction of the conditions of possible experience. These a priori, or transcendental, conditions are seated in one's cognitive faculties, and are not provided by experience in general or any experience in particular (although an argument exists that a priori intuitions can be "triggered" by experience).
Kant nominated and explored the possibility of a transcendental logic with which to consider the deduction of the a priori in its pure form. Space, time and causality are considered pure a priori intuitions. Kant reasoned that the pure a priori intuitions are established via his transcendental aesthetic and transcendental logic. He claimed that the human subject would not have the kind of experience that it has were these a priori forms not in some way constitutive of him as a human subject. For instance, a person would not experience the world as an orderly, rule-governed place unless time, space and causality were determinant functions in the form of perceptual faculties, i.e., there can be no experience in general without space, time or causality as particular determinants thereon. The claim is more formally known as Kant's transcendental deduction and it is the central argument of his major work, the Critique of Pure Reason. The transcendental deduction argues that time, space and causality are ideal as much as real. In consideration of a possible logic of the a priori, this most famous of Kant's deductions makes the case for the fact of subjectivity, for what constitutes subjectivity, and for what relation it holds with objectivity and the empirical.
Johann Fichte
After Kant's death, a number of philosophers saw themselves as correcting and expanding his philosophy, leading to the various forms of German Idealism. One of these philosophers was Johann Fichte. His student (and critic), Arthur Schopenhauer, accused him of rejecting the distinction between a priori and a posteriori knowledge:
See also
A priori probability
A posteriori necessity
Ab initio
Abductive reasoning
Deductive reasoning
Inductive reasoning
Off the verandah
Relativized a priori
Tabula rasa
Transcendental empiricism
Transcendental hermeneutic phenomenology
Transcendental nominalism
References
Notes
Citations
Sources
Further reading
External links
A priori / a posteriori – in the Philosophical Dictionary online.
"Rationalism vs. Empiricism" – an article by Peter Markie in the Stanford Encyclopedia of Philosophy.
Concepts in epistemology
Conceptual distinctions
Justification (epistemology)
Kantianism
Latin philosophical phrases
Definitions of knowledge
Concepts in logic
Philosophical realism
Philosophical realism – usually not treated as a position of its own but as a stance towards other subject matters – is the view that a certain kind of thing (ranging widely from abstract objects like numbers to moral statements to the physical world itself) has mind-independent existence, i.e. that it exists even in the absence of any mind perceiving it or that its existence is not just a mere appearance in the eye of the beholder. This includes a number of positions within epistemology and metaphysics which express that a given thing instead exists independently of knowledge, thought, or understanding. This can apply to items such as the physical world, the past and future, other minds, and the self, though may also apply less directly to things such as universals, mathematical truths, moral truths, and thought itself. However, realism may also include various positions which instead reject metaphysical treatments of reality entirely.
Realism can also be a view about the properties of reality in general, holding that reality exists independent of the mind, as opposed to non-realist views (like some forms of skepticism and solipsism) which question the certainty of anything beyond one's own mind. Philosophers who profess realism often claim that truth consists in a correspondence between cognitive representations and reality.
Realists tend to believe that whatever we believe now is only an approximation of reality but that the accuracy and fullness of understanding can be improved. In some contexts, realism is contrasted with idealism. Today it is more often contrasted with anti-realism, for example in the philosophy of science.
The oldest use of the term "realism" appeared in medieval scholastic interpretations and adaptations of ancient Greek philosophy.
The position was also held among many ancient Indian philosophies.
Etymology
The term comes from Late Latin realis "real" and was first used in the abstract metaphysical sense by Immanuel Kant in 1781 (CPR A 369).
Varieties
Metaphysical realism
Metaphysical realism maintains that "whatever exists does so, and has the properties and relations it does, independently of deriving its existence or nature from being thought of or experienced." In other words, an objective reality exists (not merely one or more subjective realities).
Naive or direct realism
Naive realism, also known as direct realism, is a philosophy of mind rooted in a common sense theory of perception that claims that the senses provide us with direct awareness of the external world.
In contrast, some forms of idealism assert that no world exists apart from mind-dependent ideas, and some forms of skepticism say that we cannot trust our senses. The naive realist view is that objects have properties, such as texture, smell, taste, and colour, and that we usually perceive these properties correctly: we perceive objects as they really are.
Immanent realism
Immanent realism is the ontological understanding which holds that universals are immanently real within particulars themselves, not in a separate realm, and not mere names. Most often associated with Aristotle and the Aristotelian tradition.
Scientific realism
Scientific realism is, at the most general level, the view that the world described by science is the real world, as it is, independent of what we might take it to be. Within philosophy of science, it is often framed as an answer to the question "how is the success of science to be explained?" The debate over what the success of science involves centers primarily on the status of unobservable entities apparently talked about by scientific theories. Generally, those who are scientific realists assert that one can make reliable claims about unobservables (viz., that they have the same ontological status as observables). Analytic philosophers generally have a commitment to scientific realism, in the sense of regarding the scientific method as a reliable guide to the nature of reality. The main alternative to scientific realism is instrumentalism.
Scientific realism in physics
Realism in physics (especially quantum mechanics) is the claim that the world is in some sense mind-independent: that even if the results of a possible measurement do not pre-exist the act of measurement, that does not require that they are the creation of the observer (contrary to the "consciousness causes collapse" interpretation of quantum mechanics). That interpretation, on the other hand, states that the wave function is already the full description of reality: the different possible realities described by the wave function are equally true, and the observer collapses the wave function into their own reality. One's reality can be mind-dependent under this interpretation of quantum mechanics.
Moral realism
Moral realism is the position that ethical sentences express propositions that refer to objective features of the world.
Aesthetic realism
Aesthetic realism (not to be confused with Aesthetic Realism, the philosophy developed by Eli Siegel, or "realism" in the arts) is the view that there are mind-independent aesthetic facts.
History of metaphysical realism
Ancient Greek philosophy
In ancient Greek philosophy, realist doctrines about universals were proposed by Plato and Aristotle.
Platonic realism is a radical form of realism regarding the existence of abstract objects, including universals, which are often translated from Plato's works as "Forms". Since Plato frames Forms as ideas that are literally real (existing even outside of human minds), this stance is also called Platonic idealism. This should not be confused with "idealistic" in the ordinary sense of "optimistic" or with other types of philosophical idealism, as presented by philosophers such as George Berkeley. As Platonic abstractions are not spatial, temporal, or subjectively mental, they are arguably not compatible with the emphasis of Berkeley's idealism grounded in mental existence. Plato's Forms include numbers and geometrical figures, making his theory also include mathematical realism; they also include the Form of the Good, making it additionally include ethical realism.
In Aristotle's more modest view, the existence of universals (like "blueness") is dependent on the particulars that exemplify them (like a particular "blue bird", "blue piece of paper", "blue robe", etc.), and those particulars exist independent of any minds: classic metaphysical realism.
Ancient Indian philosophy
There were many ancient Indian realist schools, such as the Mimamsa, Vishishtadvaita, Dvaita, Nyaya, Yoga, Samkhya, Sautrantika, Jain, Vaisesika, and others. They argued for their realist positions, heavily criticized idealism such as that of the Yogacara school, and composed refutations of the Yogacara position.
Medieval philosophy
Medieval realism developed out of debates over the problem of universals. Universals are terms or properties that can be applied to many things, such as "red", "beauty", "five", or "dog". Realism (also known as exaggerated realism) in this context, contrasted with conceptualism and nominalism, holds that such universals really exist, independently and somehow prior to the world. Moderate realism holds that they exist, but only insofar as they are instantiated in specific things; they do not exist separately from the specific thing. Conceptualism holds that they exist, but only in the mind, while nominalism holds that universals do not "exist" at all but are no more than words (flatus vocis) that describe specific objects.
Proponents of moderate realism included Thomas Aquinas, Bonaventure, and Duns Scotus (cf. Scotist realism).
Early modern philosophy
In early modern philosophy, Scottish Common Sense Realism was a school of philosophy which sought to defend naive realism against philosophical paradox and scepticism, arguing that matters of common sense are within the reach of common understanding and that common-sense beliefs even govern the lives and thoughts of those who hold non-commonsensical beliefs. It originated in the ideas of the most prominent members of the Scottish School of Common Sense, Thomas Reid, Adam Ferguson and Dugald Stewart, during the 18th century Scottish Enlightenment and flourished in the late 18th and early 19th centuries in Scotland and America.
The roots of Scottish Common Sense Realism can be found in responses to such philosophers as John Locke, George Berkeley, and David Hume. The approach was a response to the "ideal system" that began with Descartes' concept of the limitations of sense experience and led Locke and Hume to a skepticism that called religion and the evidence of the senses equally into question. The common sense realists found skepticism to be absurd and so contrary to common experience that it had to be rejected. They taught that ordinary experiences provide intuitively certain assurance of the existence of the self, of real objects that could be seen and felt and of certain "first principles" upon which sound morality and religious beliefs could be established. Its basic principle was enunciated by its founder and greatest figure, Thomas Reid:
If there are certain principles, as I think there are, which the constitution of our nature leads us to believe, and which we are under a necessity to take for granted in the common concerns of life, without being able to give a reason for them—these are what we call the principles of common sense; and what is manifestly contrary to them, is what we call absurd.
Late modern philosophy
In late modern philosophy, a notable school of thought advocating metaphysical realism was Austrian realism. Its members included Franz Brentano, Alexius Meinong, Vittorio Benussi, Ernst Mally, and early Edmund Husserl. These thinkers stressed the objectivity of truth and its independence of the nature of those who judge it. (See also Graz School.)
Dialectical materialism, a philosophy of nature based on the writings of late modern philosophers Karl Marx and Friedrich Engels, is interpreted to be a form of ontological realism.
According to Michael Resnik, Gottlob Frege's work after 1891 can be interpreted as a contribution to realism.
Contemporary philosophy
In contemporary analytic philosophy, Bertrand Russell, Ludwig Wittgenstein, J. L. Austin, Karl Popper, and Gustav Bergmann espoused metaphysical realism. Hilary Putnam initially espoused metaphysical realism, but he later embraced a form of anti-realism that he termed "internal realism." Conceptualist realism (a view put forward by David Wiggins) is a form of realism, according to which our conceptual framework maps reality.
Speculative realism is a movement in contemporary Continental-inspired philosophy that defines itself loosely in its stance of metaphysical realism against the dominant forms of post-Kantian philosophy.
See also
Anti-realism
Critical realism
Dialectical realism
Epistemological realism
Extended modal realism
Legal realism
Modal realism
Objectivism
Philosophy of social science
Principle of bivalence
Problem of future contingents
Realism (disambiguation)
Truth-value link realism
Speculative realism
Direct and indirect realism
Notes
References
External links
Miller, Alexander, "Realism", The Stanford Encyclopedia of Philosophy (SEP)
O'Brien, Daniel, "Objects of Perception", The Internet Encyclopedia of Philosophy (IEP)
An experimental test of non-local realism. Physics research paper in Nature which gives negative experimental results for certain classes of realism in the sense of physics.
Logic
Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term "a logic" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.
Logic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises "it's Sunday" and "if it's Sunday then I don't have to work" leading to the conclusion "I don't have to work". Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like ∧ (and) or → (if...then). Simple propositions also have parts, like "Sunday" or "work" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.
Arguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalizations, such as inferring that all ravens are black based on many individual observations of black ravens. Abductive arguments are inferences to the best explanation, for example, when a doctor concludes that a patient has a certain disease which explains the symptoms they suffer. Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.
Logic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.
Definition
The word "logic" originates from the Greek word "logos", which has a variety of translations, such as reason, discourse, or language. Logic is traditionally defined as the study of the laws of thought or correct reasoning, and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences. An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.
Formal logic
Formal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.
Formal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. For example, modus ponens is a rule of inference according to which all arguments of the form "(1) p, (2) if p then q, (3) therefore q" are valid, independent of what the terms p and q stand for. In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim "either it is raining, or it is not". These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim "if p then q" is a logical truth.
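The claim that a logical truth holds "under all interpretations of its non-logical terms" can be checked mechanically for propositional formulas by enumerating every assignment of truth values. The following sketch is illustrative only; the function and variable names are invented for this example, not taken from any logic library:

```python
from itertools import product

def is_logical_truth(formula, variables):
    """True if the formula holds under every assignment of truth values
    to its variables, i.e. under all interpretations."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# "Either it is raining, or it is not": p or not p (law of excluded middle)
excluded_middle = lambda v: v["p"] or not v["p"]
print(is_logical_truth(excluded_middle, ["p"]))  # True

# The conditional matching modus ponens, (p and (p -> q)) -> q,
# with x -> y written as its material reading (not x or y):
mp_conditional = lambda v: not (v["p"] and (not v["p"] or v["q"])) or v["q"]
print(is_logical_truth(mp_conditional, ["p", "q"]))  # True

# A contingent formula is not a logical truth:
print(is_logical_truth(lambda v: v["p"], ["p"]))  # False
```

The second check illustrates the correspondence noted above: because the inference from p and "if p then q" to q is valid, the matching conditional comes out true on every row.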
Formal logic uses formal languages to express and analyze arguments. They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed.
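The idea of exact syntactic rules determining which strings of symbols count as well-formed formulas can be illustrated with a small recursive check. The grammar below is a toy invented for this example (three atoms plus "not", "and", "or", "implies"), not any standard formal language:

```python
# Toy grammar: a formula is an atom, a negation ["not", F],
# or a binary compound [op, F, G] with op in BINARY.
ATOMS = {"p", "q", "r"}
BINARY = {"and", "or", "implies"}

def well_formed(expr):
    """Recursively check whether expr is a well-formed formula of the toy language."""
    if isinstance(expr, str):
        return expr in ATOMS
    if isinstance(expr, list) and len(expr) == 2 and expr[0] == "not":
        return well_formed(expr[1])
    if isinstance(expr, list) and len(expr) == 3 and expr[0] in BINARY:
        return well_formed(expr[1]) and well_formed(expr[2])
    return False

print(well_formed(["implies", "p", ["not", "q"]]))  # True
print(well_formed(["and", "p"]))                    # False: second conjunct missing
```

The point of such rules is exactly the one made above: because well-formedness is decided by structure alone, validity-checking procedures can be defined over formulas without consulting their meaning.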
The term "logic" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. Starting in the late 19th century, many new formal systems have been proposed. There are disagreements about what makes a formal system a logic. For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.
Informal logic
When understood in a wide sense, logic encompasses both formal and informal logic. Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. In this regard, it considers problems that formal logic on its own is unable to address. Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.
Many characterizations of informal logic have been suggested but there is no general agreement on its precise definition. The most literal approach sees the terms "formal" and "informal" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. On this view, the argument "Birds fly. Tweety is a bird. Therefore, Tweety flies." belongs to natural language and is examined by informal logic. But the formal translation "(1) ∀x(Bird(x) → Flies(x)); (2) Bird(Tweety); (3) Flies(Tweety)" is studied by formal logic. The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.
Another characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that "all ravens I have seen so far are black" to the conclusion "all ravens are black".
A further approach is to define informal logic as the study of informal fallacies. Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy "you are either with us or against us; you are not with us; therefore, you are against us". Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of logical constants for correct inferences while informal logic also takes the meaning of substantive concepts into account. Further approaches focus on the discussion of logical topics with or without formal devices and on the role of epistemology for the assessment of arguments.
Basic concepts
Premises, conclusions, and truth
Premises and conclusions
Premises and conclusions are the basic parts of inferences or arguments and therefore play a central role in logic. In the case of a valid inference or a correct argument, the conclusion follows from the premises, or in other words, the premises support the conclusion. For instance, the premises "Mars is red" and "Mars is a planet" support the conclusion "Mars is a red planet". For most types of logic, it is accepted that premises and conclusions have to be truth-bearers. This means that they have a truth value: they are either true or false. Contemporary philosophy generally sees them either as propositions or as sentences. Propositions are the denotations of sentences and are usually seen as abstract objects. For example, the English sentence "the tree is green" is different from the German sentence "der Baum ist grün" but both express the same proposition.
Propositional theories of premises and conclusions are often criticized because they rely on abstract objects. For instance, philosophical naturalists usually reject the existence of abstract objects. Other arguments concern the challenges involved in specifying the identity criteria of propositions. These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on a page of a book. But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning an argument's validity would not only depend on its parts but also on its context and on how it is interpreted. Another approach is to understand premises and conclusions in psychological terms as thoughts or judgments. This position is known as psychologism. It was discussed at length around the turn of the 20th century but it is not widely accepted today.
Internal structure
Premises and conclusions have an internal structure. As propositions or sentences, they can be either simple or complex. A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like "and" or "if...then". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates. For example, the simple proposition "Mars is red" can be formed by applying the predicate "red" to the singular term "Mars". In contrast, the complex proposition "Mars is red and Venus is white" is made up of two simple propositions connected by the propositional connective "and".
Whether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth values of their parts. But this relation is more complicated in the case of simple propositions and their subpropositional parts. These subpropositional parts have meanings of their own, like referring to objects or classes of objects. Whether the simple proposition they form is true depends on their relation to reality, i.e. what the objects they refer to are like. This topic is studied by theories of reference.
Logical truth
Some complex propositions are true independently of the substantive meanings of their parts. In classical logic, for example, the complex proposition "either Mars is red or Mars is not red" is true independent of whether its parts, like the simple proposition "Mars is red", are true or false. In such cases, the truth is called a logical truth: a proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true under all interpretations of its non-logical terms. In some modal logics, this means that the proposition is true in all possible worlds. Some theorists define logic as the study of logical truths.
Truth tables
Truth tables can be used to show how logical connectives work or how the truth values of complex propositions depend on their parts. They have a column for each input variable. Each row corresponds to one possible combination of the truth values these variables can take; for truth tables presented in the English literature, the symbols "T" and "F" or "1" and "0" are commonly used as abbreviations for the truth values "true" and "false". The first columns present all the possible truth-value combinations for the input variables. Entries in the other columns present the truth values of the corresponding expressions as determined by the input values. For example, the expression "p ∧ q" uses the logical connective ∧ (and). It could be used to express a sentence like "yesterday was Sunday and the weather was good". It is only true if both of its input variables, p ("yesterday was Sunday") and q ("the weather was good"), are true. In all other cases, the expression as a whole is false. Other important logical connectives are ¬ (not), ∨ (or), → (if...then), and ↑ (Sheffer stroke). Given the conditional proposition p → q, one can form truth tables of its converse q → p, its inverse ¬p → ¬q, and its contrapositive ¬q → ¬p. Truth tables can also be defined for more complex expressions that use several propositional connectives.
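The relationships among a conditional, its converse, inverse, and contrapositive can be verified by computing their truth tables. This is a minimal sketch; the helper names are made up for the example, and the conditional is given its material-conditional reading:

```python
from itertools import product

def truth_table(connective):
    """Rows (p, q, value) for a two-place truth function."""
    return [(p, q, connective(p, q)) for p, q in product([True, False], repeat=2)]

conj = lambda p, q: p and q                       # conjunction
cond = lambda p, q: (not p) or q                  # conditional (material reading)
converse = lambda p, q: cond(q, p)                # converse of the conditional
inverse = lambda p, q: cond(not p, not q)         # inverse
contrapositive = lambda p, q: cond(not q, not p)  # contrapositive

# Print the truth table for conjunction: true only in the first row.
for p, q, value in truth_table(conj):
    print(f"{p!s:5}  {q!s:5}  {value!s:5}")

# A conditional always agrees with its contrapositive, and the converse
# with the inverse; conditional and converse can come apart.
print(truth_table(cond) == truth_table(contrapositive))  # True
print(truth_table(converse) == truth_table(inverse))     # True
print(truth_table(cond) == truth_table(converse))        # False
```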
Arguments and inferences
Logic is commonly defined in terms of arguments or inferences as the study of their correctness. An argument is a set of premises together with a conclusion. An inference is the process of reasoning from these premises to the conclusion. But these terms are often used interchangeably in logic. Arguments are correct or incorrect depending on whether their premises support their conclusion. Premises and conclusions, on the other hand, are true or false depending on whether they are in accord with reality. In formal logic, a sound argument is an argument that is both correct and has only true premises. Sometimes a distinction is made between simple and complex arguments. A complex argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful.
Arguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer non-deductive support to their conclusions. For such cases, the term ampliative or inductive reasoning is used. Deductive arguments are associated with formal logic in contrast to the relation between ampliative arguments and informal logic.
Deductive
A deductively valid argument is one whose premises guarantee the truth of its conclusion. For instance, the argument "(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument "(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs" is also valid because the conclusion follows necessarily from the premises.
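One way to illustrate the validity of such a categorical argument is to search a small finite domain for a countermodel: an assignment of extensions to "frogs", "amphibians", and "cats" that makes the premises true and the conclusion false. The sketch below is a toy search procedure invented for this example, not a general proof method (exhaustive search over a small domain happens to suffice for this syllogistic form):

```python
from itertools import combinations, product

DOMAIN = range(3)  # a small finite domain to search for countermodels

def subsets(xs):
    """All subsets of xs, as sets."""
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def counterexample():
    """Look for extensions making both premises true and the conclusion
    false. Premises: frogs are a subset of amphibians; cats and amphibians
    are disjoint. Conclusion: cats and frogs are disjoint."""
    for frogs, amphibians, cats in product(subsets(DOMAIN), repeat=3):
        premises = frogs <= amphibians and not (cats & amphibians)
        conclusion = not (cats & frogs)
        if premises and not conclusion:
            return frogs, amphibians, cats
    return None

print(counterexample())  # None: no countermodel found, as validity predicts
```

Note that the search never consults whether frogs really are amphibians: as the frogs-are-mammals variant above shows, validity depends only on the form of the argument.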
According to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. that they hold by logical necessity for the given propositions, independent of any other circumstances.
Because of the first feature, the focus on formality, deductive inference is usually identified with rules of inference. Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid. Modus ponens is a prominent rule of inference. It has the form "p; if p, then q; therefore q". Knowing that it has just rained and that after rain the streets are wet, one can use modus ponens to deduce that the streets are wet.
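Because modus ponens is defined purely by the form of the premises, it can be applied mechanically. The toy forward-chaining procedure below, with sentences represented as plain strings, is an illustration invented for this example rather than a standard implementation:

```python
def modus_ponens(facts, conditionals):
    """Repeatedly apply modus ponens: from p and a conditional (p, q),
    i.e. "if p then q", derive q, until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts = {"it has rained"}
conditionals = [("it has rained", "the streets are wet")]
print(modus_ponens(facts, conditionals))
# {'it has rained', 'the streets are wet'}
```

Chaining several conditionals together in this way also models the complex arguments mentioned earlier, where the conclusion of one step serves as a premise of the next.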
The third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false. Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises. But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes between surface and depth information. The surface information of a sentence is the information it presents explicitly. Depth information is the totality of the information contained in the sentence, both explicitly and implicitly. According to this view, deductive inferences are uninformative on the depth level. But they can be highly informative on the surface level by making implicit information explicit. This happens, for example, in mathematical proofs.
Ampliative
Ampliative arguments are arguments whose conclusions contain additional information not found in their premises. In this regard, they are more interesting since they contain information on the depth level and the thinker may learn something genuinely new. But this feature comes with a certain cost: the premises support the conclusion in the sense that they make its truth more likely but they do not ensure its truth. This means that the conclusion of an ampliative argument may be false even though all its premises are true. This characteristic is closely related to non-monotonicity and defeasibility: it may be necessary to retract an earlier conclusion upon receiving new information or in light of new inferences drawn. Ampliative reasoning plays a central role in many arguments found in everyday discourse and the sciences. Ampliative arguments are not automatically incorrect. Instead, they just follow different standards of correctness. The support they provide for their conclusion usually comes in degrees. This means that strong ampliative arguments make their conclusion very likely while weak ones are less certain. As a consequence, the line between correct and incorrect arguments is blurry in some cases, such as when the premises offer weak but non-negligible support. This contrasts with deductive arguments, which are either valid or invalid with nothing in-between.
The terminology used to categorize ampliative arguments is inconsistent. Some authors, like James Hawthorne, use the term "induction" to cover all forms of non-deductive arguments. But in a more narrow sense, induction is only one type of ampliative argument alongside abductive arguments. Some philosophers, like Leo Groarke, also allow conductive arguments as another type. In this narrow sense, induction is often defined as a form of statistical generalization. In this case, the premises of an inductive argument are many individual observations that all show a certain pattern. The conclusion then is a general law that this pattern always obtains. In this sense, one may infer that "all elephants are gray" based on one's past observations of the color of elephants. A closely related form of inductive inference has as its conclusion not a general law but one more specific instance, as when it is inferred that an elephant one has not seen yet is also gray. Some theorists, like Igor Douven, stipulate that inductive inferences rest only on statistical considerations. This way, they can be distinguished from abductive inference.
Abductive inference may or may not take statistical observations into consideration. In either case, the premises offer support for the conclusion because the conclusion is the best explanation of why the premises are true. In this sense, abduction is also called the inference to the best explanation. For example, given the premise that there is a plate with breadcrumbs in the kitchen in the early morning, one may infer the conclusion that one's house-mate had a midnight snack and was too tired to clean the table. This conclusion is justified because it is the best explanation of the current state of the kitchen. For abduction, it is not sufficient that the conclusion explains the premises. For example, the conclusion that a burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation.
Fallacies
Not all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion. So the argument "it is sunny today; therefore spiders have eight legs" is fallacious even though the conclusion is true. Some theorists, like John Stuart Mill, give a more restrictive definition of fallacies by additionally requiring that they appear to be correct. This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them. However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people.
Fallacies are usually divided into formal and informal fallacies. For formal fallacies, the source of the error is found in the form of the argument. For example, denying the antecedent is one type of formal fallacy, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male". But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the content or the context of the argument. Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are responsible for their flaw, as in "feathers are light; what is light cannot be dark; therefore feathers cannot be dark". Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise. In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it.
Definitory and strategic rules
The main focus of most logicians is to study the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as rules of inference. They are definitory rules, which determine whether an inference is correct or which inferences are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary to reach a given conclusion based on a set of premises. This distinction does not just apply to logic but also to games. In chess, for example, the definitory rules dictate that bishops may only move diagonally. The strategic rules, on the other hand, describe how the allowed moves may be used to win a game, for instance, by controlling the center and by defending one's king. It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning.
Formal systems
A formal system of logic consists of a formal language together with a set of axioms and a proof system used to draw inferences from these axioms. In logic, axioms are statements that are accepted without proof. They are used to justify other statements. Some theorists also include a semantics that specifies how the expressions of the formal language relate to real objects. Starting in the late 19th century, many new formal systems have been proposed.
A formal language consists of an alphabet and syntactic rules. The alphabet is the set of basic symbols used in expressions. The syntactic rules determine how these symbols may be arranged to result in well-formed formulas. For instance, the syntactic rules of propositional logic determine that "(P ∧ Q)" is a well-formed formula but "∧Q" is not since the logical conjunction "∧" requires terms on both sides.
A proof system is a collection of rules to construct formal proofs. It is a tool to arrive at conclusions from a set of axioms. Rules in a proof system are defined in terms of the syntactic form of formulas independent of their specific content. For instance, the classical rule of conjunction introduction states that P ∧ Q follows from the premises P and Q. Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are different types of proof systems including natural deduction and sequent calculi.
A semantics is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula P ∧ Q the denotation "true" whenever P and Q are true. From the semantic point of view, a premise entails a conclusion if the conclusion is true whenever the premise is true.
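This semantic notion of entailment can be checked mechanically for propositional logic by enumerating all truth-value assignments. The sketch below encodes formulas as nested tuples (an assumed representation chosen for brevity) and tests whether the conclusion is true whenever the premise is true:

```python
from itertools import product

# Formulas as nested tuples: ("var", name), ("not", f), ("and", f, g), ("or", f, g).
def evaluate(formula, assignment):
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    raise ValueError(op)

def variables(formula):
    if formula[0] == "var":
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def entails(premise, conclusion):
    """Semantic entailment: the conclusion is true whenever the premise is true."""
    names = sorted(variables(premise) | variables(conclusion))
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if evaluate(premise, assignment) and not evaluate(conclusion, assignment):
            return False
    return True

P, Q = ("var", "P"), ("var", "Q")
conjunction = ("and", P, Q)
print(entails(conjunction, P))  # True: P ∧ Q entails P
print(entails(P, conjunction))  # False: P alone does not entail P ∧ Q
```

Enumerating assignments works for propositional logic because each formula involves only finitely many variables; first-order entailment cannot be decided this way.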
A system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly.
Systems of logic
Systems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic in the Western world, but modern developments in this field have led to a vast proliferation of logical systems. One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics.
Aristotelian
Aristotelian logic encompasses a great variety of topics. They include metaphysical theses about ontological categories and problems of scientific explanation. But in a more narrow sense, it is identical to term logic or syllogistics. A syllogism is a form of argument involving three propositions: two premises and a conclusion. Each proposition has three essential parts: a subject, a predicate, and a copula connecting the subject to the predicate. For example, the proposition "Socrates is wise" is made up of the subject "Socrates", the predicate "wise", and the copula "is". The subject and the predicate are the terms of the proposition. Aristotelian logic does not contain complex propositions made up of simple propositions. It differs in this aspect from propositional logic, in which any two propositions can be linked using a logical connective like "and" to form a new complex proposition.
In Aristotelian logic, the subject can be universal, particular, indefinite, or singular. For example, the term "all humans" is a universal subject in the proposition "all humans are mortal". A similar proposition could be formed by replacing it with the particular term "some humans", the indefinite term "a human", or the singular term "Socrates".
Aristotelian logic only includes predicates for simple properties of entities. But it lacks predicates corresponding to relations between entities. The predicate can be linked to the subject in two ways: either by affirming it or by denying it. For example, the proposition "Socrates is not a cat" involves the denial of the predicate "cat" to the subject "Socrates". Using combinations of subjects and predicates, a great variety of propositions and syllogisms can be formed. Syllogisms are characterized by the fact that the premises are linked to each other and to the conclusion by sharing one predicate in each case. Thus, these three propositions contain three predicates, referred to as major term, minor term, and middle term. The central aspect of Aristotelian logic involves classifying all possible syllogisms into valid and invalid arguments according to how the propositions are formed. For example, the syllogism "all men are mortal; Socrates is a man; therefore Socrates is mortal" is valid. The syllogism "all cats are mortal; Socrates is mortal; therefore Socrates is a cat", on the other hand, is invalid.
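The contrast between the valid and the invalid syllogism above can be illustrated computationally by modeling the predicates as sets and searching small models for a countermodel. The helper names below are illustrative, not from the text; finding a countermodel demonstrates invalidity, while an exhaustive search over a small domain only suggests validity:

```python
from itertools import chain, combinations

def subsets(domain):
    return chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))

def is_valid(premises, conclusion, domain=(0, 1)):
    """Return False if some small model makes the premises true and the
    conclusion false (a countermodel search, not a general proof)."""
    for cat in map(set, subsets(domain)):
        for mortal in map(set, subsets(domain)):
            for socrates in domain:
                if all(p(cat, mortal, socrates) for p in premises) \
                        and not conclusion(cat, mortal, socrates):
                    return False
    return True

# "All cats are mortal; Socrates is mortal; therefore Socrates is a cat" -- invalid.
invalid = is_valid(
    premises=[lambda c, m, s: c <= m, lambda c, m, s: s in m],
    conclusion=lambda c, m, s: s in c,
)
# "All cats are mortal; Socrates is a cat; therefore Socrates is mortal" -- valid.
valid = is_valid(
    premises=[lambda c, m, s: c <= m, lambda c, m, s: s in c],
    conclusion=lambda c, m, s: s in m,
)
print(invalid, valid)  # False True
```

A countermodel for the first pattern is, for instance, a world with one mortal individual who is not a cat.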
Classical
Classical logic is distinct from traditional or Aristotelian logic. It encompasses propositional logic and first-order logic. It is "classical" in the sense that it is based on basic logical intuitions shared by most logicians. These intuitions include the law of excluded middle, the double negation elimination, the principle of explosion, and the bivalence of truth. It was originally developed to analyze mathematical arguments and was only later applied to other fields as well. Because of this focus on mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future. Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics.
Propositional logic
Propositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions P and Q as the complex formula P ∧ Q. Unlike predicate logic, where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic components. Thus, propositional logic can only represent logical relationships that arise from the way complex propositions are built from simpler ones. It cannot represent inferences that result from the inner structure of a proposition.
First-order logic
First-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like "some" and "all". For example, to express the proposition "this raven is black", one may use the predicate B for the property "black" and the singular term r referring to the raven to form the expression B(r). To express that some objects are black, the existential quantifier ∃ is combined with the variable x to form the proposition ∃x B(x). First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer ∃x B(x) from B(r).
Extended
Extended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology.
Modal logic
Modal logic is an extension of classical logic. In its original form, sometimes called "alethic modal logic", it introduces two new symbols: ◊ expresses that something is possible while □ expresses that something is necessary. For example, if the formula B(s) stands for the sentence "Socrates is a banker" then the formula ◊B(s) articulates the sentence "It is possible that Socrates is a banker". To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that ◊A follows from □A. Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that □A is equivalent to ¬◊¬A.
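One standard way to interpret these operators, not spelled out in the text above, is possible-worlds (Kripke) semantics: □p is true at a world if p holds at every world accessible from it, and ◊p if p holds at some accessible world. A minimal sketch with a hypothetical two-world model:

```python
# A toy Kripke model: worlds, an accessibility relation, and a valuation
# assigning to each world the set of atoms true there.
worlds = {"w1", "w2"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}  # every world sees at least one world
valuation = {"w1": {"p"}, "w2": {"p"}}       # the atom p holds at both worlds

def necessary(atom, world):
    """Box: the atom holds at every world accessible from this one."""
    return all(atom in valuation[w] for w in access[world])

def possible(atom, world):
    """Diamond: the atom holds at some world accessible from this one."""
    return any(atom in valuation[w] for w in access[world])

# In a model where every world can access at least one world (a "serial"
# relation), necessity implies possibility, matching the rule that
# the diamond formula follows from the box formula.
for w in sorted(worlds):
    if necessary("p", w):
        print(w, possible("p", w))
```

Different constraints on the accessibility relation validate different modal principles, which is how the various systems of modal logic are distinguished semantically.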
Other forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it. The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time. In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case.
Higher order logic
Higher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. Quantifiers correspond to terms like "all" or "some". In classical first-order logic, quantifiers are only applied to individuals. The formula ∃x(Apple(x) ∧ Sweet(x)) (some apples are sweet) is an example of the existential quantifier ∃ applied to the individual variable x. In higher-order logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula ∃Q(Q(mary) ∧ Q(john)). In this case, the existential quantifier ∃ is applied to the predicate variable Q. The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used.
Deviant
Deviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue.
Intuitionistic logic is a restricted version of classical logic. It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that A follows from ¬¬A. This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form A ∨ ¬A is true. These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example of a mathematical object in order to prove its existence.
Multi-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate. These logics have been applied in the field of linguistics. Fuzzy logics are multi-valued logics that have an infinite number of "degrees of truth", represented by a real number between 0 and 1.
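Kleene's strong three-valued connectives can be written out directly, with the third value marking an indeterminate statement. A minimal sketch:

```python
# Strong Kleene three-valued connectives; "U" is the indeterminate value.
T, F, U = "T", "F", "U"

def k_not(a):
    return {T: F, F: T, U: U}[a]

def k_and(a, b):
    # A conjunction is false as soon as one side is false,
    # true only when both sides are true, and indeterminate otherwise.
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return U

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))  # defined by De Morgan duality

# The law of excluded middle can fail: "p or not p" is indeterminate
# when p itself is indeterminate.
print(k_or(U, k_not(U)))  # U
```

Fuzzy logics generalize this idea by replacing the three values with a continuum, typically computing conjunction as the minimum and disjunction as the maximum of two degrees of truth.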
Paraconsistent logics are logical systems that can deal with contradictions. They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction. They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel.
Informal
Informal logic is usually carried out in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments.
The pragmatic or dialogical approach to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion. As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments. A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion. This is achieved by making arguments: arguments are the moves of the game. They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion. Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules. These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations.
The epistemic approach to informal logic, on the other hand, focuses on the epistemic role of arguments. It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified. Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion. For example, the fallacy of begging the question is a fallacy because it fails to provide independent justification for its conclusion, even though it is deductively valid. In this sense, logical normativity consists in epistemic success or rationality. The Bayesian approach is one example of an epistemic approach. Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called credence. Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. how certain the agent is that the proposition is true. On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new incoming information. Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.
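The principle of conditionalization can be illustrated with Bayes' theorem, reusing the earlier breadcrumbs example; all the numbers below are hypothetical, chosen only to show how a credence changes:

```python
def conditionalize(prior, likelihood, evidence_prob):
    """Bayes' theorem: the new credence in a hypothesis H after learning
    evidence E is P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: prior credence 0.3 that the house-mate had a midnight
# snack; breadcrumbs are very likely (0.9) given a snack and unlikely (0.1)
# otherwise, so by the law of total probability:
# P(E) = P(E|H)P(H) + P(E|not H)P(not H) = 0.9*0.3 + 0.1*0.7 = 0.34.
prior = 0.3
evidence_prob = 0.9 * prior + 0.1 * (1 - prior)
posterior = conditionalize(prior, 0.9, evidence_prob)
print(round(posterior, 3))  # 0.794: the credence rises after the evidence
```

On the Bayesian view, reasoning is rational precisely when credence changes follow this rule; updating by any other factor would violate the laws of probability.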
Areas of research
Logic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems.
Philosophy of logic and philosophical logic
Philosophy of logic is the philosophical discipline studying the scope and nature of logic. It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. It is also concerned with how to classify logical systems and considers the ontological commitments they incur. Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. This application usually happens in the form of extended or deviant logical systems.
Metalogic
Metalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics.
Mathematical logic
The term "mathematical logic" is sometimes used as a synonym of "formal logic". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts to use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopher-logicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems.
Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms.
Computability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of its main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue.
Computational logic
Computational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention. Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic. Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits. This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output.
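Shannon's correspondence between Boolean logic and circuits can be sketched in software by composing gate functions; the half adder below is a standard illustrative circuit, not one discussed in the text:

```python
# Boolean logic gates as functions on the bits 0 and 1, mirroring how
# electronic circuits compose gates (a schematic sketch, not real hardware).
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # exclusive or, built from the three basic gates
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Adds two one-bit inputs, returning (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

In hardware, the same composition is realized by wiring gate outputs to gate inputs, with truth values carried by voltage levels as described above.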
Formal semantics of natural language
Formal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase "walk and sing" depends on the meanings of the individual expressions "walk" and "sing". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expressions in relation to the elements in this model. For example, the term "walk" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language.
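The model-theoretic treatment of "walk and sing" can be sketched by letting each predicate denote a set of individuals in a toy model; the individuals "ann" and "bob" are hypothetical:

```python
# A tiny model: each predicate denotes the set of individuals satisfying it.
model = {"walk": {"ann", "bob"}, "sing": {"ann"}}

def denotation(predicate):
    return model[predicate]

def walk_and_sing():
    # Compositionality: the denotation of the conjoined verb phrase is built
    # from the denotations of its parts -- here, by set intersection.
    return denotation("walk") & denotation("sing")

print("ann" in walk_and_sing())  # True: ann both walks and sings
print("bob" in walk_and_sing())  # False: bob walks but does not sing
```

A sentence such as "Ann walks and sings" is then true in the model exactly when its subject's referent belongs to the set denoted by the verb phrase.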
Epistemology of logic
The epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true. This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false. The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. In this regard, it is often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths. A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary.
Some theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula P ∧ (Q ∨ R) is equivalent to (P ∧ Q) ∨ (P ∧ R). This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic.
History
Logic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics. He was responsible for the introduction of the hypothetical syllogism and temporal modal logic. Further innovations include inductive logic as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. It has now been superseded by later work, though many of its key insights are still present in modern systems of logic.
Ibn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. It influenced Western medieval writers such as Albertus Magnus and William of Ockham. Ibn Sina wrote on the hypothetical syllogism and on the propositional calculus. He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic. He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. Fakhr al-Din al-Razi was another influential Muslim logician. He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill.
During the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic. Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars than had been preserved in Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was released. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions.
In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics.
In India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation. In Nyaya, inference is understood as a source of knowledge (pramāṇa). It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object. A similar emphasis on the relation to epistemology is also found in Buddhist and Jain schools of logic, where inference is used to expand the knowledge gained through other sources. Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number.
The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic. Many see Gottlob Frege's Begriffsschrift as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work Principia Mathematica. Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language. Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic. Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic.
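The use of formal language, quantifiers, and relational predicates mentioned above can be sketched by formalizing a classical inference in first-order logic (the predicate names and the constant are illustrative, not drawn from the source):

```latex
% A categorical statement rendered with a quantifier, plus a simple inference:
% universal instantiation followed by modus ponens
\forall x\,\bigl(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)\bigr) \\
\mathrm{Human}(\mathrm{socrates}) \\
\therefore\ \mathrm{Mortal}(\mathrm{socrates})
```

Where the traditional syllogism treats "All humans are mortal" as a relation between two terms, the first-order rendering analyzes it into a quantified conditional, which is what allows the same machinery to formalize mathematics.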
Epistemic theories of truth
In philosophy and epistemology, epistemic theories of truth are attempts to analyze the notion of truth in terms of epistemic notions such as knowledge, belief, acceptance, verification, justification, and perspective.
A variety of such conceptions can be classified into verificationist theories, perspectivist or relativist theories, and pragmatic theories.
Verificationism is based on verifying propositions. The distinctive claim of verificationism is that the result of such verifications is, by definition, truth. That is, truth is reducible to this process of verification.
According to perspectivism and relativism, a proposition is only true relative to a particular perspective. Roughly, a proposition is true relative to a perspective if and only if it is accepted, endorsed, or legitimated by that perspective.
Many authors writing on the topic of the notion of truth advocate or endorse combinations of the above positions. Each of these epistemic conceptions of truth can be subjected to various criticisms. Some criticisms apply across the board, while others are more specific.
Verificationist views
The two main kinds of verification philosophies are positivism and a-priorism.
In positivism, a proposition is meaningful, and thus capable of being true or false, if and only if it is verifiable by sensory experiences.
A-priorism, often used in the domains of logic and mathematics, holds a proposition true if and only if a priori reasoning can verify it. In the related certainty theory, associated with Descartes and Spinoza, a proposition is true if and only if it is known with certainty.
Logical positivism attempts to combine positivism with a version of a-priorism.
Another theory of truth that is related to a-priorism is the concept-containment theory of truth. The concept-containment theory of truth is the view that a proposition is true if and only if the concept of the predicate of the proposition is "contained in" the concept of the subject. For example, the proposition that bachelors are unmarried men is true, in this view, because the concept of the predicate (unmarried men) is contained in the concept of the subject (bachelor). A contemporary reading of the concept-containment theory of truth is to say that every true proposition is an analytically true proposition.
Perspectivist views
According to perspectivism and relativism, a proposition is only true relative to a particular perspective. The relativism of the Sophists and Nietzsche's perspectivism are among the most famous examples of such views. There are three main versions of perspectivism, along with some notable subdivisions:
Individualist perspectivism
According to individualist perspectivism (also individual perspectivism) perspectives are the points of view of particular individual persons. So, a proposition is true for a person if and only if it is accepted or believed by that person (i.e., "true for me").
Collectivist perspectivism
In collectivist perspectivism, perspectives are understood as collective (culture-dependent).
There are, roughly, three versions of collectivism:
Consensus
A perspective is, roughly, the broad opinions, and perhaps norms and practices, of a community of people, perhaps all having some special feature in common. So, a proposition is true (for a community C) if, and only if, there is a consensus amongst the members of C for believing it.
Power
In the power-oriented view, a perspective is a community enforced by power, authority, military might, privilege, etc. So, a proposition is true if it "makes us powerful" or is "produced by power", thus the slogan "truth is power".
This view of truth as a political stake may be loosely associated with Martin Heidegger or with Michel Foucault's specific analysis of historical and political discourse, as well as with some social constructivists.
However, the Nazi mystical conception of a communitarian "blood community" radically differs from Heidegger's and Foucault's criticisms of the notion of the individual or collective subject.
Marxist
Truth-generating perspectives are collective and opposed to, or engaged in a struggle against, power and authority; for example, the collective perspective of the "proletariat". So, a proposition is true if it is the "product of political struggle" for the "emancipation of the workers" (Theodor Adorno). This view is again associated with some social constructivists (e.g., feminist epistemologists).
Transcendental perspectivism
On this conception, a truth-conferring perspective is something transcendental, and outside immediate human reach. The idea is that there is a transcendental or ideal epistemic perspective and the truth is, roughly, what is accepted or recognized-as-true from that ideal perspective. There are two subvarieties of transcendental perspectivism:
Coherentism
The ideal epistemic perspective is the set of "maximally coherent and consistent propositions". A proposition is true if and only if it is a member of this maximally coherent and consistent set of propositions (associated with several German and British 19th century idealists).
Theological perspectivism
Theological perspectivism is the idea that a proposition is true if and only if it agrees with the thoughts of God.
Pragmatic views
Although the pragmatic theory of truth is not strictly classifiable as an epistemic theory of truth, it does bear a relationship to theories of truth that are based on concepts of inquiry and knowledge.
The ideal epistemic perspective is that of "completed science", which will appear in the (temporal) "limit of scientific inquiry". A proposition is true if and only if, in the long run, it will come to be accepted by a group of inquirers using scientific rational inquiry. This can also be modalized: a proposition is true if and only if, in the long run, it would come to be accepted by a group of inquirers, if they were to use scientific rational inquiry. This view is thus a modification of the consensus view: the consensus needs to satisfy certain constraints in order for the accepted propositions to be true. For example, the methods used must be those of scientific inquiry (criticism, observation, reproducibility, etc.). This "modification" of the consensus view is an appeal to the correspondence theory of truth, which is opposed to the consensus theory of truth.
Long-run scientific pragmatism was defended by Charles Sanders Peirce. A variant of this viewpoint is associated with Jürgen Habermas, though he later abandoned it.
See also
Confirmation holism
Criteria of truth
Related topics
Pragmaticism
Pragmatic maxim
Scientific method
Testability
Pedagogy
Pedagogy, most commonly understood as the approach to teaching, is the theory and practice of learning, and how this process influences, and is influenced by, the social, political, and psychological development of learners. Pedagogy, taken as an academic discipline, is the study of how knowledge and skills are imparted in an educational context, and it considers the interactions that take place during learning. Both the theory and practice of pedagogy vary greatly as they reflect different social, political, and cultural contexts.
Pedagogy is often described as the act of teaching. The pedagogy adopted by teachers shapes their actions, judgments, and teaching strategies by taking into consideration theories of learning, understandings of students and their needs, and the backgrounds and interests of individual students. Its aims may range from furthering liberal education (the general development of human potential) to the narrower specifics of vocational education (the imparting and acquisition of specific skills).
Instructive strategies are governed by the pupil's background knowledge and experience, situation and environment, as well as learning goals set by the student and teacher. One example would be the Socratic method.
Definition
The meaning of the term "pedagogy" is often contested and a great variety of definitions has been suggested. The most common approach is to define it as the study or science of teaching methods. In this sense, it is the methodology of education. As a methodology, it investigates the ways and practices that can be used to realize the aims of education. The main aim is often identified with the transmission of knowledge. Other aims include fostering skills and character traits. They include helping the student develop their intellectual and social abilities as well as psychomotor and affective learning, which are about developing practical skills and adequate emotional dispositions, respectively.
However, not everyone agrees with this characterization of pedagogy and some see it less as a science and more as an art or a craft. This characterization puts more emphasis on the practical aspect of pedagogy, which may involve various forms of "tacit knowledge that is hard to put into words". This approach is often based on the idea that the most central aspects of teaching are only acquired by practice and cannot be easily codified through scientific inquiry. In this regard, pedagogy is concerned with "observing and refining one's skill as a teacher". A more inclusive definition combines these two characterizations and sees pedagogy both as the practice of teaching and the discourse and study of teaching methods. Some theorists give an even wider definition by including considerations such as "the development of health and bodily fitness, social and moral welfare, ethics and aesthetics". Due to this variety of meanings, it is sometimes suggested that pedagogy is a "catch-all term" associated with various issues of teaching and learning. In this sense, it lacks a precise definition.
According to Patricia Murphy, a detailed reflection on the meaning of the term "pedagogy" is important nonetheless since different theorists often use it in very different ways. In some cases, non-trivial assumptions about the nature of learning are even included in its definition. Pedagogy is often specifically understood in relation to school education. But in a wider sense, it includes all forms of education, both inside and outside schools. In this wide sense, it is concerned with the process of teaching taking place between two parties: teachers and learners. The teacher's goal is to bring about certain experiences in the learner to foster their understanding of the subject matter to be taught. Pedagogy is interested in the forms and methods used to convey this understanding.
Pedagogy is closely related to didactics but there are some differences. Usually, didactics is seen as the more limited term that refers mainly to the teacher's role and activities, i.e., how their behavior is most beneficial to the process of education. This is one central aspect of pedagogy besides other aspects that consider the learner's perspective as well. In this wider sense, pedagogy focuses on "any conscious activity by one person designed to enhance learning in another".
The word pedagogy is a derivative of the Greek paidagōgia, from paidagōgos, itself a synthesis of ágō, "I lead", and país (genitive paidos), "boy, child": hence, "attendance on boys, to lead a child". The related word pedagogue has had a negative connotation of pedantry, dating from at least the 1650s; a related expression is educational theorist. The term "pedagogy" is also found in English discourse, but it is more broadly discussed in other European languages, such as French and German.
History
Western
In the Western world, pedagogy is associated with the Greek tradition of philosophical dialogue, particularly the Socratic method of inquiry. A more general account of its development holds that it emerged from the active concept of humanity as distinct from a fatalistic one and that history and human destiny are results of human actions. This idea germinated in ancient Greece and was further developed during the Renaissance, the Reformation, and the Age of Enlightenment.
Socrates
Socrates (470 – 399 BCE) employed the Socratic method while engaging with a student or peer. This style does not impart knowledge, but rather tries to strengthen the logic of the student by revealing whether the conclusions drawn from the student's statements are erroneous or supported. The instructor in this learning environment recognizes the learners' need to think for themselves to facilitate their ability to think about problems and issues. It was first described by Plato in the Socratic Dialogues.
Plato
Plato (428/427 or 424/423 – 348/347 BCE) describes a system of education in The Republic (375 BCE) in which individual and family rights are sacrificed to the State. He describes three castes: one to learn a trade; one to learn literary and aesthetic ideas; and one to be trained in literary, aesthetic, scientific, and philosophical ideas. Plato saw education as a fulfillment of the soul, and by fulfilling the soul the body subsequently benefited. Plato viewed physical education for all as a necessity to a stable society.
Aristotle
Aristotle (384–322 BCE) composed a treatise, On Education, which was subsequently lost. However, he renounced Plato's view in subsequent works, advocating for a common education mandated to all citizens by the State. A small minority of people residing within Greek city-states at this time were considered citizens, and thus Aristotle still limited education to a minority within Greece. Aristotle advocated that physical education should precede intellectual studies.
Quintilian
Marcus Fabius Quintilianus (35 – 100 CE) published his pedagogy in Institutio Oratoria (95 CE). He describes education as a gradual affair, and places certain responsibilities on the teacher. He advocates for rhetorical, grammatical, scientific, and philosophical education.
Tertullian
Quintus Septimius Florens Tertullianus (155 – 240 CE) was a Christian scholar who rejected all pagan education, insisting this was "a road to the false and arrogant wisdom of ancient philosophers".
Jerome
Saint Jerome (347 – 30 September 420 CE), or Saint Hieronymus, was a Christian scholar who detailed his pedagogy of girls in numerous letters throughout his life. He did not believe the body to be in need of training, and thus advocated for fasting and mortification to subdue the body. He only recommends the Bible as reading material, with limited exposure, and cautions against musical instruments. He advocates against letting girls interact with society, and against having "affections for one of her companions than for others." He does recommend teaching the alphabet by ivory blocks instead of memorization so "She will thus learn by playing." He is an advocate of positive reinforcement, stating "Do not chide her for the difficulty she may have in learning. On the contrary, encourage her by commendation..."
Jean Gerson
Jean Charlier de Gerson (13 December 1363 – 12 July 1429), the Chancellor of the University of Paris, wrote in De parvulis ad Christum trahendis "Little children are more easily managed by caresses than fear," supporting a more gentle approach than his Christian predecessors. He also states "Above all else, let the teacher make an effort to be a father to his pupils." He is considered a precursor of Fenelon.
John Amos Comenius
John Amos Comenius (28 March 1592 – 15 November 1670) is considered the father of modern education.
Johann Pestalozzi
Johann Heinrich Pestalozzi (January 12, 1746 – February 17, 1827), founder of several educational institutions in both German- and French-speaking regions of Switzerland, wrote many works explaining his revolutionary modern principles of education. His motto was "Learning by head, hand and heart".
Johann Herbart
The educational philosophy and pedagogy of Johann Friedrich Herbart (4 May 1776 – 14 August 1841) highlighted the correlation between personal development and the resulting benefits to society. In other words, Herbart proposed that humans become fulfilled once they establish themselves as productive citizens. Herbartianism refers to the movement underpinned by Herbart's theoretical perspectives. Referring to the teaching process, Herbart suggested five steps as crucial components. Specifically, these five steps include: preparation, presentation, association, generalization, and application. Herbart suggests that pedagogy relates to having assumptions as an educator and a specific set of abilities with a deliberate end goal in mind.
John Dewey
The pedagogy of John Dewey (20 October 1859 – 1 June 1952) is presented in several works, including My Pedagogic Creed (1897), The School and Society (1900), The Child and the Curriculum (1902), Democracy and Education (1916), Schools of To-morrow (1915) with Evelyn Dewey, and Experience and Education (1938). In his eyes, the purpose of education should not revolve around the acquisition of a pre-determined set of skills, but rather the realization of one's full potential and the ability to use those skills for the greater good (My Pedagogic Creed, Dewey, 1897). Dewey advocated for an educational structure that strikes a balance between delivering knowledge while also taking into account the interests and experiences of the student (The Child and the Curriculum, Dewey, 1902). Dewey not only re-imagined the way that the learning process should take place but also the role that the teacher should play within that process. He envisioned a divergence from the mastery of a pre-selected set of skills to the cultivation of autonomy and critical-thinking within the teacher and student alike.
Eastern
Confucius
Confucius (551–479 BCE) stated that authority has the responsibility to provide oral and written instruction to the people under the rule, and "should do them good in every possible way." One of the deepest teachings of Confucius may have been the superiority of personal exemplification over explicit rules of behavior. His moral teachings emphasized self-cultivation, emulation of moral exemplars, and the attainment of skilled judgement rather than knowledge of rules. Other relevant practices in the Confucian teaching tradition include the Rite and its notion of body-knowledge as well as Confucian understanding of the self, one that has a broader conceptualization than the Western individual self.
Pedagogical considerations
Teaching method
Hidden curriculum
A hidden curriculum refers to the side effects of an education, "[lessons] which are learned but not openly intended", such as the transmission of norms, values, and beliefs conveyed in the classroom and the social environment.
Learning space
Learning space or learning setting refers to a physical setting for a learning environment, a place in which teaching and learning occur. The term is commonly used as a more definitive alternative to "classroom", but it may also refer to an indoor or outdoor location, either actual or virtual. Learning spaces are highly diverse in use, learning styles, configuration, location, and educational institution. They support a variety of pedagogies, including quiet study, passive or active learning, kinesthetic or physical learning, vocational learning, experiential learning, and others.
Learning theories
Learning theories are conceptual frameworks describing how knowledge is absorbed, processed, and retained during learning. Cognitive, emotional, and environmental influences, as well as prior experience, all play a part in how understanding, or a world view, is acquired or changed and knowledge and skills retained.
Distance learning
Distance education or long-distance learning is the education of students who may not always be physically present at a school. Traditionally, this usually involved correspondence courses wherein the student corresponded with the school via post. Today it typically involves online education; courses in which 51 percent or more of the instruction is delivered remotely are classified as hybrid, blended, or full distance learning. Massive open online courses (MOOCs), offering large-scale interactive participation and open access through the World Wide Web or other network technologies, are recent developments in distance education. A number of other terms (distributed learning, e-learning, online learning, etc.) are used roughly synonymously with distance education.
Teaching resource adaptation
Adapting the teaching resource should suit appropriate teaching and learning environments, national and local cultural norms, and make it accessible to different types of learners. Key adaptations in teaching resource include:
Classroom constraints
Large class size – consider smaller groups or have discussions in pairs;
Time available – shorten or lengthen the duration of activities;
Modifying materials needed – find, make or substitute required materials;
Space requirements – reorganize classroom, use a larger space, move indoors or outdoors.
Cultural familiarity
Change references to names, food and items to make them more familiar;
Substitute local texts or art (folklore, stories, songs, games, artwork and proverbs).
Local relevance
Use the names and processes for local institutions such as courts;
Be sensitive of local behavior norms (e.g. for genders and ages);
Ensure content is sensitive to the degree of rule of law in society (trust in authorities and institutions).
Inclusivity for diverse students
Appropriate reading level(s) of texts for student use;
Activities for different learning styles;
Accommodation for students with special educational needs;
Sensitivity to cultural, ethnic and linguistic diversity;
Sensitivity to students' socioeconomic status.
Pedagogical approaches
Evidence-based
Dialogic learning
Dialogic learning is learning that takes place through dialogue. It is typically the result of egalitarian dialogue; in other words, the consequence of a dialogue in which different people provide arguments based on validity claims and not on power claims.
Student-centered learning
Student-centered learning, also known as learner-centered education, broadly encompasses methods of teaching that shift the focus of instruction from the teacher to the student. In original usage, student-centered learning aims to develop learner autonomy and independence by putting responsibility for the learning path in the hands of students. Student-centered instruction focuses on skills and practices that enable lifelong learning and independent problem-solving.
Critical pedagogy
Critical pedagogy applies critical theory to pedagogy and asserts that educational practices are contested and shaped by history, that schools are not politically neutral spaces, and that teaching is political. Decisions regarding the curriculum, disciplinary practices, student testing, textbook selection, the language used by the teacher, and more can empower or disempower students. It asserts that educational practices favor some students over others and some practices harm all students. It also asserts that educational practices often favor some voices and perspectives while marginalizing or ignoring others.
Academic degrees
The academic degree Ped. D., Doctor of Pedagogy, is awarded honorarily by some US universities to distinguished teachers (in the US and UK, earned degrees within the instructive field are classified as an Ed.D., Doctor of Education, or a Ph.D., Doctor of Philosophy). The term is also used to denote an emphasis in education as a specialty in a field (for instance, a Doctor of Music degree in piano pedagogy).
Pedagogues around the world
The education of pedagogues, and their role in society, varies greatly from culture to culture.
Belgium
Important pedagogues in Belgium are Jan Masschelein and Maarten Simons (Catholic University of Leuven). According to these scholars, schools nowadays are often dismissed as outdated or ineffective. Deschoolers even argue that schools rest on the false premise that schools are necessary for learning, when people in fact learn faster or better outside the classroom. Others criticize the fact that some teachers stand before a classroom with only six weeks of teacher education. Against this background, Masschelein and Simons propose to look at school from a different point of view. Their educational morphology approaches the school as a particular scholastic 'form of gathering'. What the authors mean by this is the following: school is a particular time-space-matter arrangement, which includes concrete architectures, technologies, practices, and figures. This arrangement "deals in a specific way with the new generation, allows for a particular relation to the world, and for a particular experience of potentiality and of commonality (of making things public)".
Masschelein and Simons' most famous work is the book "Looking after school: a critical analysis of personalisation in Education". It takes a critical look at the main discourse of today's education, in which education is seen through a socio-economic lens: education is aimed at mobilising talents and competencies (p. 23). This is seen in multiple texts from governing bodies in Belgium and Europe. One of the most significant examples is quoted on page 23: "Education and training can only contribute to growth and job-creation if learning is focused on the knowledge, skills and competences to be acquired by students (learning outcomes) through the learning process, rather than on completing a specific stage or on time spent in school." (European Commission, 2012, p. 7) This is, according to Masschelein and Simons, a plea for learning outcomes and demonstrates a vision of education in which the institution is no longer the point of departure. The main ambition in this discourse is the efficient and effective realisation of learning outcomes for all. Things like the place and time of learning and didactic and pedagogic support are means to an end: the acquisition of preplanned learning outcomes, which are in turn a direct input for the knowledge economy. Masschelein and Simons' main critique here is that the main concern is no longer the educational institution; rather, the focus lies on the learning processes and mainly on the learning outcomes of the individual learner.
Brazil
In Brazil, a pedagogue is a multidisciplinary educator. Undergraduate education in Pedagogy qualifies students to become school administrators or coordinators at all educational levels, and also to become multidisciplinary teachers, such as pre-school, elementary and special teachers.
Denmark
In Scandinavia, a pedagogue (pædagog) is broadly speaking a practitioner of pedagogy, but the term is primarily reserved for individuals who occupy jobs in pre-school education (such as kindergartens and nurseries). A pedagogue can occupy various kinds of jobs, within this restrictive definition, e.g. in retirement homes, prisons, orphanages, and human resource management. When working with at-risk families or youths they are referred to as social pedagogues (socialpædagog).
The pedagogue's job is usually distinguished from a teacher's by primarily focusing on teaching children life-preparing knowledge such as social or non-curriculum skills, and cultural norms. There is also a very big focus on the care and well-being of the child. Many pedagogical institutions also practice social inclusion. The pedagogue's work also consists of supporting the child in their mental and social development.
In Denmark all pedagogues are educated at a series of national institutes for social educators located in all major cities. The education is a 3.5-year academic course, giving the student the title of a Bachelor in Social Education (Danish: Professionsbachelor som pædagog).
It is also possible to earn a master's degree in pedagogy/educational science from the University of Copenhagen. This BA and MA program has a more theoretical focus compared to the more vocational Bachelor in Social Education.
Hungary
In Hungary, the word pedagogue (pedagógus) is synonymous with teacher (tanár); therefore, teachers of both primary and secondary schools may be referred to as pedagogues, a word that also appears in the names of their lobbyist organizations and labor unions (e.g. Labor Union of Pedagogues, Democratic Labor Union of Pedagogues). However, undergraduate education in Pedagogy does not qualify students to become teachers in primary or secondary schools but enables them to apply to be educational assistants. As of 2013, the six-year training period was reinstated in place of the undergraduate and postgraduate division which characterized the previous practice.
Modern pedagogy
An article from the Kathmandu Post published on 3 June 2018 described the usual first day of school in an academic calendar. Teachers meet their students, each with distinct traits. The diversity of attributes among children or teens exceeds their similarities. Educators have to teach students with different cultural, social, and religious backgrounds. This situation calls for a differentiated strategy in pedagogy, rather than the traditional approach, if teachers are to accomplish goals efficiently.
American author and educator Carol Ann Tomlinson defined Differentiated Instruction as "teachers' efforts in responding to inconsistencies among students in the classroom." Differentiation refers to methods of teaching. She explained that Differentiated Instruction gives learners a variety of alternatives for acquiring information. Primary principles comprising the structure of Differentiated Instruction include formative and ongoing assessment, group collaboration, recognition of students' diverse levels of knowledge, problem-solving, and choice in reading and writing experiences.
Howard Gardner gained prominence in the education sector for his Multiple Intelligences Theory. He named seven of these intelligences in 1983: linguistic, logical-mathematical, visual-spatial, bodily-kinesthetic, musical-rhythmic, intrapersonal, and interpersonal. Critics say the theory is based only on Gardner's intuition rather than on empirical data; another criticism is that the intelligences correspond too closely to personality types. Gardner's theory came from cognitive research and states that these intelligences help people to "know the world, understand themselves, and other people." Said differences dispute an educational system that presumes students can "understand the same materials in the same manner and that a standardized, collective measure is very much impartial towards linguistic approaches in instruction and assessment as well as to some extent logical and quantitative styles."
Educational research
See also
Outline of education
References
Sources
See also
List of important publications in philosophy
List of important publications in anthropology
List of important publications in economics
Further reading
Bruner, J. S. (1960). The Process of Education, Cambridge, Massachusetts: Harvard University Press.
Bruner, J. S. (1971). The Relevance of Education. New York, NY: Norton
Bruner, J. S. (1966). Toward a Theory of Instruction. Cambridge, Massachusetts: Belknap Press.
John Dewey, Experience and Education, 1938
Paulo Freire, Pedagogy of the Oppressed, 1968 (English translation: 1970)
Ivan Illich, Deschooling Society, 1971
David L. Kirp, The Sandbox Investment, 2007
Montessori, M. (1910). Antropologia Pedagogica.
Montessori, M. (1921). Manuale di Pedagogia Scientifica.
Montessori, M. (1934). Psico Aritmética.
Montessori, M. (1934). Psico Geométria.
Piaget, J. (1926). The Language and Thought of the Child. London: Routledge & Kegan Paul.
Karl Rosenkranz (1848). Pedagogics as a System. Translated 1872 by Anna C. Brackett, R.P. Studley Company
Karl Rosenkranz (1899). The philosophy of education. D. Appleton and Co.
Friedrich Schiller, On the Aesthetic Education of Man, 1794
Vygotsky, L. (1962). Thought and Language. Cambridge, Massachusetts: MIT Press.
Didactics
Educational psychology
Teaching
Aporia
In philosophy, an aporia is a conundrum or state of puzzlement. In rhetoric, it is a declaration of doubt, made for rhetorical purpose and often feigned.
Philosophy
In philosophy, an aporia is a philosophical puzzle or a seemingly irresoluble impasse in an inquiry, often arising as a result of equally plausible yet inconsistent premises, i.e., a paradox. It can also denote the state of being perplexed, or at a loss, at such a puzzle or impasse. The notion of an aporia is principally found in Greek philosophy, but it also plays a role in post-structuralist philosophy, as in the writings of Jacques Derrida and Luce Irigaray, and it has also served as an instrument of investigation in analytic philosophy.
Plato's early dialogues are often called his 'aporetic' dialogues because they typically end in aporia. In such a dialogue, Socrates questions his interlocutor about the nature or definition of a concept, e.g., virtue or courage. Socrates then, through elenctic testing, shows his interlocutor that his answer is unsatisfactory. After a number of such failed attempts, the interlocutor admits he is in aporia about the examined concept, concluding that he does not know what it is. In Plato's Meno (84a-c), Socrates describes the purgative effect of reducing someone to aporia: it shows someone who merely thought he knew something that he does not in fact know it and instills in him a desire to investigate it.
In Aristotle's Metaphysics, aporia plays a role in his method of inquiry. In contrast to a rationalist inquiry that begins from a priori principles, or an empiricist inquiry that begins from a tabula rasa, he begins the Metaphysics by surveying the various aporiai that exist, drawing in particular on what puzzled his predecessors: "with a view to the science we are seeking [i.e., metaphysics], it is necessary that we should first review the things about which we need, from the outset, to be puzzled" (995a24). Book Beta of the Metaphysics is a list of the aporiai that preoccupy the rest of the work.
In Pyrrhonism, aporia is intentionally induced as a means of producing ataraxia.
Contemporary academic studies of the term further characterize its usage in philosophical discourses. In "Aporetics: Rational Deliberation in the Face of Inconsistency" (2009), Nicholas Rescher is concerned with the methods in which an aporia, or "apory", is intellectually processed and resolved. In the Preface, Rescher identifies the work as an attempt to "synthesize and systematize an aporetic procedure for dealing with information overload (of 'cognitive dissonance', as it is sometimes called)" (ix). The text is also useful in that it provides a more precise (although specialized) definition of the concept: "any cognitive situation in which the threat of inconsistency confronts us" (1). Rescher further introduces his specific study of the apory by qualifying the term as "a group of individually plausible but collectively incompatible theses", a designation he illustrates with the following syllogism or "cluster of contentions":
The aporia, or "apory" of this syllogism lies in the fact that, while each of these assertions is individually conceivable, together they are inconsistent or impossible (i.e. they constitute a paradox). Rescher's study is indicative of the continuing presence of scholarly examinations of the concept of aporia and, furthermore, of the continuing attempts of scholars to translate the word, to describe its modern meaning.
Rhetoric
Aporia is also a rhetorical device whereby the speaker expresses a doubt—often feigned—about their position or asks the audience rhetorically how the speaker should proceed. One aim of aporia may be to discredit the speaker's opponent. Aporia is also called dubitatio.
See also
Antinomy
Cognition
Dubitative mood
Figure of speech
Intuition
Rhetorical question
Thought experiment
Zeno's paradoxes
Gordian knot
References
Vasilis Politis (2006). "Aporia and Searching in the Early Plato" in L. Judson and V. Karasmanis eds. Remembering Socrates. Oxford University Press.
Concepts in ancient Greek epistemology
Concepts in ancient Greek philosophy of mind
Figures of speech
Mental states
Pyrrhonism
Rhetoric
Theories in ancient Greek philosophy
Applied ontology
Applied ontology is the application of ontology for practical purposes. This can involve employing ontological methods or resources in specific domains, such as management, relationships, biomedicine, information science, or geography. Alternatively, applied ontology can aim more generally at developing improved methodologies for recording and organizing knowledge.
Much work in applied ontology is carried out within the framework of the Semantic Web. Ontologies can structure data and add useful semantic content to it, such as definitions of classes and relations between entities, including subclass relations. The semantic web makes use of languages designed to allow for ontological content, including the Resource Description Framework (RDF) and the Web Ontology Language (OWL).
Applying ontology to relationships
The challenge of applying ontology is ontology's emphasis on a world view orthogonal to epistemology. The emphasis is on being rather than on doing (as implied by "applied") or on knowing. This is explored by philosophers and pragmatists like Fernando Flores and Martin Heidegger.
One way in which that emphasis plays out is in the concept of "speech acts": acts of promising, ordering, apologizing, requesting, inviting or sharing. The study of these acts from an ontological perspective is one of the driving forces behind relationship-oriented applied ontology. This can involve concepts championed by ordinary language philosophers like Ludwig Wittgenstein.
Applying ontology can also involve looking at the relationship between a person's world and that person's actions. The context or clearing is highly influenced by the being of the subject or the field of being itself. This view is highly influenced by the philosophy of phenomenology, the works of Heidegger, and others.
Ontological perspectives
Social scientists adopt a number of approaches to ontology. Some of these are:
Realism - the idea that facts are "out there" just waiting to be discovered;
Empiricism - the idea that we can observe the world and evaluate those observations in relation to facts;
Positivism - which focuses on the observations themselves, attending more to claims about facts than to facts themselves;
Grounded theory - which seeks to derive theories from facts;
Engaged theory - which moves across different levels of interpretation, linking different empirical questions to ontological understandings;
Postmodernism - which regards facts as fluid and elusive, and recommends focusing only on observational claims.
Data ontology
Ontologies can be used for structuring data in a machine-readable manner. In this context, an ontology is a controlled vocabulary of classes that can be placed in hierarchical relations with each other. These classes can represent entities in the real world which data is about. Data can then be linked to the formal structure of these ontologies to aid dataset interoperability, along with retrieval and discovery of information. The classes in an ontology can be limited to a relatively narrow domain (such as an ontology of occupations), or expansively cover all of reality with highly general classes (such as in Basic Formal Ontology).
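The retrieval benefit of a class hierarchy can be sketched in a few lines of Python. The classes and records below are invented for illustration, not drawn from any published ontology; real systems would typically encode the hierarchy in RDF/OWL and query it with a reasoner.

```python
# Minimal sketch of how an ontology's subclass hierarchy aids retrieval.
# All class names and records here are hypothetical illustrations.

# Each class maps to its direct parent (None for the root).
SUBCLASS_OF = {
    "Entity": None,
    "Occupation": "Entity",
    "HealthcareOccupation": "Occupation",
    "Nurse": "HealthcareOccupation",
    "Surgeon": "HealthcareOccupation",
    "Teacher": "Occupation",
}

def is_subclass(cls, ancestor):
    """True if `cls` equals `ancestor` or descends from it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF[cls]
    return False

# Data records annotated with ontology classes.
records = [
    {"name": "A", "class": "Nurse"},
    {"name": "B", "class": "Teacher"},
    {"name": "C", "class": "Surgeon"},
]

# Querying for a broader class retrieves records annotated with any of
# its subclasses -- the kind of subsumption inference OWL reasoners do.
matches = [r["name"] for r in records
           if is_subclass(r["class"], "HealthcareOccupation")]
print(matches)  # ['A', 'C']
```

Querying for "HealthcareOccupation" finds the nurse and the surgeon but not the teacher, even though no record mentions "HealthcareOccupation" directly: the hierarchy, not the raw data, supplies that connection.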
Applied ontology is a quickly growing field. It has found major applications in areas such as biological research, artificial intelligence, banking, healthcare, and defense.
See also
Foundation ontology
Applied philosophy
John Searle
Bertrand Russell
Barry Smith, ontologist with a focus on biomedicine
Nicola Guarino, researcher in the formal ontology of information systems
References
External links
Applied philosophy
Applied ontology
Intellectualism
Intellectualism is the mental perspective that emphasizes the use, development, and exercise of the intellect, and is identified with the life of the mind of the intellectual. In the field of philosophy, the term intellectualism indicates one of two ways of critically thinking about the character of the world: (i) rationalism, in which knowledge is derived solely from reason; and (ii) empiricism, in which knowledge is derived solely from sense experience. Each intellectual approach attempts to eliminate fallacies that ignore, mistake, or distort evidence about "what ought to be" rather than "what is" the character of the world.
Moreover, hierarchical intellectualism is a theory of intelligence which postulates that the mental capabilities constituting intelligence occur and are arranged in a hierarchy ranging from the general to the specific, as measured, for example, by the I.Q. test.
Ancient moral intellectualism
The Greek philosopher Socrates (c. 470 – 399 BC) said that intellectualism allows that "one will do what is right or [what is] best, just as soon as one truly understands what is right or best"; that virtue is a matter of the intellect, because virtue and Knowledge are related qualities that a person accrues, possesses, and improves by dedication to the use of Reason. Socrates's definition of moral intellectualism is a basis of the philosophy of Stoicism, wherein the consequences of that definition are called "Socratic paradoxes", such as "There is no weakness of will", because no one knowingly does evil or knowingly seeks to do evil (moral wrong); that anyone who commits evil or seeks to commit evil does so involuntarily; and that virtue is knowledge, that there are few virtues, but that all virtues are one.
The conceptions of Truth and of Knowledge of contemporary philosophy are unlike Socrates's conceptions of Truth and Knowledge and of ethical conduct, and cannot be equated with modern, post–Cartesian conceptions of knowledge and rational intellectualism. In that vein, by way of detailed study of history, Michel Foucault demonstrated that in Classical Antiquity (800 BC – AD 1000), "knowing the truth" was akin to "spiritual knowledge", which is integral to the principle of "caring for the self".
In effort to become a moral person the care for the self is realised through ascetic exercises meant to ensure that knowledge of truth was learned and integrated to the Self. Therefore, to understand truth meant possessing "intellectual knowledge" that integrated the self to the (universal) truth and to living an authentic life. Achieving that ethical state required continual care for the self, but also meant being someone who embodies truth, and so can readily practice the Classical-era rhetorical device of parrhesia: "to speak candidly, and to ask forgiveness for so speaking"; and, by extension, to practice the moral obligation to speak truth for the common good, even at personal risk.
Medieval theological intellectualism
Medieval theological intellectualism is a doctrine of divine action, wherein the faculty of intellect precedes, and is superior to, the faculty of the will (voluntas intellectum sequitur). As such, Intellectualism is contrasted with voluntarism, which proposes the Will as superior to the intellect, and to the emotions; hence, the stance that "according to intellectualism, choices of the Will result from that which the intellect recognizes as good; the will, itself, is determined. For voluntarism, by contrast, it is the Will which identifies which objects are good, and the Will, itself, is indetermined". From that philosophical perspective and historical context, the Spanish Muslim polymath Averroës (1126–1198) in the 12th century, the English theologian Roger Bacon, the Italian Christian theologian Thomas Aquinas (1225–1274), and the German Christian theologian Meister Eckhart (1260–1327) in the 13th century, are recognised intellectualists.
See also
Anti-intellectualism
Chinese intellectualism
Intellectual
Intellectual movements in Iran
Intelligentsia
Scientia potentia est
References
Academic terminology
Intellectual history
Intelligence
Philosophy of education
Rationalism
Thought
Posthumanism
Posthumanism or post-humanism (meaning "after humanism" or "beyond humanism") is an idea in continental philosophy and critical theory responding to the presence of anthropocentrism in 21st-century thought. Posthumanization comprises "those processes by which a society comes to include members other than 'natural' biological human beings who, in one way or another, contribute to the structures, dynamics, or meaning of the society."
It encompasses a wide variety of branches, including:
Antihumanism: a branch of theory that is critical of traditional humanism and traditional ideas about the human condition, vitality and agency.
Cultural posthumanism: A branch of cultural theory, critical of the foundational assumptions of humanism and its legacy, that examines and questions the historical notions of "human" and "human nature", often challenging typical notions of human subjectivity and embodiment; it strives to move beyond "archaic" concepts of "human nature" to develop ones which constantly adapt to contemporary technoscientific knowledge.
Philosophical posthumanism: A philosophical direction that draws on cultural posthumanism, the philosophical strand examines the ethical implications of expanding the circle of moral concern and extending subjectivities beyond the human species.
Posthuman condition: The deconstruction of the human condition by critical theorists.
Existential posthumanism: A strand that embraces posthumanism as a praxis of existence. Its sources are drawn from non-dualistic global philosophies, such as Advaita Vedanta, Taoism and Zen Buddhism, the philosophies of Yoga, continental existentialism, native epistemologies, and Sufism, among others. It examines and challenges hegemonic notions of being "human" by delving into the history of embodied practices of being human and, thus, expanding the reflection on human nature.
Posthuman transhumanism: A transhuman ideology and movement which, drawing from posthumanist philosophy, seeks to develop and make available technologies that enable immortality and greatly enhance human intellectual, physical, and psychological capacities in order to achieve a "posthuman future".
AI takeover: A variant of transhumanism in which humans will not be enhanced, but rather eventually replaced by artificial intelligences. Some philosophers and theorists, including Nick Land, promote the view that humans should embrace and accept their eventual demise as a consequence of a technological singularity. This is related to the view of "cosmism", which supports the building of strong artificial intelligence even if it may entail the end of humanity, as in their view it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".
Voluntary human extinction: Seeks a "posthuman future" that in this case is a future without humans.
Philosophical posthumanism
Philosopher Theodore Schatzki suggests there are two varieties of posthumanism of the philosophical kind:
One, which he calls "objectivism", tries to counter the overemphasis of the subjective, or intersubjective, that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things, because "Humans and nonhumans, it [objectivism] proclaims, codetermine one another", and also claims "independence of (some) objects from human activity and conceptualization".
A second posthumanist agenda is "the prioritization of practices over individuals (or individual subjects)", which, they say, constitute the individual.
There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it "posthumanism", he made an immanent critique of humanism, and then constructed a philosophy that presupposed neither humanist, nor scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. "Meaning is the being of all that has been created", Dooyeweerd wrote, "and the nature even of our selfhood". Both human and nonhuman alike function subject to a common law-side, which is diverse, composed of a number of distinct law-spheres or aspects. The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.
Emergence of philosophical posthumanism
Ihab Hassan, a theorist in the academic study of literature, once stated: "Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism." This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.
Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCullouch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana, Timothy Morton, and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term posthumanism.
Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this claim, humans have no inherent right to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge, previously seen as the defining aspect of the world, is also reduced to a less controlling position. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.
Proponents of a posthuman discourse, suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with philosophy of the Enlightenment period. Posthumanistic views were also found in the works of Shakespeare. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond that of the contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish "anthropological universals" that are imbued with anthropocentric assumptions. Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.
Although Nietzsche's philosophy has been characterized as posthumanist, Foucault placed posthumanism within a context that differentiated humanism from Enlightenment thought. According to Foucault, the two existed in a state of tension: as humanism sought to establish norms while Enlightenment thought attempted to transcend all that is material, including the boundaries that are constructed by humanistic thought. Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological and technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.
Contemporary posthuman discourse
Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book How We Became Posthuman, N. Katherine Hayles writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of the posthuman, often referred to as "technological posthumanism", visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in contemporary society is thought to complicate this relationship.
Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway's concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists' use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).
While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently humanly or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regards to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.
Technological versus non-technological
Posthumanism can be divided into non-technological and technological forms.
Non-technological posthumanism
While posthumanization has links with the scholarly methodologies of posthumanism, it is a distinct phenomenon. The rise of explicit posthumanism as a scholarly approach is relatively recent, occurring since the late 1970s; however, some of the processes of posthumanization that it studies are ancient. For example, the dynamics of non-technological posthumanization have existed historically in all societies in which animals were incorporated into families as household pets or in which ghosts, monsters, angels, or semidivine heroes were considered to play some role in the world.
Such non-technological posthumanization has been manifested not only in mythological and literary works but also in the construction of temples, cemeteries, zoos, or other physical structures that were considered to be inhabited or used by quasi- or para-human beings who were not natural, living, biological human beings but who nevertheless played some role within a given society, to the extent that, according to philosopher Francesca Ferrando: "the notion of spirituality dramatically broadens our understanding of the posthuman, allowing us to investigate not only technical technologies (robotics, cybernetics, biotechnology, nanotechnology, among others), but also, technologies of existence."
Technological posthumanism
Some forms of technological posthumanization involve efforts to directly alter the social, psychological, or physical structures and behaviors of the human being through the development and application of technologies relating to genetic engineering or neurocybernetic augmentation; such forms of posthumanization are studied, e.g., by cyborg theory. Other forms of technological posthumanization indirectly "posthumanize" human society through the deployment of social robots or attempts to develop artificial general intelligences, sentient networks, or other entities that can collaborate and interact with human beings as members of posthumanized societies.
The dynamics of technological posthumanization have long been an important element of science fiction; genres such as cyberpunk take them as a central focus. In recent decades, technological posthumanization has also become the subject of increasing attention by scholars and policymakers. The expanding and accelerating forces of technological posthumanization have generated diverse and conflicting responses, with some researchers viewing the processes of posthumanization as opening the door to a more meaningful and advanced transhumanist future for humanity, while other bioconservative critiques warn that such processes may lead to a fragmentation of human society, loss of meaning, and subjugation to the forces of technology.
Common features
Processes of technological and non-technological posthumanization both tend to result in a partial "de-anthropocentrization" of human society, as its circle of membership is expanded to include other types of entities and the position of human beings is decentered. A common theme of posthumanist study is the way in which processes of posthumanization challenge or blur simple binaries, such as those of "human versus non-human", "natural versus artificial", "alive versus non-alive", and "biological versus mechanical".
Relationship with transhumanism
Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.
Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of Posthumanism, states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as "an intensification of humanism". Transhumanist thought suggests that humans are not posthuman yet, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism's focus on Homo sapiens as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism "rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world)". These contrasting views on the importance of human beings are the main distinctions between the two subjects.
Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture".
Criticism
Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that "the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history".
However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of James's criticism), in part because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell.
While many modern leaders of thought accept the nature of the ideologies described by posthumanism, some are more skeptical of the term. Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term "companion species", referring to nonhuman entities with which humans coexist.
Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", including Frantz Fanon, Aime Cesaire, Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or displacement" which posthumanists invite. In other words, given that race in general and blackness in particular constitute the very terms through which human-nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a "beyond" actually "returns us to a Eurocentric transcendentalism long challenged". Posthumanist scholarship, due to characteristic rhetorical techniques, is also frequently subject to the same critiques commonly made of postmodernist scholarship in the 1980s and 1990s.
See also
Bioconservatism
Cyborg anthropology
Posthuman
Superhuman
Technological change
Technological transitions
Transhumanism
Methodology

In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal, like acquiring knowledge or verifying knowledge claims. This normally involves various steps, like choosing a sample, collecting data from this sample, and interpreting the data. The study of methods concerns a detailed description and analysis of these processes. It includes evaluative aspects by comparing different methods. In this way, the advantages and disadvantages of different methods are assessed, along with the research goals for which they may be used. These descriptions and evaluations depend on philosophical background assumptions. Examples are how to conceptualize the studied phenomena and what constitutes evidence for or against them. When understood in the widest sense, methodology also includes the discussion of these more abstract issues.
Methodologies are traditionally divided into quantitative and qualitative research. Quantitative research is the main methodology of the natural sciences. It uses precise numerical measurements. Its goal is usually to find universal laws used to make predictions about future events. The dominant methodology in the natural sciences is called the scientific method. It includes steps like observation and the formulation of a hypothesis. Further steps are to test the hypothesis using an experiment, to compare the measurements to the expected results, and to publish the findings.
Qualitative research is more characteristic of the social sciences and gives less prominence to exact numerical measurements. It aims more at an in-depth understanding of the meaning of the studied phenomena and less at universal and predictive laws. Common methods found in the social sciences are surveys, interviews, focus groups, and the nominal group technique. They differ from each other concerning their sample size, the types of questions asked, and the general setting. In recent decades, many social scientists have started using mixed-methods research, which combines quantitative and qualitative methodologies.
Many discussions in methodology concern the question of whether the quantitative approach is superior, especially whether it is adequate when applied to the social domain. A few theorists reject methodology as a discipline in general. For example, some argue that it is useless since methods should be used rather than studied. Others hold that it is harmful because it restricts the freedom and creativity of researchers. Methodologists often respond to these objections by claiming that a good methodology helps researchers arrive at reliable theories in an efficient way. The choice of method often matters since the same factual material can lead to different conclusions depending on one's method. Interest in methodology has risen in the 20th century due to the increased importance of interdisciplinary work and the obstacles hindering efficient cooperation.
Definitions
The term "methodology" is associated with a variety of meanings. In its most common usage, it refers either to a method, to the field of inquiry studying methods, or to philosophical discussions of background assumptions involved in these processes. Some researchers distinguish methods from methodologies by holding that methods are modes of data collection while methodologies are more general research strategies that determine how to conduct a research project. In this sense, methodologies include various theoretical commitments about the intended outcomes of the investigation.
As method
The term "methodology" is sometimes used as a synonym for the term "method". A method is a way of reaching some predefined goal. It is a planned and structured procedure for solving a theoretical or practical problem. In this regard, methods stand in contrast to free and unstructured approaches to problem-solving. For example, descriptive statistics is a method of data analysis, radiocarbon dating is a method of determining the age of organic objects, sautéing is a method of cooking, and project-based learning is an educational method. The term "technique" is often used as a synonym both in the academic and the everyday discourse. Methods usually involve a clearly defined series of decisions and actions to be used under certain circumstances, usually expressable as a sequence of repeatable instructions. The goal of following the steps of a method is to bring about the result promised by it. In the context of inquiry, methods may be defined as systems of rules and procedures to discover regularities of nature, society, and thought. In this sense, methodology can refer to procedures used to arrive at new knowledge or to techniques of verifying and falsifying pre-existing knowledge claims. This encompasses various issues pertaining both to the collection of data and their analysis. Concerning the collection, it involves the problem of sampling and of how to go about the data collection itself, like surveys, interviews, or observation. There are also numerous methods of how the collected data can be analyzed using statistics or other ways of interpreting it to extract interesting conclusions.
As study of methods
However, many theorists emphasize the differences between the terms "method" and "methodology". In this regard, methodology may be defined as "the study or description of methods" or as "the analysis of the principles of methods, rules, and postulates employed by a discipline". This study or analysis involves uncovering assumptions and practices associated with the different methods and a detailed description of research designs and hypothesis testing. It also includes evaluative aspects: forms of data collection, measurement strategies, and ways to analyze data are compared and their advantages and disadvantages relative to different research goals and situations are assessed. In this regard, methodology provides the skills, knowledge, and practical guidance needed to conduct scientific research in an efficient manner. It acts as a guideline for various decisions researchers need to take in the scientific process.
Methodology can be understood as the middle ground between concrete particular methods and the abstract and general issues discussed by the philosophy of science. In this regard, methodology comes after formulating a research question and helps the researchers decide what methods to use in the process. For example, methodology should assist the researcher in deciding why one method of sampling is preferable to another in a particular case or which form of data analysis is likely to bring the best results. Methodology achieves this by explaining, evaluating and justifying methods. Just as there are different methods, there are also different methodologies. Different methodologies provide different approaches to how methods are evaluated and explained and may thus make different suggestions on what method to use in a particular case.
According to Aleksandr Georgievich Spirkin, "[a] methodology is a system of principles and general ways of organising and structuring theoretical and practical activity, and also the theory of this system". Helen Kara defines methodology as "a contextual framework for research, a coherent and logical scheme based on views, beliefs, and values, that guides the choices researchers make". Ginny E. Garcia and Dudley L. Poston understand methodology either as a complex body of rules and postulates guiding research or as the analysis of such rules and procedures. As a body of rules and postulates, a methodology defines the subject of analysis as well as the conceptual tools used by the analysis and the limits of the analysis. Research projects are usually governed by a structured procedure known as the research process. The goal of this process is given by a research question, which determines what kind of information one intends to acquire.
As discussion of background assumptions
Some theorists prefer an even wider understanding of methodology that involves not just the description, comparison, and evaluation of methods but includes additionally more general philosophical issues. One reason for this wider approach is that discussions of when to use which method often take various background assumptions for granted, for example, concerning the goal and nature of research. These assumptions can at times play an important role concerning which method to choose and how to follow it. For example, Thomas Kuhn argues in his The Structure of Scientific Revolutions that sciences operate within a framework or a paradigm that determines which questions are asked and what counts as good science. This concerns philosophical disagreements both about how to conceptualize the phenomena studied, what constitutes evidence for and against them, and what the general goal of researching them is. So in this wider sense, methodology overlaps with philosophy by making these assumptions explicit and presenting arguments for and against them. According to C. S. Herrman, a good methodology clarifies the structure of the data to be analyzed and helps the researchers see the phenomena in a new light. In this regard, a methodology is similar to a paradigm. A similar view is defended by Spirkin, who holds that a central aspect of every methodology is the world view that comes with it.
The discussion of background assumptions can include metaphysical and ontological issues in cases where they have important implications for the proper research methodology. For example, a realist perspective considering the observed phenomena as an external and independent reality is often associated with an emphasis on empirical data collection and a more distanced and objective attitude. Idealists, on the other hand, hold that external reality is not fully independent of the mind and tend, therefore, to include more subjective tendencies in the research process as well.
For the quantitative approach, philosophical debates in methodology include the distinction between the inductive and the hypothetico-deductive interpretation of the scientific method. For qualitative research, many basic assumptions are tied to philosophical positions such as hermeneutics, pragmatism, Marxism, critical theory, and postmodernism. According to Kuhn, an important factor in such debates is that the different paradigms are incommensurable. This means that there is no overarching framework to assess the conflicting theoretical and methodological assumptions. This critique puts into question various presumptions of the quantitative approach associated with scientific progress based on the steady accumulation of data.
Other discussions of abstract theoretical issues in the philosophy of science are also sometimes included. This can involve questions like how and whether scientific research differs from fictional writing as well as whether research studies objective facts rather than constructing the phenomena it claims to study. In the latter sense, some methodologists have even claimed that the goal of science is less to represent a pre-existing reality and more to bring about some kind of social change in favor of repressed groups in society.
Related terms and issues
Viknesh Andiappan and Yoke Kin Wan use the field of process systems engineering to distinguish the term "methodology" from the closely related terms "approach", "method", "procedure", and "technique". On their view, "approach" is the most general term. It can be defined as "a way or direction used to address a problem based on a set of assumptions". An example is the difference between hierarchical approaches, which consider one task at a time in a hierarchical manner, and concurrent approaches, which consider them all simultaneously. Methodologies are a little more specific. They are general strategies needed to realize an approach and may be understood as guidelines for how to make choices. Often the term "framework" is used as a synonym. A method is a still more specific way of practically implementing the approach. Methodologies provide the guidelines that help researchers decide which method to follow. The method itself may be understood as a sequence of techniques. A technique is a step taken that can be observed and measured. Each technique has some immediate result. The whole sequence of steps is termed a "procedure". A similar but less complex characterization is sometimes found in the field of language teaching, where the teaching process may be described through a three-level conceptualization based on "approach", "method", and "technique".
One question concerning the definition of methodology is whether it should be understood as a descriptive or a normative discipline. The key difference in this regard is whether methodology merely provides a value-neutral description of the methods scientists actually use or also makes claims about which methods they should use. Many methodologists practice their craft in a normative sense, meaning that they express clear opinions about the advantages and disadvantages of different methods. In this regard, methodology is not just about what researchers actually do but about what they ought to do or how to perform good research.
Types
Theorists often distinguish various general types or approaches to methodology. The most influential classification contrasts quantitative and qualitative methodology.
Quantitative and qualitative
Quantitative research is closely associated with the natural sciences. It is based on precise numerical measurements, which are then used to arrive at exact general laws. This precision is also reflected in the goal of making predictions that can later be verified by other researchers. Examples of quantitative research include physicists at the Large Hadron Collider measuring the mass of newly created particles and positive psychologists conducting an online survey to determine the correlation between income and self-assessed well-being.
Qualitative research is characterized in various ways in the academic literature but there are very few precise definitions of the term. It is often used in contrast to quantitative research for forms of study that do not quantify their subject matter numerically. However, the distinction between these two types is not always obvious and various theorists have argued that it should be understood as a continuum and not as a dichotomy. A lot of qualitative research is concerned with some form of human experience or behavior, in which case it tends to focus on a few individuals and their in-depth understanding of the meaning of the studied phenomena. Examples of the qualitative method are a market researcher conducting a focus group in order to learn how people react to a new product or a medical researcher performing an unstructured in-depth interview with a participant from a new experimental therapy to assess its potential benefits and drawbacks. It is also used to improve quantitative research, for example by informing the design of data collection materials and questionnaires. Qualitative research is frequently employed in fields where the pre-existing knowledge is inadequate. This way, it is possible to get a first impression of the field and potential theories, thus paving the way for investigating the issue in further studies.
Quantitative methods dominate in the natural sciences but both methodologies are used in the social sciences. Some social scientists focus mostly on one method while others try to investigate the same phenomenon using a variety of different methods. It is central to both approaches how the group of individuals used for the data collection is selected. This process is known as sampling. It involves the selection of a subset of individuals or phenomena to be measured. Important in this regard is that the selected samples are representative of the whole population, i.e. that no significant biases were involved when choosing. If this is not the case, the data collected does not reflect what the population as a whole is like. This affects generalizations and predictions drawn from the biased data. The number of individuals selected is called the sample size. For qualitative research, the sample size is usually rather small, while quantitative research tends to focus on big groups and collecting a lot of data. After the collection, the data needs to be analyzed and interpreted to arrive at interesting conclusions that pertain directly to the research question. This way, the wealth of information obtained is summarized and thus made more accessible to others. Especially in the case of quantitative research, this often involves the application of some form of statistics to make sense of the numerous individual measurements.
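The role of sampling described above can be sketched with a small simulation (the population parameters are invented): with simple random sampling, where every individual has the same chance of selection, the sample mean should come close to the population mean.

```python
import random
import statistics

random.seed(0)  # fixed seed for a reproducible illustration

# A hypothetical population: 10,000 simulated income values.
population = [random.gauss(50_000, 12_000) for _ in range(10_000)]

# Simple random sampling guards against selection bias because no
# individual is more likely to be chosen than any other.
sample = random.sample(population, k=200)

# A representative sample supports generalizations: its mean should
# lie close to the mean of the whole population.
print(statistics.mean(population), statistics.mean(sample))
```

A biased selection procedure, by contrast, would make the sample mean drift away from the population mean, distorting any generalizations drawn from it.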
Many discussions in the history of methodology center around the quantitative methods used by the natural sciences. A central question in this regard is to what extent they can be applied to other fields, like the social sciences and history. The success of the natural sciences was often seen as an indication of the superiority of the quantitative methodology and used as an argument to apply this approach to other fields as well. However, this outlook has been put into question in the more recent methodological discourse. In this regard, it is often argued that the paradigm of the natural sciences is a one-sided development of reason, which is not equally well suited to all areas of inquiry. The divide between quantitative and qualitative methods in the social sciences is one consequence of this criticism.
Which method is more appropriate often depends on the goal of the research. For example, quantitative methods usually excel at evaluating preconceived hypotheses that can be clearly formulated and measured. Qualitative methods, on the other hand, can be used to study complex individual issues, often with the goal of formulating new hypotheses. This is especially relevant when the existing knowledge of the subject is inadequate. Important advantages of quantitative methods include precision and reliability. However, they often have difficulties in studying very complex phenomena that are commonly of interest to the social sciences. Additional problems can arise when the data is misinterpreted to defend conclusions that are not directly supported by the measurements themselves. In recent decades, many researchers in the social sciences have started combining both methodologies. This is known as mixed-methods research. A central motivation for this is that the two approaches can complement each other in various ways: some issues are ignored or too difficult to study with one methodology and are better approached with the other. In other cases, both approaches are applied to the same issue to produce more comprehensive and well-rounded results.
Qualitative and quantitative research are often associated with different research paradigms and background assumptions. Qualitative researchers often use an interpretive or critical approach while quantitative researchers tend to prefer a positivistic approach. Important disagreements between these approaches concern the role of objectivity and hard empirical data as well as the research goal of predictive success rather than in-depth understanding or social change.
Others
Various other classifications have been proposed. One distinguishes between substantive and formal methodologies. Substantive methodologies tend to focus on one specific area of inquiry. The findings are initially restricted to this specific field but may be transferrable to other areas of inquiry. Formal methodologies, on the other hand, are based on a variety of studies and try to arrive at more general principles applying to different fields. They may also give particular prominence to the analysis of the language of science and the formal structure of scientific explanation. A closely related classification distinguishes between philosophical, general scientific, and special scientific methods.
One type of methodological outlook is called "proceduralism". According to it, the goal of methodology is to boil down the research process to a simple set of rules or a recipe that automatically leads to good research if followed precisely. However, it has been argued that, while this ideal may be acceptable for some forms of quantitative research, it fails for qualitative research. One argument for this position is based on the claim that research is not a technique but a craft that cannot be achieved by blindly following a method. In this regard, research depends on forms of creativity and improvisation to amount to good science.
Other types include inductive, deductive, and transcendental methods. Inductive methods are common in the empirical sciences and proceed through inductive reasoning from many particular observations to arrive at general conclusions, often in the form of universal laws. Deductive methods, also referred to as axiomatic methods, are often found in formal sciences, such as geometry. They start from a set of self-evident axioms or first principles and use deduction to infer interesting conclusions from these axioms. Transcendental methods are common in Kantian and post-Kantian philosophy. They start with certain particular observations. It is then argued that the observed phenomena can only exist if their conditions of possibility are fulfilled. This way, the researcher may draw general psychological or metaphysical conclusions based on the claim that the phenomenon would not be observable otherwise.
Importance
It has been argued that a proper understanding of methodology is important for various issues in the field of research. They include both the problem of conducting efficient and reliable research as well as being able to validate knowledge claims by others. Method is often seen as one of the main factors of scientific progress. This is especially true for the natural sciences where the developments of experimental methods in the 16th and 17th century are often seen as the driving force behind the success and prominence of the natural sciences. In some cases, the choice of methodology may have a severe impact on a research project. The reason is that very different and sometimes even opposite conclusions may follow from the same factual material based on the chosen methodology.
Aleksandr Georgievich Spirkin argues that methodology, when understood in a wide sense, is of great importance since the world presents us with innumerable entities and relations between them. Methods are needed to simplify this complexity and find a way of mastering it. On the theoretical side, this concerns ways of forming true beliefs and solving problems. On the practical side, this concerns skills of influencing nature and dealing with each other. These different methods are usually passed down from one generation to the next. Spirkin holds that the interest in methodology on a more abstract level arose in attempts to formalize these techniques to improve them as well as to make it easier to use them and pass them on. In the field of research, for example, the goal of this process is to find reliable means to acquire knowledge in contrast to mere opinions acquired by unreliable means. In this regard, "methodology is a way of obtaining and building up ... knowledge".
Various theorists have observed that the interest in methodology has risen significantly in the 20th century. This increased interest is reflected not just in academic publications on the subject but also in the institutionalized establishment of training programs focusing specifically on methodology. This phenomenon can be interpreted in different ways. Some see it as a positive indication of the topic's theoretical and practical importance. Others interpret this interest in methodology as an excessive preoccupation that draws time and energy away from doing research on concrete subjects by applying the methods instead of researching them. This ambiguous attitude towards methodology is sometimes even exemplified in the same person. Max Weber, for example, criticized the focus on methodology during his time while making significant contributions to it himself. Spirkin believes that one important reason for this development is that contemporary society faces many global problems. These problems cannot be solved by a single researcher or a single discipline but are in need of collaborative efforts from many fields. Such interdisciplinary undertakings profit a lot from methodological advances, both concerning the ability to understand the methods of the respective fields and in relation to developing more homogeneous methods equally used by all of them.
Criticism
Most criticism of methodology is directed at one specific form or understanding of it. In such cases, one particular methodological theory is rejected but not methodology at large when understood as a field of research comprising many different theories. In this regard, many objections to methodology focus on the quantitative approach, specifically when it is treated as the only viable approach. Nonetheless, there are also more fundamental criticisms of methodology in general. They are often based on the idea that there is little value to abstract discussions of methods and the reasons cited for and against them. In this regard, it may be argued that what matters is the correct employment of methods and not their meticulous study. Sigmund Freud, for example, compared methodologists to "people who clean their glasses so thoroughly that they never have time to look through them". According to C. Wright Mills, the practice of methodology often degenerates into a "fetishism of method and technique".
Some even hold that methodological reflection is not just a waste of time but actually has negative side effects. Such an argument may be defended by analogy to other skills that work best when the agent focuses only on employing them. In this regard, reflection may interfere with the process and lead to avoidable mistakes. According to an example by Gilbert Ryle, "[w]e run, as a rule, worse, not better, if we think a lot about our feet". A less severe version of this criticism does not reject methodology per se but denies its importance and rejects an intense focus on it. In this regard, methodology has still a limited and subordinate utility but becomes a diversion or even counterproductive by hindering practice when given too much emphasis.
Another line of criticism concerns more the general and abstract nature of methodology. It states that the discussion of methods is only useful in concrete and particular cases but not concerning abstract guidelines governing many or all cases. Some anti-methodologists reject methodology based on the claim that researchers need freedom to do their work effectively. But this freedom may be constrained and stifled by "inflexible and inappropriate guidelines". For example, according to Kerry Chamberlain, a good interpretation needs creativity to be provocative and insightful, which is prohibited by a strictly codified approach. Chamberlain uses the neologism "methodolatry" to refer to this alleged overemphasis on methodology. Similar arguments are given in Paul Feyerabend's book "Against Method".
However, these criticisms of methodology in general are not always accepted. Many methodologists defend their craft by pointing out how the efficiency and reliability of research can be improved through a proper understanding of methodology.
A criticism of more specific forms of methodology is found in the works of the sociologist Howard S. Becker. He is quite critical of methodologists based on the claim that they usually act as advocates of one particular method usually associated with quantitative research. An often-cited quotation in this regard is that "[m]ethodology is too important to be left to methodologists". Alan Bryman has rejected this negative outlook on methodology. He holds that Becker's criticism can be avoided by understanding methodology as an inclusive inquiry into all kinds of methods and not as a mere doctrine for converting non-believers to one's preferred method.
In different fields
Part of the importance of methodology is reflected in the number of fields to which it is relevant. They include the natural sciences and the social sciences as well as philosophy and mathematics.
Natural sciences
The dominant methodology in the natural sciences (like astronomy, biology, chemistry, geoscience, and physics) is called the scientific method. Its main cognitive aim is usually seen as the creation of knowledge, but various closely related aims have also been proposed, like understanding, explanation, or predictive success. Strictly speaking, there is no one single scientific method. In this regard, the expression "scientific method" refers not to one specific procedure but to different general or abstract methodological aspects characteristic of all the aforementioned fields. Important features are that the problem is formulated in a clear manner and that the evidence presented for or against a theory is public, reliable, and replicable. The last point is important so that other researchers are able to repeat the experiments to confirm or disconfirm the initial study. For this reason, various factors and variables of the situation often have to be controlled to avoid distorting influences and to ensure that subsequent measurements by other researchers yield the same results. The scientific method is a quantitative approach that aims at obtaining numerical data. This data is often described using mathematical formulas. The goal is usually to arrive at some universal generalizations that apply not just to the artificial situation of the experiment but to the world at large. Some data can only be acquired using advanced measurement instruments. In cases where the data is very complex, it is often necessary to employ sophisticated statistical techniques to draw conclusions from it.
The scientific method is often broken down into several steps. In a typical case, the procedure starts with regular observation and the collection of information. These findings then lead the scientist to formulate a hypothesis describing and explaining the observed phenomena. The next step consists in conducting an experiment designed for this specific hypothesis. The actual results of the experiment are then compared to the expected results based on one's hypothesis. The findings may then be interpreted and published, either as a confirmation or disconfirmation of the initial hypothesis.
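The sequence of steps just described can be mirrored in a toy simulation (a hypothetical coin-flip experiment; all numbers are invented): a hypothesis fixes an expectation, an experiment produces measurements, and the two are compared.

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

# Steps 1-2: observation leads to a hypothesis, here "the coin is fair",
# from which we expect heads in about 50% of flips.
expected_rate = 0.5

# Step 3: conduct the experiment designed for this specific hypothesis.
flips = 10_000
heads = sum(random.random() < 0.5 for _ in range(flips))

# Step 4: compare the actual results to the expected results. A tolerance
# is needed because random variation makes an exact match unlikely.
observed_rate = heads / flips
consistent = abs(observed_rate - expected_rate) < 0.02

print(f"observed {observed_rate:.3f}; consistent with hypothesis: {consistent}")
```

The final step, publishing the findings as a confirmation or disconfirmation, corresponds to reporting whether the observed rate fell within the tolerance.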
Two central aspects of the scientific method are observation and experimentation. This distinction is based on the idea that experimentation involves some form of manipulation or intervention. This way, the studied phenomena are actively created or shaped. For example, a biologist inserting viral DNA into a bacterium is engaged in a form of experimentation. Pure observation, on the other hand, involves studying independent entities in a passive manner. This is the case, for example, when astronomers observe the orbits of astronomical objects far away. Observation played the main role in ancient science. The scientific revolution in the 16th and 17th centuries effected a paradigm change that gave a much more central role to experimentation in the scientific methodology. This is sometimes expressed by stating that modern science actively "puts questions to nature". While the distinction is usually clear in the paradigmatic cases, there are also many intermediate cases where it is not obvious whether they should be characterized as observation or as experimentation.
A central discussion in this field concerns the distinction between the inductive and the hypothetico-deductive methodology. The core disagreement between these two approaches concerns their understanding of the confirmation of scientific theories. The inductive approach holds that a theory is confirmed or supported by all its positive instances, i.e. by all the observations that exemplify it. For example, the observations of many white swans confirm the universal hypothesis that "all swans are white". The hypothetico-deductive approach, on the other hand, focuses not on positive instances but on deductive consequences of the theory. This way, the researcher uses deduction before conducting an experiment to infer what observations they expect. These expectations are then compared to the observations they actually make. This approach often takes a negative form based on falsification. In this regard, positive instances do not confirm a hypothesis but negative instances disconfirm it. Positive indications that the hypothesis is true are only given indirectly if many attempts to find counterexamples have failed. A cornerstone of this approach is the null hypothesis, which assumes that there is no causal connection between the phenomena being observed. It is up to the researcher to do all they can to refute the null hypothesis through relevant methods or techniques, documented in a clear and replicable process. If the evidence is incompatible with the null hypothesis, it can be rejected, which provides support for the researcher's own hypothesis about the relation between the observed phenomena.
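The null-hypothesis logic described above can be illustrated with a simple permutation test. The following Python sketch is illustrative only (the data and the function name are invented for this example): it asks how often random relabelings of the pooled data produce a group difference at least as large as the one observed.

```python
import random

def permutation_test(group_a, group_b, trials=10_000, seed=0):
    """Two-sample permutation test.

    Null hypothesis: both groups come from the same distribution, so the
    observed difference in means is due to chance. The p-value estimates
    the fraction of random relabelings whose difference is at least as
    large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # random relabeling of the pooled measurements
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Invented measurements for two experimental groups:
p = permutation_test([12.1, 11.8, 12.4, 12.0, 12.3], [10.9, 11.2, 10.8, 11.1, 11.0])
print(p)
```

A small p-value means the observed difference would be rare if the null hypothesis of "no connection" were true, which counts as grounds for rejecting it.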
Social sciences
Significantly more methodological variety is found in the social sciences, where both quantitative and qualitative approaches are used. They employ various forms of data collection, such as surveys, interviews, focus groups, and the nominal group technique. Surveys belong to quantitative research and usually involve some form of questionnaire given to a large group of individuals. It is paramount that the questions are easily understandable by the participants since the answers might not have much value otherwise. Surveys normally restrict themselves to closed questions in order to avoid various problems that come with the interpretation of answers to open questions. They contrast in this regard to interviews, which put more emphasis on the individual participant and often involve open questions. Structured interviews are planned in advance and have a fixed set of questions given to each individual. They contrast with unstructured interviews, which are closer to a free-flow conversation and require more improvisation on the side of the interviewer for finding interesting and relevant questions. Semi-structured interviews constitute a middle ground: they include both predetermined questions and questions not planned in advance. Structured interviews make it easier to compare the responses of the different participants and to draw general conclusions. However, they also limit what may be discovered and thus constrain the investigation in many ways. Depending on the type and depth of the interview, this method belongs either to quantitative or to qualitative research. The terms research conversation and muddy interview have been used to describe interviews conducted in informal settings which may not occur purely for the purposes of data collection. Some researchers employ the go-along method by conducting interviews while they and the participants navigate through and engage with their environment.
Focus groups are a qualitative research method often used in market research. They constitute a form of group interview involving a small number of demographically similar people. Researchers can use this method to collect data based on the interactions and responses of the participants. The interview often starts by asking the participants about their opinions on the topic under investigation, which may, in turn, lead to a free exchange in which the group members express and discuss their personal views. An important advantage of focus groups is that they can provide insight into how ideas and understanding operate in a cultural context. However, it is usually difficult to use these insights to discern more general patterns true for a wider public. Another advantage is that they can help the researcher identify a wide range of distinct perspectives on the issue in a short time. The group interaction may also help clarify and expand interesting contributions. One disadvantage is due to the moderator's personality and group effects, which may influence the opinions stated by the participants. When applied to cross-cultural settings, cultural and linguistic adaptations and group composition considerations are important to encourage greater participation in the group discussion.
The nominal group technique is similar to focus groups with a few important differences. The group often consists of experts in the field in question. The group size is similar but the interaction between the participants is more structured. The goal is to determine how much agreement there is among the experts on the different issues. The initial responses are often given in written form by each participant without a prior conversation between them. In this manner, group effects potentially influencing the expressed opinions are minimized. In later steps, the different responses and comments may be discussed and compared to each other by the group as a whole.
Most of these forms of data collection involve some type of observation. Observation can take place either in a natural setting, i.e. the field, or in a controlled setting such as a laboratory. Controlled settings carry with them the risk of distorting the results due to their artificiality. Their advantage lies in precisely controlling the relevant factors, which can help make the observations more reliable and repeatable. Non-participatory observation involves a distanced or external approach. In this case, the researcher focuses on describing and recording the observed phenomena without causing or changing them, in contrast to participatory observation.
An important methodological debate in the field of social sciences concerns the question of whether they deal with hard, objective, and value-neutral facts, as the natural sciences do. Positivists agree with this characterization, in contrast to interpretive and critical perspectives on the social sciences. According to William Neumann, positivism can be defined as "an organized method for combining deductive logic with precise empirical observations of individual behavior in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity". This view is rejected by interpretivists. Max Weber, for example, argues that the method of the natural sciences is inadequate for the social sciences. Instead, more importance is placed on meaning and how people create and maintain their social worlds. The critical methodology in social science is associated with Karl Marx and Sigmund Freud. It is based on the assumption that many of the phenomena studied using the other approaches are mere distortions or surface illusions. It seeks to uncover deeper structures of the material world hidden behind these distortions. This approach is often guided by the goal of helping people effect social changes and improvements.
Philosophy
Philosophical methodology is the metaphilosophical field of inquiry studying the methods used in philosophy. These methods structure how philosophers conduct their research, acquire knowledge, and select between competing theories. It concerns both descriptive issues of what methods have been used by philosophers in the past and normative issues of which methods should be used. Many philosophers emphasize that these methods differ significantly from the methods found in the natural sciences in that they usually do not rely on experimental data obtained through measuring equipment. Which method one follows can have wide implications for how philosophical theories are constructed, what theses are defended, and what arguments are cited in favor or against. In this regard, many philosophical disagreements have their source in methodological disagreements. Historically, the discovery of new methods, like methodological skepticism and the phenomenological method, has had important impacts on the philosophical discourse.
A great variety of methods has been employed throughout the history of philosophy. Methodological skepticism gives special importance to the role of systematic doubt. This way, philosophers try to discover absolutely certain first principles that are indubitable. The geometric method starts from such first principles and employs deductive reasoning to construct a comprehensive philosophical system based on them. Phenomenology gives particular importance to how things appear to be. It consists in suspending one's judgments about whether these things actually exist in the external world. This technique is known as epoché and can be used to study appearances independent of assumptions about their causes. The method of conceptual analysis came to particular prominence with the advent of analytic philosophy. It studies concepts by breaking them down into their most fundamental constituents to clarify their meaning. Common sense philosophy uses common and widely accepted beliefs as a philosophical tool. They are used to draw interesting conclusions. This is often employed in a negative sense to discredit radical philosophical positions that go against common sense. Ordinary language philosophy has a very similar method: it approaches philosophical questions by looking at how the corresponding terms are used in ordinary language.
Many methods in philosophy rely on some form of intuition. They are used, for example, to evaluate thought experiments, which involve imagining situations to assess their possible consequences in order to confirm or refute philosophical theories. The method of reflective equilibrium tries to form a coherent perspective by examining and reevaluating all the relevant beliefs and intuitions. Pragmatists focus on the practical consequences of philosophical theories to assess whether they are true or false. Experimental philosophy is a recently developed approach that uses the methodology of social psychology and the cognitive sciences for gathering empirical evidence and justifying philosophical claims.
Mathematics
In the field of mathematics, various methods can be distinguished, such as synthetic, analytic, deductive, inductive, and heuristic methods. For example, the difference between synthetic and analytic methods is that the former start from the known and proceed to the unknown while the latter seek to find a path from the unknown to the known. Geometry textbooks often proceed using the synthetic method. They start by listing known definitions and axioms and proceed by taking inferential steps, one at a time, until the solution to the initial problem is found. An important advantage of the synthetic method is its clear and short logical exposition. One disadvantage is that it is usually not obvious in the beginning that the steps taken lead to the intended conclusion. This may then come as a surprise to the reader since it is not explained how the mathematician knew in the beginning which steps to take. The analytic method often reflects better how mathematicians actually make their discoveries. For this reason, it is often seen as the better method for teaching mathematics. It starts with the intended conclusion and tries to find another formula from which it can be deduced. It then goes on to apply the same process to this new formula until it has traced back all the way to already proven theorems. The difference between the two methods concerns primarily how mathematicians think and present their proofs. The two are equivalent in the sense that the same proof may be presented either way.
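As an illustration (not drawn from the source), the inequality between the arithmetic and geometric means of two nonnegative numbers can be presented in either direction. The synthetic exposition deduces the result from an evident starting point, while the analytic exposition starts from the intended conclusion and traces it back to that starting point; the underlying proof is the same.

```latex
% Synthetic method: from the known to the unknown.
(\sqrt{a} - \sqrt{b})^2 \ge 0
\;\Rightarrow\; a - 2\sqrt{ab} + b \ge 0
\;\Rightarrow\; \frac{a + b}{2} \ge \sqrt{ab}

% Analytic method: from the unknown back to the known.
\frac{a + b}{2} \ge \sqrt{ab}
\;\Leftarrow\; a - 2\sqrt{ab} + b \ge 0
\;\Leftarrow\; (\sqrt{a} - \sqrt{b})^2 \ge 0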
Statistics
Statistics investigates the analysis, interpretation, and presentation of data. It plays a central role in many forms of quantitative research that have to deal with the data of many observations and measurements. In such cases, data analysis is used to cleanse, transform, and model the data to arrive at practically useful conclusions. There are numerous methods of data analysis. They are usually divided into descriptive statistics and inferential statistics. Descriptive statistics restricts itself to the data at hand. It tries to summarize the most salient features and present them in insightful ways. This can happen, for example, by visualizing its distribution or by calculating indices such as the mean or the standard deviation. Inferential statistics, on the other hand, uses this data based on a sample to draw inferences about the population at large. That can take the form of making generalizations and predictions or by assessing the probability of a concrete hypothesis.
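The distinction between descriptive and inferential statistics can be made concrete with a small Python sketch. The sample data are invented, and the 1.96 factor assumes a normal approximation for the 95% confidence interval:

```python
import math
import statistics

sample = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9, 4.4, 4.3, 4.1, 4.0]

# Descriptive statistics: summarize the data at hand.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)  # sample standard deviation

# Inferential statistics: use the sample to estimate the population mean,
# here via an approximate 95% confidence interval (normal approximation).
stderr = stdev / math.sqrt(len(sample))
ci_low, ci_high = mean - 1.96 * stderr, mean + 1.96 * stderr

print(f"mean={mean:.2f}, stdev={stdev:.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```

The first two lines of output summarize the sample itself; the confidence interval is an inference that goes beyond the data to the population from which the sample is assumed to be drawn.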
Pedagogy
Pedagogy can be defined as the study or science of teaching methods. In this regard, it is the methodology of education: it investigates the methods and practices that can be applied to fulfill the aims of education. These aims include the transmission of knowledge as well as fostering skills and character traits. Its main focus is on teaching methods in the context of regular schools. But in its widest sense, it encompasses all forms of education, both inside and outside schools. In this wide sense, pedagogy is concerned with "any conscious activity by one person designed to enhance learning in another". The teaching happening this way is a process taking place between two parties: teachers and learners. Pedagogy investigates how the teacher can help the learner undergo experiences that promote their understanding of the subject matter in question.
Various influential pedagogical theories have been proposed. Mental-discipline theories were already common in ancient Greece and state that the main goal of teaching is to train intellectual capacities. They are usually based on a certain ideal of the capacities, attitudes, and values possessed by educated people. According to naturalistic theories, there is an inborn natural tendency in children to develop in a certain way. For them, pedagogy is about how to help this process happen by ensuring that the required external conditions are set up. Herbartianism identifies five essential components of teaching: preparation, presentation, association, generalization, and application. They correspond to different phases of the educational process: getting ready for it, showing new ideas, bringing these ideas in relation to known ideas, understanding the general principle behind their instances, and putting what one has learned into practice. Learning theories focus primarily on how learning takes place and formulate the proper methods of teaching based on these insights. One of them is apperception or association theory, which understands the mind primarily in terms of associations between ideas and experiences. On this view, the mind is initially a blank slate. Learning is a form of developing the mind by helping it establish the right associations. Behaviorism is a more externally oriented learning theory. It identifies learning with classical conditioning, in which the learner's behavior is shaped by presenting them with a stimulus with the goal of evoking and solidifying the desired response pattern to this stimulus.
The choice of which specific method is best to use depends on various factors, such as the subject matter and the learner's age. Interest and curiosity on the side of the student are among the key factors of learning success. This means that one important aspect of the chosen teaching method is to ensure that these motivational forces are maintained, through intrinsic or extrinsic motivation. Many forms of education also include regular assessment of the learner's progress, for example, in the form of tests. This helps to ensure that the teaching process is successful and to make adjustments to the chosen method if necessary.
Related concepts
Methodology has several related concepts, such as paradigm and algorithm. In the context of science, a paradigm is a conceptual worldview. It consists of a number of basic concepts and general theories that determine how the studied phenomena are to be conceptualized and which scientific methods are considered reliable for studying them. Various theorists emphasize similar aspects of methodologies, for example, that they shape the general outlook on the studied phenomena and help the researcher see them in a new light.
In computer science, an algorithm is a procedure or methodology to reach the solution of a problem with a finite number of steps. Each step has to be precisely defined so it can be carried out in an unambiguous manner for each application. For example, the Euclidean algorithm is an algorithm that solves the problem of finding the greatest common divisor of two integers. It is based on simple steps like comparing the two numbers and subtracting one from the other.
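The subtraction-based procedure described above can be sketched in Python (an illustrative implementation of the classic algorithm):

```python
def gcd_subtraction(a: int, b: int) -> int:
    """Greatest common divisor via the subtraction form of the Euclidean
    algorithm: repeatedly compare the two numbers and subtract the smaller
    from the larger; when they become equal, that value is the GCD."""
    if a <= 0 or b <= 0:
        raise ValueError("inputs must be positive integers")
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_subtraction(48, 18))  # → 6
```

Each pass through the loop performs exactly the two simple steps mentioned in the text: comparing the two numbers and subtracting one from the other.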
See also
Philosophical methodology
Political methodology
Scientific method
Software development process
Survey methodology
References
Further reading
Berg, Bruce L., 2009, Qualitative Research Methods for the Social Sciences. Seventh Edition. Boston MA: Pearson Education Inc.
Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, California: Sage Publications.
Creswell, J. (2003). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, California: Sage Publications.
Franklin, M.I. (2012). Understanding Research: Coping with the Quantitative-Qualitative Divide. London and New York: Routledge.
Guba, E. and Lincoln, Y. (1989). Fourth Generation Evaluation. Newbury Park, California: Sage Publications.
Herrman, C. S. (2009). "Fundamentals of Methodology", a series of papers On the Social Science Research Network (SSRN), online.
Howell, K. E. (2013) Introduction to the Philosophy of Methodology. London, UK: Sage Publications.
Ndira, E. Alana, Slater, T. and Bucknam, A. (2011). Action Research for Business, Nonprofit, and Public Administration - A Tool for Complex Times. Thousand Oaks, CA: Sage.
Joubish, Farooq (2009). Educational Research. Karachi, Pakistan: Department of Education, Federal Urdu University.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd edition). Thousand Oaks, California: Sage Publications.
Silverman, David (Ed). (2011). Qualitative Research: Issues of Theory, Method and Practice, Third Edition. London, Thousand Oaks, New Delhi, Singapore: Sage Publications
Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. 2014. Handbook of Research Methods in Military Studies New York: Routledge.
External links
Freedictionary, usage note on the word Methodology
Researcherbook, research methodology forum and resources
Epistemic democracy
Epistemic democracy refers to a range of views in political science and philosophy which see the value of democracy as based, at least in part, on its ability to make good or correct decisions. Epistemic democrats believe that the legitimacy or justification of democratic government should not be exclusively based on the intrinsic value of its procedures and how they embody or express values such as fairness, equality, or freedom. Instead, they claim that a political system based on political equality can be expected to make good political decisions, and possibly decisions better than any alternative form of government (e.g., oligarchy, aristocracy, or dictatorship).
Theories of epistemic democracy are therefore concerned with the ability of democratic institutions to do such things as communicate, produce, and utilise knowledge, engage in forms of experimentation, aggregate judgements and solve social problems. Based on such abilities, democracy is said to be able to track some standard of correctness, such as the truth, justice, the common good, or the collective interest. Epistemic democracy as such does not recommend any particular form of democracy – whether it be direct, representative, participatory, or deliberative – and epistemic democrats themselves disagree over such questions. Instead, they are united by a common concern for the epistemic value of inclusive and equal political arrangements. Epistemic democrats are therefore often associated with ideas such as collective intelligence and the wisdom of crowds.
Epistemic (or proto epistemic) arguments for democracy have a long history in political thought and can be found in the work of figures such as Aristotle, Jean-Jacques Rousseau, Nicolas de Condorcet, and John Dewey. In contemporary political philosophy and political science, advocates of epistemic democracy include David Estlund, Hélène Landemore, Elizabeth Anderson, Joshua Cohen, Robert Goodin, and Kai Spiekermann.
Overview
Theories of epistemic democracy see the value of democracy as based, at least in part, on the ability of democratic procedures to make good or correct decisions, where ‘good’ and ‘correct’ are normally defined in respect to some procedure-independent standard. This independent standard may be the truth, justice, the common good, or the collective interest. Epistemic democrats therefore claim that democracy is not valuable solely because it embodies or expresses certain intrinsic values. Instead, it is thought to (also) take decisions which can track some conception of the truth, justice or the common good. Such views therefore often take there to be an important instrumental component to any defence or justification of democratic government.
Views along these lines are helpfully contrasted with purely procedural theories of democracy. Pure proceduralism refers to the view that the value and justification of democracy rests solely in the fairness or intrinsic value of democratic institutions. What matters on these views is that democracy embodies or expresses important values – such as equality, freedom, or autonomy – rather than the quality of the outcomes they produce. The only way to evaluate the quality of a political decision on such a view is therefore to look back at the procedure which produced it. They ask whether the decision was taken in a free and fair manner, and if so, judge it to be a good decision. In contrast, epistemic democrats think that political decisions can be judged by some standards which are independent of the procedures which produced them. They can therefore ask, irrespective of whether the decision was taken in a free or fair way, did it track some conception of the truth, justice, or the common good?
Epistemic democracy is therefore analogous to what John Rawls referred to as ‘imperfect procedural justice’. Rawls states that imperfect procedural justice takes there to be independent criteria which define whether an outcome is correct or better. Unlike perfect procedural justice, however, which takes there to be a procedure which can guarantee the right outcome, imperfect procedural justice looks for procedures which can achieve the right outcomes to some level of reliability. An example is trial by jury where legal guilt or innocence are the standards of correctness which the jury aims at in their decision. While there is no guarantee that juries will always convict the guilty and not the innocent, they are thought to track these standards most of the time and therefore with some level of reliability.
For epistemic democrats, then, democracy is analogous to a criminal trial but where the independent standards are not legal guilt or innocence, but truth, justice or the common good. While there is no guarantee that democratic decisions will always track these standards, epistemic democrats argue that they will tend to track them, or that they will tend to track them more often than their alternatives. An argument for the epistemic merits of democracy will therefore be probabilistic. They claim that democracy ‘tends’ to produce good decisions, not that it always will.
Because it sees independent standards as an important part of democracy, epistemic democrats must reject forms of pure proceduralism where democracy is valuable for exclusively intrinsic reasons. They do not, however, necessarily need to deny any procedural or intrinsic value to democracy. Arguments about the ability of democracy to achieve independent standards are freestanding and conceptually independent of procedural arguments. Epistemic democracy therefore includes positions which take democracy to be solely justified on epistemic grounds, and positions which combine epistemic and procedural considerations.
While epistemic democracy is commonly defined in respect to the importance placed on procedure-independent standards, there are a range of epistemological accounts of democracy which do not clearly fit this description. One example is pragmatist-inspired views which focus on how the justification of democracy is based not on certain moral or ethical values but rather our epistemic commitments. For instance, if we wish to have correct beliefs, it may be that we should be open to all objections and arguments against our existing views, and this then commits us to an open and inclusive process of inquiry. Contemporary advocates for this kind of view include Robert Talisse and Cheryl Misak. Jürgen Habermas’ democratic theory similarly involves an epistemic component in that he sees deliberation as a process for testing validity claims, such as to empirical truth or moral rightness, which aim to gain acceptance. He, however, considers his view to be a pure proceduralist one. Fabienne Peter also offers a view she calls ‘pure epistemic proceduralism’ which does not rely on independent standards of correctness. Instead, she argues that a decision is legitimate ‘if it is the outcome of a process that satisfies certain conditions of political and epistemic fairness’. As a result, the definition of epistemic democracy may be drawn more broadly to include not only accounts which involve procedure-independent standards, but any epistemic considerations.
Historical Background
While they remain a current topic of discussion within political science and philosophy, Hélène Landemore argues that epistemic arguments for democracy have a long pedigree within the history of political thought. She suggests that these arguments date back at least to the Greeks and can be found in a diverse range of authors including Machiavelli, Spinoza, Rousseau, Condorcet, John Stuart Mill and John Dewey.
In what Jeremy Waldron has referred to as "the doctrine of the wisdom of the multitude", for instance, Aristotle offers a version of such an argument focused on democratic deliberation. This argument can be found in book III, chapter 11, of the Politics.
Aristotle also refers to the feast analogy in chapter 15.
These passages seem to suggest that group deliberation may allow for better results than can be produced by any one individual because it allows for the pooling of information, arguments, insights, and experiences. Just as people will make different contributions to a feast, they will offer varied contributions to political decision making so that the group is superior to the one. It is unclear, however, whether Aristotle meant this argument as a defence of democracy. This is because an oligarchy or aristocracy may also benefit from this collective pooling of talents when compared to the decisions of a single king or dictator. Waldron therefore suggests that while a strong interpretation of Aristotle’s wisdom of the multitude would see it as defending democratic deliberation over the deliberations of any smaller group, a weaker interpretation would see it as only rejecting rule by one.
Jean-Jacques Rousseau’s account of the general will, presented in The Social Contract, may also be interpreted as offering an epistemic argument for democratic rule.
Landemore suggests an epistemic reading of this account. According to this interpretation, Rousseau describes the idea that citizens should vote based on their judgements about whether a proposal coheres with an independent standard of correctness, such as the common interest or good, and that the outcome of a majority vote will offer the correct judgement. This is also suggested in Rousseau’s claim that those whose judgement differed from that of the majority should conclude that they were therefore mistaken. In another passage, comparing the ‘general will’ to ‘the will of all’, Rousseau also appears to appeal to the statistical benefits of aggregation and large numbers.
Rousseau’s appeal to pluses and minuses cancelling out is similar to the theory of the Miracle of Aggregation found in more recent work in epistemic democracy (discussed further below). This miracle refers to the idea that if errors are randomly distributed, then they will tend to cancel each other out when votes are aggregated, and therefore the majority decision is only influenced by correct votes. These epistemic readings of Rousseau are controversial, however, given that he can also be interpreted in more procedural terms where the general will refers simply to whatever the people want.
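The miracle of aggregation can be illustrated with a small Monte Carlo simulation. The following Python sketch is purely illustrative: the voter counts and the assumption that uninformed voters err randomly (voting correctly with probability 0.5) are stipulations of the model, not claims from the source.

```python
import random

def simulate_elections(n_informed: int, n_uninformed: int,
                       trials: int = 500, seed: int = 0) -> float:
    """Fraction of simulated two-option elections won by the correct option.

    Model assumptions (stipulated for illustration): informed voters always
    vote correctly; uninformed voters vote correctly with probability 0.5,
    so their errors are randomly distributed and tend to cancel out.
    """
    rng = random.Random(seed)
    total = n_informed + n_uninformed
    wins = 0
    for _ in range(trials):
        correct = n_informed + sum(rng.random() < 0.5 for _ in range(n_uninformed))
        if correct > total / 2:  # strict majority for the correct option
            wins += 1
    return wins / trials

# A small informed minority suffices: the random errors of the 2,000
# uninformed voters largely cancel, so the 200 informed votes decide.
print(simulate_elections(200, 2000))
```

With no informed voters the correct option wins only about half the time, while even a modest informed minority makes a correct majority nearly certain in this model.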
A clearer appeal to the benefits of large numbers can be found in Nicolas de Condorcet’s Essays on the Application of Mathematics to the Theory of Decision Making. Providing a mathematical account of the benefits of large groups to decision making, Condorcet argued that majoritarian decisions are all but certain to select the correct option on a simple yes-no decision. This claim required three assumptions to be met: (1) voters make their decisions independently of each other; (2) voters make their decisions sincerely rather than strategically; and (3) each voter has a probability of selecting the correct answer which is greater than 0.5. As long as these conditions hold, then as the voting group becomes larger, the probability that they will select the right answer moves towards 1. While Condorcet developed this theorem with the aim of determining the optimal size of a jury – hence it often being referred to as Condorcet’s jury theorem – it is easy to see how this can also be applied to the votes of a democratic electorate including all of the population rather than a subset (see contemporary discussion of the theorem below).
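Under the theorem's three assumptions, the probability that a majority vote is correct can be computed directly by summing binomial probabilities. The following Python sketch is illustrative (the function name and the example competence values are chosen for this example):

```python
from math import comb

def majority_correct_probability(n: int, p: float) -> float:
    """Probability that a majority of n independent, sincere voters is
    correct, when each is correct with probability p (n odd, so no ties)."""
    majority = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(majority, n + 1))

# With individual competence p = 0.6, larger groups approach certainty;
# with p below 0.5, the same mechanism drives the group toward error.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_probability(n, 0.6), 4))
```

The monotonic rise toward 1 as n grows mirrors Condorcet's conclusion that, given independence, sincerity, and individual competence above 0.5, large electorates are all but certain to select the correct option.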
Another key figure in the historical tradition which informs work on epistemic democracy is the American pragmatist John Dewey. Dewey’s democratic theory offers, in part, a view of democracy as a process of experimentation and inquiry. Rather than seeing it as synonymous with formal political institutions, Dewey instead thought of democracy as a ‘mode of associated living’ which occurred when people came together to identify and solve their collective problems. Democracy provides a process where common sets of problems and interests can be clarified, and solutions debated. Dewey therefore thought that an exclusive focus on majority rule was misplaced, and that the value of democracy came from the preceding public discussion and debates where experiences and values could be expressed, and minorities could voice their opposition. It is through this collective process of inquiry that social intelligence could emerge. This process had to include the public as they had particular knowledge of where social problems occurred given that they experience them directly. As Dewey famously put it, ‘the man who wears the shoe knows best that it pinches’. A properly functioning democracy still required an educated population, however, and Dewey therefore placed much emphasis on educative reforms aimed at improving citizen competence.
Procedure-Independent Standards
Contemporary work in epistemic democracy can be broadly separated into two general categories. The first is more foundationalist work concerned with determining the importance and role of independent standards of correctness in any justification of democracy. Such work is therefore concerned with showing why a consideration of such standards is necessary and how best to incorporate them into a democratic theory. The second category, considered in following sections, is work focused on the more practical task of showing why democratic institutions can in fact track these independent standards, or why we should have confidence that they will make better decisions than non-democratic alternatives.
In contemporary debates David Estlund is often credited with offering the most developed defence of the importance of procedure-independent standards to the justification of democracy. Estlund argues that most defences of democracy either implicitly assume such standards or remain too weak to justify democratic rule. One of his most influential arguments is that if we were to care only for issues of procedural fairness, then we should be as happy with deciding political issues through the flip of a coin as we are through democratic procedures of majority rule. A random procedure, such as a coin flip or the roll of a die, gives equal weight to the preferences of citizens and would therefore appear to be as procedurally fair as a democratic vote. Estlund therefore argues that if we prefer a democratic vote to a purely random decision, then it must be because we expect it to make the correct decisions with greater reliability than chance.
Estlund is careful to distinguish his epistemic view from what he calls a ‘correctness theory’ of democracy. According to this theory, a political decision would be legitimate only in the case where it is correct. Any democratic decision which happened to be incorrect would therefore be illegitimate. Instead of a correctness theory, Estlund claims that epistemic democrats can see democracy as legitimate because of its ability to produce good decisions over time.
José Luis Martí, alternatively, argues that our normal practices of democratic debate and deliberation tend to implicitly assume some independent standards of correctness. He suggests that to argue in favour of a certain decision, say policy A, is to aim to show that policy A is the right decision, or that it is better in terms of rightness than policy B, C or D. To engage in political deliberation about competing policies is therefore said to require one to appeal to some standard of correctness other than the political procedure itself. If the only measure of a decision were the decision-making procedure itself, then there could be no argument or reason for making any particular decision, as making arguments and giving reasons means appealing to some standard independent of the process and at least somewhat independent of the participants’ beliefs and desires.
What, however, are the independent standards which define the quality of political decisions? Epistemic democracy as a broad position allows for a number of answers to this question. The view does not, as some may think, require an endorsement of moral realism about the existence of objective moral facts. Although these standards are independent of the actual political decision procedure, they may be dependent on other things. For instance, they may be dependent on an idealised procedure – such as John Rawls’ original position or Jürgen Habermas’ ideal speech situation – or on the norms and practices of a particular community. Nor are epistemic democrats committed to any crude form of consequentialism, as the independent standards may themselves involve deontological or virtue constraints, such as respect for basic human rights. Epistemic democracy is consistent with many metaethical positions.
A couple of strategies are therefore open to epistemic democrats when considering the role of procedure-independent standards. The first is to specify a particular independent standard. For instance, they could define the standard as the maximisation of happiness or an equality of welfare, and then look to see if democratic institutions meet this standard. Alternatively, they could define the standard as the avoidance of certain bad outcomes, such as wars or famines. The second strategy is to remain ignorant or agnostic on the standards which define a good decision, and instead look for those procedures which can discover correct answers, whatever they may be. Democratic procedures would therefore be like the institutions of science. We value the scientific method not because we know the correct answer in advance, but rather because we have confidence that it will be able to discover the correct answer when followed.
Most epistemic democrats endorse the second strategy. The first faces the problem that people often reasonably disagree over how to define the procedure-independent standards of correctness, and it therefore risks making the justification of democracy dependent on a controversial account of justice or the common good. It also appears problematic from a democratic point of view because it suggests an account of good political decisions can be determined independently of democratic procedures. It may therefore be objected that such questions should themselves be decided by democratic means.
The second strategy has itself come under criticism. Sean Ingham, for instance, argues that if democratic procedures could discover the correct answer to political questions, then they would seem to bring an implausibly swift end to our deeply held and persistent disagreements. Imagine, for instance, that we had a democratic procedure which had a 99% chance of selecting the correct answer. Given that this procedure is near infallible, its result would provide such strong evidence in favour of one option that we would all have to accept it as true, even if we disagreed beforehand. While a 99% reliability may be implausible, Ingham points out that even if the democratic procedure only had a reliability of just over half, then running it multiple times would provide sufficient evidence for us to change our minds. If democracy can track the truth of political questions it would therefore seem to suggest that we can easily settle our long-held disagreements over what counts as a just political decision. Although this argument does not reject the existence of independent standards, it does suggest that their acceptance may be in tension with the idea that there is deep and reasonable political disagreement in society.
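Ingham’s point can be made concrete with a simple Bayesian calculation. In the hypothetical sketch below, a procedure with reliability just over half is run repeatedly on the same binary question; the posterior probability of the more frequently chosen option grows quickly (the numbers are illustrative assumptions, not Ingham’s own):

```python
def posterior(k, n, reliability, prior=0.5):
    """Posterior probability that option A is correct, given that a
    procedure with the stated reliability selected A in k of n
    independent runs (binary question, flat prior by default)."""
    like_a = (reliability ** k) * ((1 - reliability) ** (n - k))
    like_b = ((1 - reliability) ** k) * (reliability ** (n - k))
    return prior * like_a / (prior * like_a + (1 - prior) * like_b)

# A single run of a 55%-reliable procedure barely moves our beliefs...
print(round(posterior(1, 1, 0.55), 3))
# ...but 70 wins for A out of 100 runs is already overwhelming evidence.
print(round(posterior(70, 100, 0.55), 5))
```

On these assumptions, repeated runs of even a barely reliable procedure would rationally compel agreement, which is the tension with persistent disagreement that Ingham highlights.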
Models of Epistemic Democracy
While some theories of epistemic democracy focus on explaining the importance and role of epistemic considerations, others look to explain why it is that democratic procedures can be thought to make good or correct decisions. Some epistemic democrats merely wish to show that democracy can make these decisions with some level of reliability, but others go further in arguing that democracy will tend to make better decisions than any non-democratic alternative. In other words, they aim to show that decisions taken by the many are superior to those taken by the few. Prominent epistemic arguments for democracy within contemporary political science and democratic theory include the following:
Jury Theorems: While first proposed by Nicolas de Condorcet in 1785, discussion of jury theorems and their connection to democracy has continued into contemporary discussion. The original theorem stated that a choice between two options is best made by a large group if: (1) voters make their decisions independently of each other; (2) voters make their decisions sincerely rather than strategically; and (3) each voter has a probability of selecting the correct answer which is greater than 0.5. Under these conditions the probability of the correct option being selected tends towards 1 as the size of the voting group increases. This means that a democratic vote which includes the whole of the demos will be more reliable than any non-democratic vote which includes a smaller number of voters. Contemporary work on the jury theorem has aimed to relax these assumptions. The theorem has, for instance, been applied to cases of plural voting with multiple options, where voters have weakly correlated votes, and where they make their minds up autonomously instead of fully independently.
In a recent study that employed measure-theoretic techniques to explore the probability of the theorem's thesis in various settings, it was discovered that the competence of majority rule as a decision procedure is heavily reliant on the probability measure governing voter competence. The thesis predicted by Condorcet's Jury Theorem either occurs almost surely or almost never. Notably, the prior probability of the theorem's thesis is zero. Moreover, under specific circumstances, the theorem's opposite outcome holds true, leading to the wrong option being chosen almost surely. Consequently, invoking this theorem necessitates further examination to better comprehend its applicability.
An immediate issue for the jury theorem is the question of the selection of alternative options and how to guarantee that the correct option is offered to voters in the first place. It therefore seems to require another procedure which can effectively narrow down the options. Objections have also been made to the relevance of the main assumptions of the theorem to the political context, where voters often engage in debate with others, have different motivations for voting, and often confront complex political problems.
Miracle of Aggregation: Like the jury theorem, the miracle of aggregation also appeals to the benefits of large numbers to defend democratic voting but focuses on the tendency of incorrect votes to cancel each other out. Those voters who are informed will tend to vote for the correct policy, while those who are ignorant will vote randomly. The votes of ignorant voters will therefore be distributed evenly among the available options. Once all votes are aggregated, the votes of uninformed voters will cancel each other out, and the votes of informed voters will decide the outcome. A key assumption of this argument, however, is that those with little or no information will tend to vote randomly. It has therefore been objected that poorly informed voters may in fact make systematic errors in a certain direction, and that their votes will not be completely cancelled out. An important question therefore concerns whether any systematic errors will be large enough to outnumber the more informed voters and so influence the final outcome.
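A small Monte Carlo sketch illustrates the cancellation argument (the electorate size and the 10% informed share are illustrative assumptions):

```python
import random

def correct_option_wins(n_voters, informed_share, rng):
    """One two-option election: informed voters back the correct option,
    everyone else flips a fair coin. True if the correct option wins."""
    votes_correct = 0
    for _ in range(n_voters):
        if rng.random() < informed_share:
            votes_correct += 1            # informed voter: always right
        elif rng.random() < 0.5:
            votes_correct += 1            # ignorant voter: coin flip
    return votes_correct * 2 > n_voters  # strict majority (n odd, no ties)

rng = random.Random(42)
trials = 300
# With only 10% informed voters, the random errors of the other 90%
# cancel out and the informed minority decides the outcome.
wins = sum(correct_option_wins(10_001, 0.10, rng) for _ in range(trials))
print(wins / trials)
```

The objection in the paragraph above corresponds to replacing the coin flip with a biased draw: if ignorant voters lean systematically towards the wrong option, the cancellation fails and the aggregate can be reliably wrong.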
In relation to the findings of the measure-theoretic study mentioned above for jury theorems, it has been established that even though, almost surely, at least some proportion of voters is well informed or nearly well informed, the miracle of aggregation does not occur almost surely under a prior probability measure.
Diversity Trumps Ability: The diversity trumps ability theorem was first developed by Lu Hong and Scott Page, but most prominently applied to epistemic democracy by Hélène Landemore. Unlike the previous two arguments which focused on voting, this argument applies to deliberation. According to the theorem, a random selection of cognitively diverse problem solvers can outperform a group of high ability problem solvers if four conditions are met. These are that: (1) the problem is difficult enough; (2) the problem solvers are relatively smart; (3) problem solvers think differently from each other but can still recognise the best solution; and (4) the population from which problem solvers are taken is large and the number selected is not too small. The idea behind the theorem is that a group of high ability individuals will tend to think in similar ways and will therefore converge on a common local optimum. A group of cognitively diverse problem solvers, alternatively, will think very differently and will therefore be able to guide one another past the local optima and to the global optimum. Landemore has then argued that the theorem supports the epistemic quality of democratic deliberation involving all citizens over the deliberation of any subset of the demos. The epistemic benefit of democratic deliberation is therefore its ability to draw on the cognitive diversity of the demos.
The application of the Diversity Trumps Ability theorem by Hong, Page, and Landemore in the context of epistemic democracy has been met with substantial skepticism from various quarters of the academic community. Among these skeptics, Thompson's critique stands out, asserting that the theorem's mathematical underpinnings do little more than dress up a rather trivial fact. This criticism posits that the theorem, despite its apparent mathematical rigor, may in fact offer nothing more than a formalized iteration of its preconceived hypotheses. The core of the contention is that the theorem's mathematical formulation serves to obscure rather than clarify the straightforward nature of the relationship between diversity and problem-solving efficacy.
Further, the critics highlight that when the theorem is stripped of its mathematical trappings, it reveals its inherent triviality. It is suggested that the theorem's conditions and implications are so narrowly defined as to be unrealistic when applied to the workings of actual democratic deliberation. By reframing the theorem with more realistic assumptions, these analyses expose the limitations of the original claims, often concluding that, contrary to the theorem’s intentions, ability can indeed eclipse diversity. This rigorous scrutiny aims to demystify the purported complexity of the theorem, warning against its uncritical adoption in socio-political theories and advocating for a more discerning use of mathematics in the exploration of democratic processes.
Many objections to the use of the diversity trumps ability theorem focus on assumption (3). Sometimes referred to as the oracle assumption, it requires that all problem solvers can recognise the best solution when made to think about it. Two challenges have confronted this assumption. The first is the issue of moral pluralism and the idea that participants may disagree on the best solution because they have different value commitments. The second is the issue of complexity and the idea that participants may disagree over the best solution, even if they agree on values, because political problems allow for multiple plausible interpretations. While the oracle assumption is therefore controversial, it has been argued that cognitive diversity may still have value even if all deliberators cannot recognise the best solution, although it becomes unclear in such cases whether such diversity will always trump ability.
Experimental Models: Experimental models are inspired by the work of John Dewey and were introduced into the contemporary debate by Elizabeth Anderson. Rather than rely on formal theorems as the previous arguments do, the experimental model instead sees the democratic institutions of regular elections as allowing for a process of trial-and-error learning. The idea is that when democratic governments enact a new policy, citizens will directly experience its results. Elections, petitions, and protests then give these citizens the opportunity to communicate their experiences back to policy makers, who can then use this information to reform the policy. Democratic procedures therefore offer important feedback mechanisms which allow policy makers to update and improve their policies over time. Anderson also argues that these procedures need to be inclusive so that all possible feedback can be considered. Differently situated citizens will have different experiences of social problems and public policies, and therefore an open and inclusive political process is required to make sure all of these distinctive contributions can be taken into account.
One issue concerning this model is the quality of the feedback signals provided by democratic elections, their frequency, and the extent to which they may be affected by such things as voter ignorance or irrationality. There has also been debate over the extent to which Anderson’s Dewey-inspired model is consistent with Dewey’s own democratic theory.
Epistemic Deliberative Democracy: A range of related arguments for the epistemic value of democracy can be found in the work of deliberative democrats. Deliberative democracy refers to a conception of democratic politics which places emphasis on the importance of a free and open public discussion. As Simone Chambers puts it, such approaches are talk-centric rather than vote-centric. While many deliberative democrats see deliberation as intrinsically valuable, many also advocate deliberation based on its instrumental and epistemic benefits. For instance, deliberation has been argued to help achieve forms of rational agreement, to help improve people’s understanding of their own preferences and of social problems, and to improve citizen knowledge and beliefs. A large empirical literature has now developed which looks to test these claims and understand the conditions under which deliberation may produce such benefits.
Reflexivity: Although not describing themselves as epistemic democrats, Jack Knight and James Johnson offer an alternative argument based on the concept of reflexivity. They distinguish between first-order institutions which aim to directly address social problems and second-order institutions which aim to coordinate and select between institutions at the first order. The task of a second-order institution is therefore to coordinate ‘the ongoing process of selecting, implementing and maintaining effective institutional arrangements’, and to sustain an ‘experimental environment that can enhance our knowledge’ of when institutions produce good consequences. Democracy is then said to have priority at the second order because it operates in a reflexive manner. The reflexivity of democratic arrangements derives from the fact that they require ‘relevant parties to assert, defend, and revise their own views and to entertain, challenge, or accept those of others. It derives, in other words, from ongoing disagreement and conflict’. There has, however, been debate over whether democracy best provides this kind of reflexivity. Some authors, for instance, have claimed that reflexivity is more likely to be achieved by decentralised processes, such as systems of polycentricity or markets.
Alternatives to Democracy
Given its aim of defending democracy on epistemic grounds, work on epistemic democracy is closely connected with work which advocates for alternative political institutions for epistemic reasons. In the contemporary debate, four alternatives or part alternatives have received most discussion.
Epistocracy: Epistocracy refers to a political system based on ‘rule by the knowers’ rather than ‘rule by the people’. While the term was coined by David Estlund, its most prominent advocate has been Jason Brennan who suggests a range of alternative models of epistocracy. These include proposals for excluding the least knowledgeable voters from the electoral franchise, granting more informed voters more votes than other citizens, or the establishment of an epistocratic veto where a body of knowledgeable members would be able to veto any legislation coming from a democratically elected parliament. Brennan’s argument for epistocracy draws heavily on political science studies which are reported to show the political ignorance of many citizens. Based on such work, he argues that uninformed voters subject other members of society to an undue level of risk and therefore violate their supposed right to a competent government.
Although we cannot assume a priori that the Condorcet Jury Theorem or the Miracle of Aggregation will hold true (see previous section), it has been proposed that by adding epistemic weights to the decision-making process, these theorems can indeed be upheld. By implementing a weighted majority rule based on stochastic weights correlated with epistemic rationality and guaranteeing a minimal weight equal to one for every voter, a more competent information aggregation mechanism can be achieved. This approach of incorporating epistemic weights while ensuring all votes count, albeit not in the same proportion, addresses potential concerns about disrespect or exclusion in the democratic process. By guaranteeing a minimal weight for every voter, the semiotic objections based on the expressive value of democracy are mitigated. In essence, this method strikes a balance between promoting competent decision-making and preserving the inclusive nature of the democratic process.
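The weighted scheme described above can be sketched as follows. All numbers here (competences, weights, electorate size) are hypothetical illustrations of the core idea: bounded extra weight for more competent voters can raise group reliability while every ballot still counts at least once.

```python
import random

def weighted_majority_correct(competences, weights, rng):
    """One binary decision: voter i backs the correct option with
    probability competences[i] and casts weights[i] votes (all >= 1).
    True if the correct option carries a weighted majority."""
    correct = sum(w for c, w in zip(competences, weights)
                  if rng.random() < c)
    return correct * 2 > sum(weights)

rng = random.Random(7)
competences = [0.55] * 90 + [0.80] * 10   # a more competent minority
plain_weights = [1] * 100                  # one person, one vote
epistemic_weights = [1] * 90 + [3] * 10    # minimal weight of 1 for all

trials = 2000
plain = sum(weighted_majority_correct(competences, plain_weights, rng)
            for _ in range(trials))
boosted = sum(weighted_majority_correct(competences, epistemic_weights, rng)
              for _ in range(trials))
print(plain / trials, boosted / trials)
```

Because no weight falls below one, every voter's ballot still contributes to the outcome, which is what is meant to blunt the semiotic objection about expressive exclusion.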
Epistocrats have been subject to a range of critiques, however. They have been argued to offer an incomplete and overly pessimistic reading of the empirical literature on voter competence, to rely too heavily on rational choice theory, to not give significant attention to potential democratic reforms, and to underestimate the dangers involved in political exclusion and the empowerment of a knowledgeable minority.
Political Meritocracy: Political meritocracy refers to a political system where leaders and officials are selected, at least in part, on their political abilities. Advocates of such systems, such as Daniel Bell and Tongdong Bai, argue that political leaders require intellectual and academic abilities, effective social skills and emotional intelligence, as well as ethical virtues. They should therefore ideally have exceptional academic qualifications, knowledge of the social sciences, records of good performance in government, and training in ethical philosophy. Political meritocrats then argue that elections are unlikely to select for these kinds of qualities and that political officials should therefore be appointed based on rigorous processes of examination and their record of service at lower levels of government. These authors often look to China and Singapore as imperfect contemporary examples of political meritocracy, and commonly draw on Confucian philosophy in defending the value of meritocratic procedures.
Political meritocracy has come under similar criticism to epistocracy: it is said to offer an overly pessimistic account of voter competence, to overestimate the ability of meritocratic procedures to select more able and virtuous leaders, and to underestimate the dangers of removing democratic elections and leaving political officials unaccountable to the public. The claim that Confucian philosophy supports political meritocracy is also controversial, with many authors defending versions of Confucian democracy.
Political decentralisation: While epistocracy and political meritocracy may offer full alternatives to democracy, some critical of democracy’s epistemic value have instead argued for greater forms of decentralisation. Ilya Somin, for instance, argues that democratic voters have little incentive to become informed about political matters, as their one vote among thousands is very unlikely to affect the outcome. This is the problem of rational ignorance, most associated with Anthony Downs. Somin therefore advocates for systems of political decentralisation, such as federalism, which would allow greater opportunities for exit. Unlike decisions on how to vote, decisions on which political jurisdiction to live in have significant consequences for individuals and therefore provide them with an incentive to get informed. While political decentralisation does not offer a complete alternative to democracy, it is thought to help increase the epistemic quality of government by supplementing mechanisms of voice with exit. Like supporters of epistocracy, however, Somin has been criticised for relying too greatly on rational choice models, for underestimating the competence of voters, and for underestimating the costs associated with exit.
Markets: There is also a free-market tradition sceptical of the epistemic quality of democracy which is most often associated with Friedrich Hayek. Hayek argued that any centralised political authority could not possibly acquire all the knowledge relevant for effective social decisions. A key reason for this was the importance he placed on local forms of knowledge about the particular circumstances of time and space. This knowledge was argued to be known only to individuals on the spot, to be open to change over time, and to often involve a tacit component. Hayek therefore claimed that the best way to utilise this dispersed information was not to attempt to centralise it in any government authority, but to allow individuals to make the best use of their own information and to rely on a system of market prices to coordinate their individual actions. Hayekian authors, such as Mark Pennington, have therefore argued that there are important epistemic advantages to systems based on price signals rather than voters and deliberation, and advocate for expanding the role of markets into otherwise political domains.
These positions have been criticised on the basis that markets cannot provide many of the functions of democratic governments and on their optimistic view of the functioning of market institutions. Jonathan Benson has also argued that consumers often lack the kinds of information needed to make ethically informed decisions in the marketplace and that markets will often fail to achieve ethical values for epistemic reasons. Instead, he argues that ethical regulation is better provided by forms of political democracy which have a greater capacity to centralise relevant information, including local knowledge and tacit knowledge.
Selected Bibliography
A list of book-length works on or closely related to epistemic democracy:
Bell, D. A. (2016). The China model: Political meritocracy and the limits of democracy. Princeton University Press.
Brennan, J. (2016). Against Democracy. Princeton University Press.
Brennan, J. and H. Landemore (2022). Debating democracy: do we need more or less? Oxford University Press.
Caplan, B. (2011). The Myth of the Rational Voter: Why Democracies Choose Bad Policies. Princeton University Press.
Erikson, R. S., M. B. MacKuen and J. A. Stimson (2002). The Macro Polity. Cambridge University Press.
Estlund, D. (2009). Democratic Authority: A philosophical framework. Princeton University Press.
Fishkin, J. S. (2018). Democracy when the people are thinking: Revitalizing our politics through public deliberation. Oxford University Press.
Goodin, R. E. and K. Spiekermann (2018). An Epistemic Theory of Democracy. Oxford University Press.
Knight, J. and J. Johnson (2011). The Priority of Democracy: Political consequences of pragmatism. Princeton University Press.
Landemore, H. (2013). Democratic Reason: Politics, collective intelligence, and the rule of the many. Princeton University Press.
Misak, C. (2002). Truth, Politics, Morality: Pragmatism and deliberation. Routledge.
Ober, J. (2008). Democracy and Knowledge: Innovation and learning in classical Athens. Princeton University Press.
Somin, I. (2016). Democracy and Political Ignorance: Why smaller government is smarter. Stanford University Press.
Talisse, R. B. (2013). A pragmatist philosophy of democracy. Routledge.
See also
Collective intelligence
Deliberative democracy
Democracy
Pragmatism
Jury Theorem
References
Social epistemology
Political theories
Types of democracy | 0.798574 | 0.973881 | 0.777716 |
Discourse | Discourse is a generalization of the notion of a conversation to any form of communication. Discourse is a major topic in social theory, with work spanning fields such as sociology, anthropology, continental philosophy, and discourse analysis. Following pioneering work by Michel Foucault, these fields view discourse as a system of thought, knowledge, or communication that constructs our world experience. Since control of discourse amounts to control of how the world is perceived, social theory often studies discourse as a window into power. Within theoretical linguistics, discourse is understood more narrowly as linguistic information exchange and was one of the major motivations for the framework of dynamic semantics. In these expressions, ' denotations are equated with their ability to update a discourse context.
Social theory
In the humanities and social sciences, discourse describes a formal way of thinking that can be expressed through language. Discourse is a social boundary that defines what statements can be said about a topic. Many definitions of discourse are primarily derived from the work of French philosopher Michel Foucault. In sociology, discourse is defined as "any practice (found in a wide range of forms) by which individuals imbue reality with meaning".
Political science sees discourse as closely linked to politics and policy making. Likewise, different theories among various disciplines understand discourse as linked to power and state, insofar as the control of discourses is understood as a hold on reality itself (e.g. if a state controls the media, they control the "truth"). In essence, discourse is inescapable, since any use of language will have an effect on individual perspectives. In other words, the chosen discourse provides the vocabulary, expressions, or style needed to communicate. For example, two notably distinct discourses can be used about various guerrilla movements, describing them either as "freedom fighters" or "terrorists".
In psychology, discourses are embedded in different rhetorical genres and meta-genres that constrain and enable them—language talking about language. This is exemplified in the APA's Diagnostic and Statistical Manual of Mental Disorders, which tells of the terms that have to be used in speaking about mental health, thereby mediating meanings and dictating practices of professionals in psychology and psychiatry.
Modernism
Modernist theorists focused on achieving progress and believed in natural and social laws that could be used universally to develop knowledge and, thus, a better understanding of society. Such theorists would be preoccupied with obtaining the "truth" and "reality", seeking to develop theories which contained certainty and predictability. Modernist theorists therefore understood discourse to be functional. Discourse and language transformations are ascribed to progress or the need to develop new or more "accurate" words to describe discoveries, understandings, or areas of interest. In modernist theory, language and discourse are dissociated from power and ideology and instead conceptualized as "natural" products of common sense usage or progress. Modernism further gave rise to the liberal discourses of rights, equality, freedom, and justice; however, this rhetoric masked substantive inequality and failed to account for differences, according to Regnier.
Structuralism (Saussure & Lacan)
Structuralist theorists, such as Ferdinand de Saussure and Jacques Lacan, argue that all human actions and social formations are related to language and can be understood as systems of related elements. This means that the "individual elements of a system only have significance when considered in relation to the structure as a whole, and that structures are to be understood as self-contained, self-regulated, and self-transforming entities". In other words, it is the structure itself that determines the significance, meaning, and function of the individual elements of a system. Structuralism has contributed to our understanding of language and social systems. Saussure's theory of language highlights the decisive role of meaning and signification in structuring human life more generally.
Poststructuralism (Foucault)
Following the perceived limitations of the modern era, postmodern theory emerged. Postmodern theorists rejected modernist claims that there was one theoretical approach that explained all aspects of society. Rather, postmodernist theorists were interested in examining the variety of experiences of individuals and groups and emphasized differences over similarities and shared experiences.
In contrast to modernist theory, postmodern theory is pessimistic regarding universal truths and realities. Hence, it has attempted to be fluid, allowing for individual differences as it rejects the notion of social laws. Postmodern theorists shifted away from truth-seeking and sought answers to how truths are produced and sustained. Postmodernists contended that truth and knowledge are plural, contextual, and historically produced through discourses. Postmodern researchers, therefore, embarked on analyzing discourses such as texts, language, policies, and practices.
Foucault
In the works of the philosopher Michel Foucault, a discourse is "an entity of sequences, of signs, in that they are enouncements (énoncés)." The enouncement (l’énoncé, "the statement") is a linguistic construct that allows the writer and the speaker to assign meaning to words and to communicate repeatable semantic relations to, between, and among the statements, objects, or subjects of the discourse. Internal ties exist between the signs (semiotic sequences). The term discursive formation identifies and describes written and spoken statements with semantic relations that produce discourses. As a researcher, Foucault applied the discursive formation to analyses of large bodies of knowledge, e.g. political economy and natural history.
In The Archaeology of Knowledge (1969), a treatise about the methodology and historiography of systems of thought ("epistemes") and knowledge ("discursive formations"), Michel Foucault developed the concepts of discourse. The sociologist Iara Lessa summarizes Foucault's definition of discourse as "systems of thoughts composed of ideas, attitudes, courses of action, beliefs, and practices that systematically construct the subjects and the worlds of which they speak." Foucault traces the role of discourse in the legitimation of society's power to construct contemporary truths, to maintain said truths, and to determine what relations of power exist among the constructed truths; therefore discourse is a communications medium through which power relations produce men and women who can speak.
The interrelation between power and knowledge renders every human relationship into a power negotiation, because power is always present and so produces and constrains the truth. Power is exercised through rules of exclusion (discourses) that determine which subjects people can discuss; when, where, and how a person may speak; and which persons are allowed to speak. Because knowledge is both the creator of power and the creation of power, Foucault coined the term "power/knowledge" to show that it is "an abstract force which determines what will be known, rather than assuming that individual thinkers develop ideas and knowledge."
Interdiscourse studies the external semantic relations among discourses, as discourses exist in relation to other discourses.
Discourse analysis
There is more than one type of discourse analysis, and the definition of "discourse" shifts slightly between types. Generally speaking, discourse analyses can be divided into those concerned with "little d" discourse and "big D" Discourse. The former ("little d") refers to language-in-use, such as spoken communication; the latter ("big D") refers to sociopolitical discourses (language plus social and cultural contexts).
Common forms of discourse analysis include:
Critical discourse analysis
Conversation analysis
Foucauldian discourse analysis
Genre analysis
Narrative analysis
Formal semantics and pragmatics
In formal semantics and pragmatics, discourse is often viewed as the process of refining the information in a common ground. In some theories of semantics, such as discourse representation theory, sentences' denotations themselves are equated with functions that update a common ground.
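The common-ground picture described above can be sketched as a set of possible worlds that each assertion narrows by intersection, in the spirit of context-update theories. This is a minimal illustration only; the atomic facts and sentences below are invented for the example, not drawn from any particular semantic theory's formalism.

```python
from itertools import product

# A "world" is a complete assignment of truth values to atomic facts.
ATOMS = ["raining", "cold"]
worlds = [dict(zip(ATOMS, values))
          for values in product([True, False], repeat=len(ATOMS))]

# The common ground starts as the set of all worlds compatible with what
# the participants mutually take for granted (here: everything).
common_ground = list(worlds)

def assert_sentence(ground, proposition):
    """An assertion denotes a context change: keep only the worlds
    in which the asserted proposition is true."""
    return [w for w in ground if proposition(w)]

# Asserting "It is raining" narrows the common ground...
common_ground = assert_sentence(common_ground, lambda w: w["raining"])
# ...and asserting "It is not cold" narrows it further.
common_ground = assert_sentence(common_ground, lambda w: not w["cold"])

print(len(worlds), len(common_ground))  # 4 worlds initially, 1 remains
```

On this picture, a sentence's denotation can be identified with the update function itself (here, `assert_sentence` partially applied to a proposition), which is the move made in dynamic frameworks such as discourse representation theory.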
See also
References
Further reading
— (1980). "Two Lectures", in Power/Knowledge: Selected Interviews, edited by C. Gordon. New York: Pantheon Books.
Howard, Harry. (2017). "Discourse 2." Brain and Language, Tulane University. [PowerPoint slides].
External links
DiscourseNet, an international association for discourse studies.
Beyond open access: open discourse, the next great equalizer, Retrovirology 2006, 3:55
Discourse (Lun) in the Chinese tradition
Discourse analysis
Semantics
Sociolinguistics
Anthropology
Concepts in social philosophy
Debating
Philosophy of biology
The philosophy of biology is a subfield of philosophy of science, which deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, and Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s, associated with the research of David Hull. Philosophers of science then began paying increasing attention to biology, from the rise of Neodarwinism in the 1930s and 1940s to the discovery of the structure of DNA in 1953 to more recent advances in genetic engineering.
Other key ideas include the reduction of all life processes to biochemical reactions, and the incorporation of psychology into a broader neuroscience.
Overview
Philosophers of biology examine the practices, theories, and concepts of biologists with a view toward better understanding biology as a scientific discipline (or group of scientific fields). Scientific ideas are philosophically analyzed and their consequences are explored. Philosophers of biology have also explored how our understanding of biology relates to epistemology, ethics, aesthetics, and metaphysics and whether progress in biology should compel modern societies to rethink traditional values concerning all aspects of human life. It is sometimes difficult to separate the philosophy of biology from theoretical biology. Questions examined in the field include:
"What is a biological species?"
"What is natural selection, and how does it operate in nature?"
"How should we distinguish disease states from non-disease states?"
"What is life?"
"What makes humans uniquely human?"
"What is the basis of moral thinking?"
"Is biological materialism & deterministic molecular biology compatible with free will?"
"How is rationality possible, given our biological origins?"
"Is evolution compatible with Christianity or other religious systems?"
"Are there laws of biology like the laws of physics?"
Ideas drawn from philosophical ontology and logic are being used by biologists in the domain of bioinformatics. Ontologies such as the Gene Ontology are being used to annotate the results of biological experiments in model organisms in order to create logically tractable bodies of data for reasoning and search. The ontologies are species-neutral graph-theoretical representations of biological types joined together by formally defined relations.
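The graph-based representation described above can be sketched as a toy directed graph of formally defined relations, where annotating a gene to a specific type logically entails annotation to every more general ancestor type. The terms below are illustrative stand-ins, not actual Gene Ontology entries.

```python
# A toy ontology: each biological type points to its more general parent
# via an is_a relation (the Gene Ontology also uses part_of and others).
IS_A = {
    "glucose metabolic process": "carbohydrate metabolic process",
    "carbohydrate metabolic process": "metabolic process",
    "metabolic process": "biological_process",
}

def ancestors(term, is_a=IS_A):
    """Walk the is_a edges upward, collecting every more general type."""
    result = []
    while term in is_a:
        term = is_a[term]
        result.append(term)
    return result

# An annotation to a specific term is searchable at every level of
# generality, which is what makes such bodies of data logically tractable.
print(ancestors("glucose metabolic process"))
```

A query for all genes annotated to "metabolic process" can then be answered by reasoning over the graph, rather than by requiring each experiment to have used the same level of description.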
Philosophy of biology has become a visible, well-organized discipline, with its own journals, conferences, and professional organizations. The largest of the latter is the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB).
Biological laws and autonomy of biology
A prominent question in the philosophy of biology is whether biology can be reduced to lower-level sciences such as chemistry and physics. Materialism is the view that every biological system including organisms consists of nothing except the interactions of molecules; it is opposed to vitalism. As a methodology, reduction would mean that biological systems should be studied at the level of chemistry and molecules. In terms of epistemology, reduction means that knowledge of biological processes can be reduced to knowledge of lower-level processes, a controversial claim.
Holism in science is the view that emphasizes higher-level processes, phenomena at a larger level that occur due to the pattern of interactions between the elements of a system over time. For example, to explain why one species of finch survives a drought while others die out, the holistic method looks at the entire ecosystem. Reducing an ecosystem to its parts in this case would be less effective at explaining overall behavior (in this case, the decrease in biodiversity). As individual organisms must be understood in the context of their ecosystems, holists argue, so must lower-level biological processes be understood in the broader context of the living organism in which they take part. Proponents of this view cite our growing understanding of the multidirectional and multilayered nature of gene modulation (including epigenetic changes) as an area where a reductionist view is inadequate for full explanatory power.
All processes in organisms obey physical laws, but some argue that the difference between inanimate and biological processes is that the organisation of biological properties is subject to control by coded information. This has led biologists and philosophers such as Ernst Mayr and David Hull to return to the strictly philosophical reflections of Charles Darwin to resolve some of the problems which confronted them when they tried to employ a philosophy of science derived from classical physics. The old positivist approach used in physics emphasised a strict determinism and led to the discovery of universally applicable laws, testable in the course of experiment. It was difficult for biology to use this approach. Standard philosophy of science seemed to leave out a lot of what characterised living organisms - namely, a historical component in the form of an inherited genotype.
Philosophers of biology have also examined the notion of teleology in biology. Some have argued that scientists have had no need for a notion of cosmic teleology that can explain and predict evolution, since one was provided by Darwin. But teleological explanations relating to purpose or function have remained useful in biology, for example, in explaining the structural configuration of macromolecules and the study of co-operation in social systems. By clarifying and restricting the use of the term 'teleology' to describe and explain systems controlled strictly by genetic programmes or other physical systems, teleological questions can be framed and investigated while remaining committed to the physical nature of all underlying organic processes. While some philosophers claim that the ideas of Charles Darwin ended the last remainders of teleology in biology, the matter continues to be debated. Debates in these areas of philosophy of biology turn on how one views reductionism more generally.
Ethical implications of biology
Sharon Street claims that contemporary evolutionary biological theory creates what she calls a “Darwinian Dilemma” for realists. She argues that this is because it is unlikely that our evaluative judgements about morality are tracking anything true about the world. Rather, she says, it is likely that moral judgements and intuitions that promote our reproductive fitness were selected for, and there is no reason to think that it is the truth of these moral intuitions which accounts for their selection. She notes that a moral intuition most people share, that someone being a close family member is a prima facie good reason to help them, happens to be an intuition likely to increase reproductive fitness, while a moral intuition almost no one has, that someone being a close family member is a reason not to help them, is likely to decrease reproductive fitness.
David Copp responded to Street by arguing that realists can avoid this so-called dilemma by accepting what he calls a “quasi-tracking” position. Copp explains that what he means by quasi-tracking is that it is likely that moral positions in a given society would have evolved to be at least somewhat close to the truth. He justifies this by appealing to the claim that the purpose of morality is to allow a society to meet certain basic needs, such as social stability, and a society with a successful moral code would be better at doing this.
Other perspectives
One perspective on the philosophy of biology is how developments in modern biological research and biotechnologies have influenced traditional philosophical ideas about the distinction between biology and technology, as well as implications for ethics, society, and culture. An example is the work of philosopher Eugene Thacker in his book Biomedia. Building on current research in fields such as bioinformatics and biocomputing, as well as on work in the history of science (particularly the work of Georges Canguilhem, Lily E. Kay, and Hans-Jörg Rheinberger), Thacker defines biomedia as entailing "the informatic recontextualization of biological components and processes, for ends that may be medical or non-medical...biomedia continuously make the dual demand that information materialize itself as gene or protein compounds. This point cannot be overstated: biomedia depend upon an understanding of biological as informational but not immaterial."
Some approaches to the philosophy of biology incorporate perspectives from science studies and/or science and technology studies, anthropology, sociology of science, and political economy. This includes work by scholars such as Melinda Cooper, Luciana Parisi, Paul Rabinow, Nikolas Rose, and Catherine Waldby.
Philosophy of biology was historically associated very closely with theoretical evolutionary biology, but more recently there have been more diverse movements, such as to examine molecular biology.
Scientific discovery process
Research in biology continues to be less guided by theory than it is in other sciences. This is especially the case in the different "-omics" fields, such as genomics, where the availability of high-throughput screening techniques and the complexity of the resulting data make research predominantly data-driven. Such data-intensive scientific discovery is by some considered to be the fourth paradigm, after empiricism, theory and computer simulation. Others reject the idea that data-driven research is about to replace theory. As Krakauer et al. put it: "machine learning is a powerful means of preprocessing data in preparation for mechanistic theory building, but should not be considered the final goal of a scientific inquiry." In regard to cancer biology, Raspe et al. state: "A better understanding of tumor biology is fundamental for extracting the relevant information from any high throughput data." The journal Science chose cancer immunotherapy as the breakthrough of 2013. According to their explanation, a lesson to be learned from the successes of cancer immunotherapy is that they emerged from decoding of basic biology.
Theory in biology is to some extent less strictly formalized than in physics. Besides 1) classic mathematical-analytical theory, as in physics, there is 2) statistics-based theory, 3) computer simulation, and 4) conceptual/verbal analysis. Dougherty and Bittner argue that for biology to progress as a science, it has to move to more rigorous mathematical modeling, or otherwise risk being "empty talk".
In tumor biology research, the characterization of cellular signaling processes has largely focused on identifying the function of individual genes and proteins. Janes showed, however, that the context-dependent nature of the signaling driving cell decisions demonstrates the need for a more systems-based approach. The lack of attention to context dependency in preclinical research is also illustrated by the observation that preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug.
The Darwinian dynamic and the origin of life
Organisms that exist today, from viruses to humans, possess a self-replicating informational molecule (genome) that is either DNA (most organisms) or RNA (as in some viruses), and such an informational molecule is likely intrinsic to life. Probably the earliest forms of life were likewise based on a self-replicating informational molecule (genome), perhaps RNA or an informational molecule more primitive than RNA or DNA. It has been argued that the evolution of order in living systems and in particular physical systems obeys a common fundamental principle that was termed the Darwinian dynamic. This principle was formulated by first considering how macroscopic order is generated in a simple non-biological system far from thermodynamic equilibrium, and subsequently extending consideration to short, replicating RNA molecules. The underlying order-generating process was concluded to be basically similar for both types of systems.
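The core of differential replication among informational molecules can be caricatured in a few lines of simulation. This is a deliberately simplified Moran-style sketch, not the formal treatment the paragraph above refers to; the replicator labels, rates, and population size are all invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Two self-replicating types with different replication rates ("fitness").
fitness = {"fast": 1.2, "slow": 1.0}
population = ["fast"] * 10 + ["slow"] * 90

for _ in range(2000):
    # One individual replicates, chosen in proportion to its fitness...
    weights = [fitness[x] for x in population]
    parent = random.choices(population, weights=weights)[0]
    # ...and its copy replaces a uniformly random individual,
    # keeping the population size constant.
    population[random.randrange(len(population))] = parent

# The faster replicator tends to dominate despite starting as a minority.
print(population.count("fast"), "of", len(population))
```

Even this caricature shows the order-generating feature the text describes: a small initial advantage in replication rate is amplified over time, so the composition of the population comes to reflect the differential dynamics rather than the initial conditions.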
Journals and professional organizations
Journals
History and Philosophy of the Life Sciences
Journal of the History of Biology
Biology & Philosophy
Biological Theory
Philosophy, Theory, and Practice in Biology
Studies in History and Philosophy of Science
Professional organizations
International Society for the History, Philosophy, and Social Studies of Biology
See also
References
External links
Philosophy of biology
Theory
A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory's assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.
In modern science, the term "theory" refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction ("falsify") of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge, in contrast to more common uses of the word "theory" that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis). Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.
Theories guide the enterprise of finding facts rather than of reaching goals, and are neutral concerning alternatives among values. A theory can be a body of knowledge, which may or may not be associated with particular explanatory models. To theorize is to develop this body of knowledge.
The word theory or "in theory" is sometimes used outside of science to refer to something which the speaker did not experience or test before. In science, this same concept is referred to as a hypothesis, and the word "hypothetically" is used both inside and outside of science. In its usage outside of science, the word "theory" is very often contrasted to "practice" (from Greek praxis, πρᾶξις) a Greek term for doing, which is opposed to theory. A "classical example" of the distinction between "theoretical" and "practical" uses the discipline of medicine: medical theory involves trying to understand the causes and nature of health and sickness, while the practical side of medicine is trying to make people healthy. These two things are related but can be independent, because it is possible to research health and sickness without curing specific patients, and it is possible to cure a patient without knowing how the cure worked.
Ancient usage
The English word theory derives from a technical term in philosophy in Ancient Greek. As an everyday word, theoria meant "looking at, viewing, beholding", but in more technical contexts it came to refer to contemplative or speculative understandings of natural things, such as those of natural philosophers, as opposed to more practical ways of knowing things, like that of skilled orators or artisans. English-speakers have used the word theory since at least the late 16th century. Modern uses of the word theory derive from the original definition, but have taken on new shades of meaning, still based on the idea of a theory as a thoughtful and rational explanation of the general nature of things.
Although it has more mundane meanings in Greek, the word apparently developed special uses early in the recorded history of the Greek language. In the book From Religion to Philosophy, Francis Cornford suggests that the Orphics used the word theoria to mean "passionate sympathetic contemplation". Pythagoras changed the word to mean "the passionless contemplation of rational, unchanging truth" of mathematical knowledge, because he considered this intellectual pursuit the way to reach the highest plane of existence. Pythagoras emphasized subduing emotions and bodily desires to help the intellect function at the higher plane of theory. Thus, it was Pythagoras who gave the word theory the specific meaning that led to the classical and modern concept of a distinction between theory (as uninvolved, neutral thinking) and practice.
Aristotle's terminology, as already mentioned, contrasts theory with praxis or practice, and this contrast exists till today. For Aristotle, both practice and theory involve thinking, but the aims are different. Theoretical contemplation considers things humans do not move or change, such as nature, so it has no human aim apart from itself and the knowledge it helps create. On the other hand, praxis involves thinking, but always with an aim to desired actions, whereby humans cause change or movement themselves for their own ends. Any human movement that involves no conscious choice and thinking could not be an example of praxis or doing.
Formality
Theories are analytical tools for understanding, explaining, and making predictions about a given subject matter. There are theories in many and varied fields of study, including the arts and sciences. A formal theory is syntactic in nature and is only meaningful when given a semantic component by applying it to some content (e.g., facts and relationships of the actual historical world as it is unfolding). Theories in various fields of study are often expressed in natural language, but can be constructed in such a way that their general form is identical to a theory as it is expressed in the formal language of mathematical logic. Theories may be expressed mathematically, symbolically, or in common language, but are generally expected to follow principles of rational thought or logic.
A theory is constructed of a set of sentences that are thought to be true statements about the subject under consideration. However, the truth of any one of these statements is always relative to the whole theory. Therefore, the same statement may be true with respect to one theory, and not true with respect to another. This is analogous, in ordinary language, to a statement such as "He is a terrible person", which cannot be judged true or false without reference to some interpretation of who "He" is and, for that matter, what a "terrible person" is under the theory.
Sometimes two theories have exactly the same explanatory power because they make the same predictions. A pair of such theories is called indistinguishable or observationally equivalent, and the choice between them reduces to convenience or philosophical preference.
The form of theories is studied formally in mathematical logic, especially in model theory. When theories are studied in mathematics, they are usually expressed in some formal language and their statements are closed under application of certain procedures called rules of inference. A special case of this, an axiomatic theory, consists of axioms (or axiom schemata) and rules of inference. A theorem is a statement that can be derived from those axioms by application of these rules of inference. Theories used in applications are abstractions of observed phenomena and the resulting theorems provide solutions to real-world problems. Obvious examples include arithmetic (abstracting concepts of number), geometry (concepts of space), and probability (concepts of randomness and likelihood).
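The structure of an axiomatic theory described above can be illustrated in miniature: a set of axioms, a single rule of inference, and the theorems obtained by closing the axioms under that rule. Representing propositions as bare strings and using only modus ponens is a deliberate simplification, not a faithful rendering of any full proof system.

```python
# Axioms: sentences accepted without derivation.
axioms = {"P", "Q"}
# Conditionals of the form "antecedent implies consequent".
conditionals = [("P", "R"), ("R", "S"), ("Q", "T"), ("X", "Y")]

def theorems(axioms, conditionals):
    """Close the axiom set under modus ponens by forward chaining:
    whenever A is derived and (A -> B) is available, derive B."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(theorems(axioms, conditionals)))  # ['P', 'Q', 'R', 'S', 'T']
```

Note that Y is not a theorem: although the conditional (X, Y) belongs to the theory, its antecedent X is never derived, mirroring the point that a theorem is only a statement reachable from the axioms by the rules of inference.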
Gödel's incompleteness theorem shows that no consistent, recursively enumerable theory (that is, one whose theorems form a recursively enumerable set) in which the concept of natural numbers can be expressed, can include all true statements about them. As a result, some domains of knowledge cannot be formalized, accurately and completely, as mathematical theories. (Here, formalizing accurately and completely means that all true propositions—and only true propositions—are derivable within the mathematical system.) This limitation, however, in no way precludes the construction of mathematical theories that formalize large bodies of scientific knowledge.
Underdetermination
A theory is underdetermined (also called indeterminacy of data to theory) if a rival, inconsistent theory is at least as consistent with the evidence. Underdetermination is an epistemological issue about the relation of evidence to conclusions.
A theory that lacks supporting evidence is generally, more properly, referred to as a hypothesis.
Intertheoretic reduction and elimination
If a new theory better explains and predicts a phenomenon than an old theory (i.e., it has more explanatory power), we are justified in believing that the newer theory describes reality more correctly. This is called an intertheoretic reduction because the terms of the old theory can be reduced to the terms of the new one. For instance, our historical understanding of sound, light, and heat has been reduced to wave compressions and rarefactions, electromagnetic waves, and molecular kinetic energy, respectively. These terms, which are identified with each other, are called intertheoretic identities. When an old and new theory are parallel in this way, we can conclude that the new one describes the same reality, only more completely.
When a new theory uses new terms that do not reduce to terms of an older theory, but rather replace them because they misrepresent reality, it is called an intertheoretic elimination. For instance, the obsolete scientific theory that put forward an understanding of heat transfer in terms of the movement of caloric fluid was eliminated when a theory of heat as energy replaced it. Also, the theory that phlogiston is a substance released from burning and rusting material was eliminated with the new understanding of the reactivity of oxygen.
Versus theorems
Theories are distinct from theorems. A theorem is derived deductively from axioms (basic assumptions) according to a formal system of rules, sometimes as an end in itself and sometimes as a first step toward being tested or applied in a concrete situation; theorems are said to be true in the sense that the conclusions of a theorem are logical consequences of the axioms. Theories are abstract and conceptual, and are supported or challenged by observations in the world. They are 'rigorously tentative', meaning that they are proposed as true and expected to satisfy careful examination to account for the possibility of faulty inference or incorrect observation. Sometimes theories are incorrect, meaning that an explicit set of observations contradicts some fundamental claim or application of the theory, but more often theories are corrected to conform to new observations, by restricting the class of phenomena the theory applies to or changing the assertions made. An example of the former is the restriction of classical mechanics to phenomena involving macroscopic length scales and particle speeds much lower than the speed of light.
Theory–practice relationship
Theory is often distinguished from practice or praxis. The question of whether theoretical models of work are relevant to work itself is of interest to scholars of professions such as medicine, engineering, law, and management.
The gap between theory and practice has been framed as a knowledge transfer problem, in which research knowledge must be translated to be applied in practice, and practitioners must be made aware of it. Academics have been criticized for not attempting to transfer the knowledge they produce to practitioners. Another framing supposes that theory and practice seek to understand different problems and model the world in different ways (using different ontologies and epistemologies). Another framing says that research does not produce theory that is relevant to practice.
In the context of management, Van de Ven and Johnson propose a form of engaged scholarship where scholars examine problems that occur in practice, in an interdisciplinary fashion, producing results that create both new practical results as well as new theoretical models, but targeting theoretical results shared in an academic fashion. They use a metaphor of "arbitrage" of ideas between disciplines, distinguishing it from collaboration.
Scientific
In science, the term "theory" refers to "a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment." Theories must also meet further requirements, such as the ability to make falsifiable predictions with consistent accuracy across a broad area of scientific inquiry, and production of strong evidence in favor of the theory from multiple independent sources (consilience).
The strength of a scientific theory is related to the diversity of phenomena it can explain, which is measured by its ability to make falsifiable predictions with respect to those phenomena. Theories are improved (or replaced by better theories) as more evidence is gathered, so that accuracy in prediction improves over time; this increased accuracy corresponds to an increase in scientific knowledge. Scientists use theories as a foundation to gain further scientific knowledge, as well as to accomplish goals such as inventing technology or curing diseases.
Definitions from scientific organizations
The United States National Academy of Sciences defines scientific theories as follows: The formal scientific definition of "theory" is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics) ... One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory." It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
The term theory is not appropriate for describing scientific models or untested, but intricate hypotheses.
Philosophical views
The logical positivists thought of scientific theories as deductive theories—that a theory's content is based on some formal system of logic and on basic axioms. In a deductive theory, any sentence which is a logical consequence of one or more of the axioms is also a sentence of that theory. This is called the received view of theories.
In the semantic view of theories, which has largely replaced the received view, theories are viewed as scientific models. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country. In this approach, theories are a specific category of models that fulfill the necessary criteria. (See Theories as models for further discussion.)
In physics
In physics, the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries, like equality of locations in space or in time, or identity of electrons, etc.)—which is capable of producing experimental predictions for a given category of physical systems. One good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in the form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered adequately tested, with new ones always in the making and perhaps untested.
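For illustration, the compact form referred to above is Maxwell's equations, which in differential form and SI units read:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} && \text{(Gauss's law)}\\
\nabla \cdot \mathbf{B} &= 0 && \text{(no magnetic monopoles)}\\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} && \text{(Faraday's law)}\\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} && \text{(Ampère–Maxwell law)}
\end{aligned}
```

Every prediction of classical electromagnetism, from radio propagation to optics, can in principle be derived from these four equations together with the Lorentz force law.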
Regarding the term "theoretical"
Certain tests may be infeasible or technically difficult. As a result, theories may make predictions that have not been confirmed or proven incorrect. These predictions may be described informally as "theoretical". They can be tested later, and if they are incorrect, this may lead to revision, invalidation, or rejection of the theory.
Mathematical
In mathematics, the term theory is used differently than its use in science ─ necessarily so, since mathematics contains no explanations of natural phenomena per se, even though it may help provide insight into natural systems or be inspired by them. In the general sense, a mathematical theory is a branch of mathematics devoted to some specific topics or methods, such as set theory, number theory, group theory, probability theory, game theory, control theory, perturbation theory, etc., such as might be appropriate for a single textbook.
In mathematical logic, a theory has a related but different sense: it is the collection of theorems that can be deduced from a given set of axioms under a given set of inference rules.
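As a toy illustration (a hypothetical sketch, not drawn from any standard proof system), a propositional "theory" in this sense can be computed as the closure of a set of axioms under a single inference rule, modus ponens, with an implication P → Q encoded as the tuple ('->', P, Q):

```python
def theory(axioms):
    """Return the set of formulas derivable from `axioms` by repeatedly
    applying modus ponens: from P and ('->', P, Q), infer Q."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for f in list(theorems):
            # An implication is encoded as ('->', antecedent, consequent).
            if isinstance(f, tuple) and f[0] == '->' and f[1] in theorems:
                if f[2] not in theorems:
                    theorems.add(f[2])
                    changed = True
    return theorems

# Axioms: p, p -> q, q -> r.  The resulting theory also contains q and r.
axioms = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
print(theory(axioms))
```

Real proof systems have richer formula languages and more inference rules, but the structural point is the same: the theory is everything reachable from the axioms by the rules.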
Philosophical
A theory can be either descriptive as in science, or prescriptive (normative) as in philosophy. The latter are those whose subject matter consists not of empirical data, but rather of ideas. At least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation.
A field of study is sometimes named a "theory" because its basis is some initial set of assumptions describing the field's approach to the subject. These assumptions are the elementary theorems of the particular theory, and can be thought of as the axioms of that field. Some commonly known examples include set theory and number theory; however, literary theory, critical theory, and music theory are also of the same form.
Metatheory
One form of philosophical theory is a metatheory or meta-theory. A metatheory is a theory whose subject matter is some other theory or set of theories. In other words, it is a theory about theories. Statements made in the metatheory about the theory are called metatheorems.
Political
A political theory is an ethical theory about law and government. The term "political theory" often refers to a general view of, or a specific ethic, belief, or attitude about, politics.
Jurisprudential
In social science, jurisprudence is the philosophical theory of law. Contemporary philosophy of law addresses problems internal to law and legal systems, and problems of law as a particular social institution.
Examples
Most of the following are scientific theories. Some are not, but rather encompass a body of knowledge or art, such as Music theory and Visual Arts Theories.
Anthropology:
Carneiro's circumscription theory
Astronomy:
Alpher–Bethe–Gamow theory —
B2FH Theory —
Copernican theory —
Newton's theory of gravitation —
Hubble's law —
Kepler's laws of planetary motion —
Ptolemaic theory
Biology:
Cell theory —
Chemiosmotic theory —
Evolution —
Germ theory —
Symbiogenesis
Chemistry:
Molecular theory —
Kinetic theory of gases —
Molecular orbital theory —
Valence bond theory —
Transition state theory —
RRKM theory —
Chemical graph theory —
Flory–Huggins solution theory —
Marcus theory —
Lewis theory (successor to Brønsted–Lowry acid–base theory) —
HSAB theory —
Debye–Hückel theory —
Thermodynamic theory of polymer elasticity —
Reptation theory —
Polymer field theory —
Møller–Plesset perturbation theory —
density functional theory —
Frontier molecular orbital theory —
Polyhedral skeletal electron pair theory —
Baeyer strain theory —
Quantum theory of atoms in molecules —
Collision theory —
Ligand field theory (successor to Crystal field theory) —
Variational transition-state theory —
Benson group increment theory —
Specific ion interaction theory
Climatology:
Climate change theory (general study of climate changes)
anthropogenic climate change (ACC)/
anthropogenic global warming (AGW) theories (due to human activity)
Computer Science:
Automata theory —
Queueing theory
Cosmology:
Big Bang Theory —
Cosmic inflation —
Loop quantum gravity —
Superstring theory —
Supergravity —
Supersymmetric theory —
Multiverse theory —
Holographic principle —
Quantum gravity —
M-theory
Economics:
Macroeconomic theory —
Microeconomic theory —
Law of Supply and demand
Education:
Constructivist theory —
Critical pedagogy theory —
Education theory —
Multiple intelligence theory —
Progressive education theory
Engineering:
Circuit theory —
Control theory —
Signal theory —
Systems theory —
Information theory
Film:
Film theory
Geology:
Plate tectonics
Humanities:
Critical theory
Jurisprudence or 'Legal theory':
Natural law —
Legal positivism —
Legal realism —
Critical legal studies
Law: see Jurisprudence; also Case theory
Linguistics:
X-bar theory —
Government and Binding —
Principles and parameters —
Universal grammar
Literature:
Literary theory
Mathematics:
Approximation theory —
Arakelov theory —
Asymptotic theory —
Bifurcation theory —
Catastrophe theory —
Category theory —
Chaos theory —
Choquet theory —
Coding theory —
Combinatorial game theory —
Computability theory —
Computational complexity theory —
Deformation theory —
Dimension theory —
Ergodic theory —
Field theory —
Galois theory —
Game theory —
Gauge theory —
Graph theory —
Group theory —
Hodge theory —
Homology theory —
Homotopy theory —
Ideal theory —
Intersection theory —
Invariant theory —
Iwasawa theory —
K-theory —
KK-theory —
Knot theory —
L-theory —
Lie theory —
Littlewood–Paley theory —
Matrix theory —
Measure theory —
Model theory —
Module theory —
Morse theory —
Nevanlinna theory —
Number theory —
Obstruction theory —
Operator theory —
Order theory —
PCF theory —
Perturbation theory —
Potential theory —
Probability theory —
Ramsey theory —
Rational choice theory —
Representation theory —
Ring theory —
Set theory —
Shape theory —
Small cancellation theory —
Spectral theory —
Stability theory —
Stable theory —
Sturm–Liouville theory —
Surgery theory —
Twistor theory —
Yang–Mills theory
Music:
Music theory
Philosophy:
Proof theory —
Speculative reason —
Theory of truth —
Type theory —
Value theory —
Virtue theory
Physics:
Acoustic theory —
Antenna theory —
Atomic theory —
BCS theory —
Conformal field theory —
Dirac hole theory —
Dynamo theory —
Landau theory —
M-theory —
Perturbation theory —
Theory of relativity (successor to classical mechanics) —
Gauge theory —
Quantum field theory —
Scattering theory —
String theory —
Quantum information theory
Psychology:
Theory of mind —
Cognitive dissonance theory —
Attachment theory —
Object permanence —
Poverty of stimulus —
Attribution theory —
Self-fulfilling prophecy —
Stockholm syndrome
Public Budgeting:
Incrementalism —
Zero-based budgeting
Public Administration:
Organizational theory
Semiotics:
Intertheoricity –
Transferogenesis
Sociology:
Critical theory —
Engaged theory —
Social theory —
Sociological theory –
Social capital theory
Statistics:
Extreme value theory
Theatre:
Performance theory
Visual Arts:
Aesthetics —
Art educational theory —
Architecture —
Composition —
Anatomy —
Color theory —
Perspective —
Visual perception —
Geometry —
Manifolds
Other:
Obsolete scientific theories
See also
Falsifiability
Hypothesis testing
Physical law
Predictive power
Testability
Theoretical definition
Notes
References
Citations
Sources
Davidson Reynolds, Paul (1971). A primer in theory construction. Boston: Allyn and Bacon.
Guillaume, Astrid (2015). "Intertheoricity: Plasticity, Elasticity and Hybridity of Theories. Part II: Semiotics of Transferogenesis", in Human and Social Studies, Vol. 4, No. 2 (2015), ed. Walter de Gruyter, Boston, Berlin, pp. 59–77.
Guillaume, Astrid (2015). "The Intertheoricity: Plasticity, Elasticity and Hybridity of Theories", in Human and Social Studies, Vol. 4, No. 1 (2015), ed. Walter de Gruyter, Boston, Berlin, pp. 13–29.
Hawking, Stephen (1996). A Brief History of Time (Updated and expanded ed.). New York: Bantam Books, p. 15.
Popper, Karl (1963), Conjectures and Refutations, Routledge and Kegan Paul, London, UK, pp. 33–39. Reprinted in Theodore Schick (ed., 2000), Readings in the Philosophy of Science, Mayfield Publishing Company, Mountain View, California, USA, pp. 9–13.
Zima, Peter V. (2007). "What is theory? Cultural theory as discourse and dialogue". London: Continuum (translated from: Was ist Theorie? Theoriebegriff und Dialogische Theorie in der Kultur- und Sozialwissenschaften. Tübingen: A. Franke Verlag, 2004).
Further reading
Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50(1), 25–32.
External links
"How science works: Even theories change", Understanding Science by the University of California Museum of Paleontology.
What is a Theory?
Abstraction
Systems
Inductive reasoning
Ontology | 0.778408 | 0.997831 | 0.77672 |
Basic research | Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to innovations, basic research also provides insight into the natural world and fosters respect for its innate value, a respect that drives conservation efforts. Through learning about the environment, conservation efforts can be strengthened using research as a basis. Technological innovations can also arise unintentionally from such research, as when the shape of the kingfisher's beak informed the design of high-speed bullet trains in Japan.
Overview
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
History
By country
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
Basic versus applied science
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in the natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which wrote: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." The NSF also conducted a study tracing the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. The study found that basic research played a key role in the development of all of the innovations, and that the amount of basic research contributing to a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
See also
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
References
Further reading
Research | 0.780057 | 0.995632 | 0.77665 |
Subjectivity and objectivity (philosophy) | The distinction between subjectivity and objectivity is a basic idea of philosophy, particularly epistemology and metaphysics. The understanding of this distinction has evolved through the work of countless philosophers over the centuries. There are many different definitions that have been employed to compare and contrast subjectivity and objectivity. A general distinction can be extracted from these discussions:
Something is subjective if it is dependent on a mind (biases, perception, emotions, opinions, imagination, or conscious experience). If a claim is true exclusively when considering the claim from the viewpoint of a sentient being, it is subjectively true. For example, one person may consider the weather to be pleasantly warm, and another person may consider the same weather to be too hot; both views are subjective.
Something is objective if it can be confirmed independently of a mind. If a claim is true even when considering it outside the viewpoint of a sentient being, then it is labelled objectively true.
Both ideas have been given various and ambiguous definitions by differing sources as the distinction is often a given but not the specific focal point of philosophical discourse. The two words are usually regarded as opposites, though complications regarding the two have been explored in philosophy: for example, the view of particular thinkers that objectivity is an illusion and does not exist at all, or that a spectrum joins subjectivity and objectivity with a gray area in-between, or that the problem of other minds is best viewed through the concept of intersubjectivity, developing since the 20th century.
The distinction between subjectivity and objectivity is often related to discussions of consciousness, agency, personhood, philosophy of mind, philosophy of language, reality, truth, and communication (for example in narrative communication and journalism).
Etymology
The root of the words subjectivity and objectivity are subject and object, philosophical terms that mean, respectively, an observer and a thing being observed. The word subjectivity comes from subject in a philosophical sense, meaning an individual who possesses unique conscious experiences, such as perspectives, feelings, beliefs, and desires, or who (consciously) acts upon or wields power over some other entity (an object).
In Ancient philosophy
Aristotle's teacher Plato considered geometry to be a condition of his idealist philosophy concerned with universal truth. In Plato's Republic, Socrates opposes the sophist Thrasymachus's relativistic account of justice, and argues that justice is mathematical in its conceptual structure, and that ethics was therefore a precise and objective enterprise with impartial standards for truth and correctness, like geometry. The rigorous mathematical treatment Plato gave to moral concepts set the tone for the western tradition of moral objectivism that came after him. His contrasting between objectivity and opinion became the basis for philosophies intent on resolving the questions of reality, truth, and existence. He saw opinions as belonging to the shifting sphere of sensibilities, as opposed to a fixed, eternal and knowable incorporeality. Where Plato distinguished between how we know things and their ontological status, subjectivism such as George Berkeley's depends on perception. In Platonic terms, a criticism of subjectivism is that it is difficult to distinguish between knowledge, opinions, and subjective knowledge.
Platonic idealism is a form of metaphysical objectivism, holding that the ideas exist independently from the individual. Berkeley's empirical idealism, on the other hand, holds that things only exist as they are perceived. Both approaches boast an attempt at objectivity. Plato's definition of objectivity can be found in his epistemology, which is based on mathematics, and his metaphysics, where knowledge of the ontological status of objects and ideas is resistant to change.
In Western philosophy
In Western philosophy, the idea of subjectivity is thought to have its roots in the works of the European Enlightenment thinkers Descartes and Kant, though it could also stem as far back as the Ancient Greek philosopher Aristotle's work relating to the soul. The idea of subjectivity is often seen as peripheral to other philosophical concepts, namely skepticism, individuals and individuality, and existentialism. The questions surrounding subjectivity have to do with whether or not people can escape the subjectivity of their own human existence and whether or not there is an obligation to try to do so.
Important thinkers who focused on this area of study include Descartes, Locke, Kant, Hegel, Kierkegaard, Husserl, Foucault, Derrida, Nagel, and Sartre.
Subjectivity was rejected by Foucault and Derrida in favor of constructionism, but Sartre embraced and continued Descartes' work in the subject by emphasizing subjectivity in phenomenology. Sartre believed that, even within the material force of human society, the ego was an essentially transcendent being—posited, for instance, in his opus Being and Nothingness through his arguments about the 'being-for-others' and the 'for-itself' (i.e., an objective and subjective human being).
The innermost core of subjectivity resides in a unique act of what Fichte called "self-positing", where each subject is a point of absolute autonomy, which means that it cannot be reduced to a moment in the network of causes and effects.
Religion
One way that subjectivity has been conceptualized by philosophers such as Kierkegaard is in the context of religion. Religious beliefs can vary quite extremely from person to person, but people often think that whatever they believe is the truth. Subjectivity as seen by Descartes and Sartre was a matter of what was dependent on consciousness, so, because religious beliefs require the presence of a consciousness that can believe, they must be subjective. This is in contrast to what has been proven by pure logic or hard sciences, which does not depend on the perception of people, and is therefore considered objective. Subjectivity is what relies on personal perception regardless of what is proven or objective.
Many philosophical arguments within this area of study have to do with moving from subjective thoughts to objective thoughts with many different methods employed to get from one to the other along with a variety of conclusions reached. This is exemplified by Descartes deductions that move from reliance on subjectivity to somewhat of a reliance on God for objectivity. Foucault and Derrida denied the idea of subjectivity in favor of their ideas of constructs in order to account for differences in human thought. Instead of focusing on the idea of consciousness and self-consciousness shaping the way humans perceive the world, these thinkers would argue that it is instead the world that shapes humans, so they would see religion less as a belief and more as a cultural construction.
Phenomenology
Others like Husserl and Sartre followed the phenomenological approach. This approach focused on the distinct separation of the human mind and the physical world, where the mind is subjective because it can take liberties like imagination and self-awareness where religion might be examined regardless of any kind of subjectivity. The philosophical conversation around subjectivity remains one that struggles with the epistemological question of what is real, what is made up, and what it would mean to be separated completely from subjectivity.
In epistemology
In opposition to philosopher René Descartes' method of personal deduction, natural philosopher Isaac Newton applied the relatively objective scientific method to look for evidence before forming a hypothesis. Partially in response to Kant's rationalism, logician Gottlob Frege applied objectivity to his epistemological and metaphysical philosophies. If reality exists independently of consciousness, then it would logically include a plurality of indescribable forms. Objectivity requires a definition of truth formed by propositions with truth value. An attempt of forming an objective construct incorporates ontological commitments to the reality of objects.
The importance of perception in evaluating and understanding objective reality is debated in the observer effect of quantum mechanics. Direct or naïve realists rely on perception as key in observing objective reality, while instrumentalists hold that observations are useful in predicting objective reality. The concepts that encompass these ideas are important in the philosophy of science. Philosophies of mind explore whether objectivity relies on perceptual constancy.
In historiography
History as a discipline has wrestled with notions of objectivity from its very beginning. While its object of study is commonly thought to be the past, the only thing historians have to work with are different versions of stories based on individual perceptions of reality and memory.
Several history streams developed to devise ways to solve this dilemma: Historians like Leopold von Ranke (19th century) have advocated for the use of extensive evidence – especially archived physical paper documents – to recover the bygone past, claiming that, as opposed to people's memories, objects remain stable in what they say about the era they witnessed, and therefore represent a better insight into objective reality. In the 20th century, the Annales School emphasized the importance of shifting focus away from the perspectives of influential men – usually politicians around whose actions narratives of the past were shaped – and putting it on the voices of ordinary people. Postcolonial streams of history challenge the colonial-postcolonial dichotomy and critique Eurocentric academia practices, such as the demand for historians from colonized regions to anchor their local narratives to events happening in the territories of their colonizers to earn credibility.
All the streams explained above try to uncover whose voice is more or less truth-bearing and how historians can stitch together versions of it to best explain what "actually happened."
Trouillot
The anthropologist Michel-Rolph Trouillot developed the concepts of historicity 1 and 2 to explain the difference between the materiality of socio-historical processes (H1) and the narratives that are told about the materiality of socio-historical processes (H2). This distinction hints that H1 would be understood as the factual reality that elapses and is captured with the concept of "objective truth", and that H2 is the collection of subjectivities that humanity has stitched together to grasp the past. Debates about positivism, relativism, and postmodernism are relevant to evaluating these concepts' importance and the distinction between them.
In his book "Silencing the past", Trouillot wrote about the power dynamics at play in history-making, outlining four possible moments in which historical silences can be created: (1) making of sources (who gets to know how to write, or to have possessions that are later examined as historical evidence), (2) making of archives (what documents are deemed important to save and which are not, how to classify materials, and how to order them within physical or digital archives), (3) making of narratives (which accounts of history are consulted, which voices are given credibility), and (4) the making of history (the retrospective construction of what The Past is).
Because history (official, public, familial, personal) informs current perceptions and how we make sense of the present, whose voice gets to be included in it – and how – has direct consequences in material socio-historical processes. Thinking of current historical narratives as impartial depictions of the totality of events unfolded in the past by labeling them as "objective" risks sealing historical understanding. Acknowledging that history is never objective and always incomplete offers a meaningful opportunity to support social justice efforts. Under said notion, voices that have been silenced are placed on an equal footing with the grand and popular narratives of the world, appreciated for their unique insight into reality through their subjective lens.
In social sciences
Subjectivity is an inherently social mode that comes about through innumerable interactions within society. As much as subjectivity is a process of individuation, it is equally a process of socialization, the individual never being isolated in a self-contained environment, but endlessly engaging in interaction with the surrounding world.
Culture is a living totality of the subjectivity of any given society constantly undergoing transformation. Subjectivity is both shaped by it and shapes it in turn, but also by other things like the economy, political institutions, communities, as well as the natural world.
Though the boundaries of societies and their cultures are indefinable and arbitrary, the subjectivity inherent in each one is palpable and can be recognized as distinct from others. Subjectivity is in part a particular experience or organization of reality, which includes how one views and interacts with humanity, objects, consciousness, and nature, so the difference between different cultures brings about an alternate experience of existence that forms life in a different manner. A common effect on an individual of this disjunction between subjectivities is culture shock, where the subjectivity of the other culture is considered alien and possibly incomprehensible or even hostile.
Political subjectivity is an emerging concept in social sciences and humanities. Political subjectivity is a reference to the deep embeddedness of subjectivity in the socially intertwined systems of power and meaning. "Politicality", writes Sadeq Rahimi in Meaning, Madness and Political Subjectivity, "is not an added aspect of the subject, but indeed the mode of being of the subject, that is, precisely what the subject is."
Scientific objectivity is practicing science while intentionally reducing partiality, biases, or external influences. Moral objectivity is the concept of moral or ethical codes being compared to one another through a set of universal facts or a universal perspective and not through differing conflicting perspectives.
Journalistic objectivity is the reporting of facts and news with minimal personal bias or in an impartial or politically neutral manner.
See also
Dogma
Factual relativism
Intersubjectivity
Journalistic objectivity
Naïve realism
Objectivity (science)
Objectivism
Omniscience
Phenomenology (philosophy)
Phenomenology (psychology)
Political subjectivity
Q methodology
Relativism
Subject (philosophy)
Transcendental subjectivity
"Subjectivity is Truth", an existential interpretation of subjectivity by Søren Kierkegaard
Self
Vertiginous question
References
Further reading
Bachelard, Gaston. La formation de l'esprit scientifique: contribution à une psychanalyse de la connaissance. Paris: Vrin, 2004.
Beiser, Frederick C. (2002). German Idealism: The Struggle Against Subjectivism, 1781–1801. Harvard University Press.
Block, Ned; Flanagan, Owen J.; & Güzeldere, Güven (Eds.) The Nature of Consciousness: Philosophical Debates. Cambridge, MA: MIT Press.
Bowie, Andrew (1990). Aesthetics and Subjectivity : From Kant to Nietzsche. Manchester: Manchester University Press.
Castillejo, David. The Formation of Modern Objectivity. Madrid: Ediciones de Arte y Bibliofilia, 1982.
Dallmayr, Winfried Reinhard (1981). Twilight of Subjectivity: Contributions to a Post-Individualist Theory Politics. Amherst, MA: University of Massachusetts Press.
Ellis, C. & Flaherty, M. (1992). Investigating Subjectivity: Research on Lived Experience. Newbury Park, CA: Sage.
Farrell, Frank B. (1994). Subjectivity, Realism, and Postmodernism: The Recovery of the World in Recent Philosophy. Cambridge – New York: Cambridge University Press.
Gaukroger, Stephen. (2012). Objectivity. Oxford University Press.
Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1996, 3rd ed.
Lauer, Quentin (1958). The Triumph of Subjectivity: An Introduction to Transcendental Phenomenology. Fordham University Press.
Megill, Allan. Rethinking Objectivity. London: Duke UP, 1994.
Nagel, Ernest. The Structure of Science. New York: Brace and World, 1961.
Nagel, Thomas. The View from Nowhere. Oxford: Oxford UP, 1986
Nozick, Robert. Invariances: the structure of the objective world. Cambridge: Harvard UP, 2001.
Popper, Karl R. Objective Knowledge: An Evolutionary Approach. Oxford University Press, 1972.
Rescher, Nicholas. Objectivity: the obligations of impersonal reason. Notre Dame: Notre Dame Press, 1977.
Rorty, Richard. Objectivity, Relativism, and Truth. Cambridge: Cambridge University Press, 1991
Rousset, Bernard. La théorie kantienne de l'objectivité, Paris: Vrin, 1967.
Scheffler, Israel. Science and Subjectivity. Hackett, 1982.
Kessler, Gary E. Voices of Wisdom: A Multicultural Philosophy Reader.
External links
Subjectivity and Objectivity—by Pete Mandik
Concepts in epistemology
Concepts in metaphilosophy
Concepts in political philosophy
Concepts in the philosophy of mind
Concepts in the philosophy of science
Metaphysical properties
Metaphysics of mind
Ontology
Philosophy of psychology
Subjective experience
Scientism
Scientism is the belief that science and the scientific method are the best or only way to render truth about the world and reality.
While the term was defined originally to mean "methods and attitudes typical of or attributed to natural scientists", some scholars, as well as political and religious leaders, have also adopted it as a pejorative term with the meaning "an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation (as in philosophy, the social sciences, and the humanities)".
Overview
Francis Bacon has been viewed by some scholars as an early proponent of scientism, but this is a modern assertion as Bacon was a devout Anglican, writing in his Essays, "a little philosophy inclineth man's mind to atheism, but depth in philosophy bringeth men's minds about to religion."
With respect to the philosophy of science, the term scientism frequently implies a critique of the more extreme expressions of logical positivism and has been used by social scientists such as Friedrich Hayek, philosophers of science such as Karl Popper, and philosophers such as Mary Midgley, the later Hilary Putnam, and Tzvetan Todorov to describe (for example) the dogmatic endorsement of scientific methods and the reduction of all knowledge to only that which is measured or confirmatory.
More generally, scientism is often interpreted as science applied "in excess". This use of the term scientism has two senses:
The improper use of science or scientific claims. This usage applies equally in contexts where science might not apply, such as when the topic is perceived as beyond the scope of scientific inquiry, and in contexts where there is insufficient empirical evidence to justify a scientific conclusion. It includes an excessive deference to the claims of scientists or an uncritical eagerness to accept any result described as scientific. This can be a counterargument to appeals to scientific authority. It can also address attempts to apply natural science methods and claims of certainty to the social sciences, which Friedrich Hayek described in The Counter-Revolution of Science (1952) as being impossible, because those methods attempt to eliminate the "human factor", while social sciences (including his own topic of economics) mainly concern the study of human action.
"The belief that the methods of natural science, or the categories and things recognized in natural science, form the only proper elements in any philosophical or other inquiry", or that "science, and only science, describes the world as it is in itself, independent of perspective" with a concomitant "elimination of the psychological [and spiritual] dimensions of experience". Tom Sorell provides this definition: "Scientism is a matter of putting too high a value on natural science in comparison with other branches of learning or culture." Philosophers such as Alexander Rosenberg have also adopted "scientism" as a name for the opinion that science is the only reliable source of knowledge.
It is also sometimes used to describe the universal applicability of the scientific method, and the opinion that empirical science constitutes the most authoritative worldview or the most valuable part of human learning, sometimes to the complete exclusion of other opinions, such as historical, philosophical, economic or cultural opinions. It has been defined as "the view that the characteristic inductive methods of the natural sciences are the only source of genuine factual knowledge and, in particular, that they alone can yield true knowledge about man and society". The term scientism is also used by historians, philosophers, and cultural critics to highlight the possible dangers of lapses towards excessive reductionism with respect to all topics of human knowledge.
For social theorists practising the tradition of Max Weber, such as Jürgen Habermas and Max Horkheimer, the concept of scientism relates significantly to the philosophy of positivism, but also to the cultural rationalization for modern Western civilization. Ernesto Sabato, physicist and essayist, wrote in his 1951 essay ("Man and mechanism") of the "superstition of science" as the most contradictory of all superstitions, since this would be the "superstition that one should not be superstitious". He wrote: "science had become a new magic and the man in the street believed in it the more the less he understood it".
Definitions
Reviewing the references to scientism in the works of contemporary scholars in 2003, Gregory R. Peterson detected two main general themes:
It is used to criticize a totalizing opinion of science as if it were capable of describing all reality and knowledge, or as if it were the only true method to acquire knowledge about reality and the nature of things;
It is used, often pejoratively, to denote violations by which the theories and methods of one (scientific) discipline are applied inappropriately to another (scientific or non-scientific) discipline and its domain. An example of this second usage is to term as scientism any attempt to claim science as the only or primary source of human values (a traditional domain of ethics) or as the source of meaning and purpose (a traditional domain of religion and related worldviews).
The term scientism was popularized by F. A. Hayek, who defined it in 1942 as the "slavish imitation of the method and language of Science".
Mathematician Alexander Grothendieck, in his 1971 essay "The New Universal Church", characterized scientism as a religion-like ideology that advocates scientific reductionism, scientific authoritarianism, political technocracy and technological salvation, while denying the epistemological validity of feelings and experiences such as love, emotion, beauty and fulfillment. He predicted that "in coming years, the chief political dividing line will fall less and less among the traditional division between 'right' and 'left', but increasingly between the adherents of scientism, who advocate 'technological progress at any price', and their opponents, i.e., roughly speaking, those who regard the enhancement of life, in all its richness and variety, as being the supreme value".
E. F. Schumacher, in his A Guide for the Perplexed (1977), criticized scientism as an impoverished world view confined solely to what can be counted, measured and weighed. "The architects of the modern worldview, notably Galileo and Descartes, assumed that those things that could be weighed, measured, and counted were more true than those that could not be quantified. If it couldn't be counted, in other words, it didn't count."
In 1979, Karl Popper defined scientism as "the aping of what is widely mistaken for the method of science".
In 2003, Mikael Stenmark proposed the expression scientific expansionism as a synonym of scientism. In the Encyclopedia of Science and Religion, he wrote that, while the doctrines that are described as scientism have many possible forms and varying degrees of ambition, they share the idea that the boundaries of science (that is, typically the natural sciences) could and should be expanded so that something that has not been previously considered as a subject pertinent to science can now be understood as part of science (usually with science becoming the sole or the main arbiter regarding this area or dimension). According to Stenmark, the strongest form of scientism states that science does not have any boundaries and that all human problems and all aspects of human endeavor, with due time, will be dealt with and solved by science alone. This idea has also been termed the myth of progress.
Intellectual historian T. J. Jackson Lears argued in 2013 that there has been a recent reemergence of "nineteenth-century positivist faith that a reified 'science' has discovered (or is about to discover) all the important truths about human life. Precise measurement and rigorous calculation, in this view, are the basis for finally settling enduring metaphysical and moral controversies." Lears specifically identified Harvard psychologist Steven Pinker's work as falling in this category. Philosophers John N. Gray and Thomas Nagel have made similar criticisms against popular works by moral psychologist Jonathan Haidt, atheist author Sam Harris, and writer Malcolm Gladwell.
Strong and weak scientism
There are various ways of classifying kinds of scientism. Some authors distinguish between strong and weak scientism, as follows:
: "of all the knowledge we have, scientific knowledge is the only 'real knowledge'" (Moti Mizrahi), or, "the view that some proposition or theory is true and/or rational to believe if and only if it is a scientific proposition or theory" (J. P. Moreland), or, "only science yields epistemically credible data" (Michael W. Austin)
: "of all the knowledge we have, scientific knowledge is the best knowledge" (Moti Mizrahi), or, "science is the most valuable, most serious, and most authoritative sector of human learning" (J. P. Moreland), or, "scientific knowledge claims are the most credible knowledge claims" (Michael W. Austin)
Relevance to debates about science and religion
Both religious and non-religious scholars have applied the term scientism to individuals associated with New Atheism. Theologian John Haught argued that philosopher Daniel Dennett and other New Atheists subscribe to a belief system of scientific naturalism, which includes the dogma that "only nature, including humans and our creations, is real: that God does not exist; and that science alone can give us complete and reliable knowledge of reality." Haught argued that this belief system is self-refuting since it requires its adherents to assent to beliefs that violate its own stated requirements for knowledge. Christian philosopher Peter Williams argued in 2013 that it is only by conflating science with scientism that New Atheists feel qualified to "pontificate on metaphysical issues". Daniel Dennett responded to religious criticism of his 2006 book Breaking the Spell: Religion as a Natural Phenomenon by saying that accusations of scientism "[are] an all-purpose, wild-card smear ... When someone puts forward a scientific theory that [religious critics] really don't like, they just try to discredit it as 'scientism'. But when it comes to facts, and explanations of facts, science is the only game in town".
Non-religious scholars have also associated New Atheist thought with scientism and/or with positivism. Atheist philosopher Thomas Nagel argued that philosopher Sam Harris conflated all empirical knowledge with scientific knowledge. Marxist literary critic Terry Eagleton argued that Christopher Hitchens possessed an "old-fashioned scientistic notion of what counts as evidence" that reduces knowledge to what can and cannot be proven by scientific procedure. Agnostic philosopher Anthony Kenny has also criticized New Atheist philosopher Alexander Rosenberg's The Atheist's Guide to Reality for resurrecting a self-refuting epistemology of logical positivism and reducing all knowledge of the universe to the discipline of physics.
Michael Shermer, founder of The Skeptics Society, discussed resemblances between scientism and traditional religions, indicating the cult of personality that develops for some scientists. He defined scientism as a worldview that encompasses natural explanations, eschews supernatural and paranormal speculations, and embraces empiricism and reason.
The Iranian scholar Seyyed Hossein Nasr has stated that in the Western world, many will accept the ideology of modern science, not as "simple ordinary science", but as a replacement for religion.
Gregory R. Peterson wrote that "for many theologians and philosophers, scientism is among the greatest of intellectual sins". Genetic biologist Austin L. Hughes wrote in the conservative journal The New Atlantis that scientism has much in common with superstition: "the stubborn insistence that something ... has powers which no evidence supports."
Repeating common criticisms of logical positivism and verificationism, philosopher of religion Keith Ward has said that scientism is philosophically inconsistent or even self-refuting, as the truth of the two statements "no statements are true unless they can be proven scientifically (or logically)" and "no statements are true unless they can be shown empirically to be true" cannot themselves be proven scientifically, logically, or empirically.
Philosophy of science
Anti-scientism
Philosopher Paul Feyerabend, who was an enthusiastic proponent of scientism during his youth, later came to characterize science as "an essentially anarchic enterprise" and argued emphatically that science merits no exclusive monopoly of "dealing in knowledge" and that scientists have never operated within a distinct and narrowly self-defined tradition. In his essay Against Method he depicted the process of contemporary scientific education as a mild form of indoctrination, intended for "making the history of science duller, simpler, more uniform, more 'objective' and more easily accessible to treatment by strict and unchanging rules".
Pro-scientism
Physicist and philosopher Mario Bunge used the term scientism with a favorable rather than pejorative sense in numerous books published during several decades, and in articles with titles such as "In defense of realism and scientism" and "In defense of scientism". Bunge said that scientism should not be equated with inappropriate reductionism, and he dismissed critics of science such as Hayek and Habermas as dogmatists and obscurantists:
In 2018, philosophers Maarten Boudry and Massimo Pigliucci co-edited a book titled Science Unlimited? The Challenges of Scientism in which a number of chapters by philosophers and scientists defended scientism. In his chapter "Two Cheers for Scientism", Taner Edis wrote:
Rhetoric of science
Thomas M. Lessl argued that religious themes persist in what he terms scientism, the public rhetoric of science. There are two methods of describing this idea of scientism: the epistemological method (the assumption that the scientific method trumps other ways of knowing) and the ontological method (that the rational mind represents the world and both operate in knowable ways). According to Lessl, the ontological method is an attempt to "resolve the conflict between rationalism and skepticism". Lessl also argued that without scientism, there would not be a scientific culture.
Rationalization and modernity
In the introduction to his collected works on the sociology of religion, Max Weber asked why "the scientific, the artistic, the political, or the economic development [elsewhere] ... did not enter upon that path of rationalization which is peculiar to the Occident?" According to the German social theorist Jürgen Habermas, "For Weber, the intrinsic (that is, not merely contingent) relationship between modernity and what he called 'Occidental rationalism' was still self-evident." Weber described a process of rationalisation, disenchantment and the "disintegration of religious world views" that resulted in modern secular societies and capitalism.
Habermas is critical of pure instrumental rationality, arguing that the "Social Life–World" of subjective experiencing is better suited to literary expression, whereas the sciences deal with "intersubjectively accessible experiences" that can be generalized in a formal language, while the literary arts "must generate an intersubjectivity of mutual understanding in each concrete case". Habermas quoted writer Aldous Huxley in support of this duality of literature and science:
See also
Anti-technology
Antireductionism
Cargo cult science
Conflict thesis
Consequentialism
Déformation professionnelle
Demarcation problem
Eliminative materialism
Francis Bacon
Greedy reductionism
High modernism
Materialism
Non-overlapping magisteria
Pseudoskepticism
Radical empiricism
Relativism
Science and the Catholic Church
Science of morality
Science wars
Scientific management
Scientific mythology
Scientific realism
Scientific reductionism
Scientific imperialism
Scientific skepticism
Scientistic materialism
Sokal affair
Technological dystopia
New Frontier
Post-scarcity economy
Technocentrism
Technological utopianism
Techno-progressivism
Progress
Worldview
References
Bibliography
External links
19th century in philosophy
20th century in philosophy
21st century in philosophy
Empiricism
Scientific method
Naturalism (philosophy)
Metatheory of science
Political theories
Lifestyle
Religion and science
Postmodernism
Criticism of science
Political pejoratives
New Atheism
Enculturation
Enculturation is the process by which people learn the dynamics of their surrounding culture and acquire values and norms appropriate or necessary to that culture and its worldviews.
Definition and history of research
The term enculturation was used first by sociologist of science Harry Collins to describe one of the models whereby scientific knowledge is communicated among scientists, and is contrasted with the 'algorithmic' mode of communication.
The ingredients discussed by Collins for enculturation are:
Learning by Immersion: whereby aspiring scientists learn by engaging in the daily activities of the laboratory, interacting with other scientists, and participating in experiments and discussions.
Tacit Knowledge: highlighting the importance of tacit knowledge—knowledge that is not easily codified or written down but is acquired through experience and practice.
Socialization: where individuals learn the social norms, values, and behaviours expected within the scientific community.
Language and Discourse: Scientists must become fluent in the terminology, theoretical frameworks, and modes of argumentation specific to their discipline.
Community Membership: recognition of the individual as a legitimate member of the scientific community.
The problem tackled in the article of Harry Collins was the early experiments for the detection of gravitational waves.
Enculturation is mostly studied in sociology and anthropology. The influences that limit, direct, or shape the individual (whether deliberately or not) include parents, other adults, and peers. If successful, enculturation results in competence in the language, values, and rituals of the culture. Growing up, everyone goes through their own version of enculturation. Enculturation helps form an individual into an acceptable member of their society. Culture shapes everything an individual does, whether or not they are aware of it. Enculturation is a deep-rooted process that binds individuals together. Even as a culture undergoes change, core elements such as central beliefs, values, perspectives, and child-rearing practices remain similar. Enculturation also paves the way for the tolerance needed for peaceful coexistence.
The process of enculturation, most commonly discussed in the field of anthropology, is closely related to socialization, a concept central to the field of sociology. Both roughly describe the adaptation of an individual into social groups by absorbing the ideas, beliefs and practices surrounding them. In some disciplines, socialization refers to the deliberate shaping of the individual. As such, the term may cover both deliberate and informal enculturation.
The process of learning and absorbing culture need not be social, direct or conscious. Cultural transmission can occur in various forms, though the most common social methods include observing other individuals, being taught or being instructed. Less obvious mechanisms include learning one's culture from the media, the information environment and various social technologies, which can lead to cultural transmission and adaptation across societies. A good example of this is the diffusion of hip-hop culture into states and communities beyond its American origins.
Enculturation has often been studied in the context of non-immigrant African Americans.
Conrad Phillip Kottak (in Window on Humanity) writes:
Enculturation is referred to as acculturation in some academic literature. However, more recent literature has signalled a difference in meaning between the two. Whereas enculturation describes the process of learning one's own culture, acculturation denotes learning a different culture, for example, that of a host. The latter can be linked to ideas of a culture shock, which describes an emotionally-jarring disconnect between one's old and new culture cues.
Famously, the sociologist Talcott Parsons once described children as "barbarians" of a sort, since they are fundamentally uncultured.
How enculturation occurs
When minorities come to the U.S., they may identify fully with their racial or ethnic heritage before undergoing enculturation. Enculturation can happen in several ways. Direct teaching means that one's family, instructors, or other members of society explicitly teach certain beliefs, values, or expected norms of behavior. Parents may play a vital role in teaching their children standard behavior for their culture, including table manners and some aspects of polite social interaction. Strict familial and societal teaching, which often uses different forms of positive and negative reinforcement to shape behavior, can lead a person to adhere closely to their religious convictions and customs. Schools also provide a formal setting for learning national values, such as honoring a country's flag, national anthem, and other significant patriotic symbols.
Participatory learning occurs as individuals take an active role in interacting with their environment and culture. Through their own engagement in meaningful activities, they learn the socio-cultural norms of their area and may adopt related qualities and values. For example, if a school organizes an outing to collect litter at a public park, the activity helps instill the values of respect for nature and environmental protection. Religious traditions often emphasize participatory learning; for instance, children who take part in singing hymns during Christmas will absorb the values and practices of the occasion.
Observational learning is when knowledge is gained primarily by observing and imitating others. As long as an individual identifies with a model, believes that imitating the model will lead to good outcomes, and feels capable of imitating the behavior, learning can occur without any explicit instruction. For example, a child fortunate enough to be born to parents in a caring relationship will learn to be affectionate and attentive in their own future relationships.
See also
Civil society
Dual inheritance theory
Education
Educational anthropology
Ethnocentrism
Indoctrination
Intercultural competence
Mores
Norm (philosophy)
Norm (sociology)
Peer pressure
Transculturation
References
Bibliography
Further reading
External links
Enculturation and Acculturation
Community empowerment
Concepts of moral character, historical and contemporary (Stanford Encyclopedia of Philosophy)
Cultural concepts
Cultural studies
Interculturalism
Nyaya
Nyāya (Sanskrit: न्यायः, IAST: nyāyaḥ), literally meaning "justice", "rules", "method" or "judgment", is one of the six orthodox (Āstika) schools of Hindu philosophy. Nyāya's most significant contributions to Indian philosophy were its systematic development of the theory of logic, its methodology, and its treatises on epistemology.
Nyāya school's epistemology accepts four out of six Pramanas as reliable means of gaining knowledge – Pratyakṣa (perception), Anumāṇa (inference), Upamāna (comparison and analogy) and Śabda (word, testimony of past or present reliable experts). In its metaphysics, the Nyāya school is closer to the Vaisheshika school of Hinduism than to others. It holds that human suffering results from mistakes/defects produced by activity under wrong knowledge (notions and ignorance). Moksha (liberation), it states, is gained through right knowledge. This premise led Nyāya to concern itself with epistemology, that is, the reliable means to gain correct knowledge and to remove wrong notions. False knowledge is not merely ignorance to Naiyyayikas; it includes delusion. Correct knowledge is discovering and overcoming one's delusions, and understanding the true nature of the soul, self and reality.
Naiyyayika scholars approached philosophy as a form of direct realism, stating that anything that really exists is in principle humanly knowable. To them, correct knowledge and understanding is different from simple, reflexive cognition; it requires Anuvyavasaya (अनुव्यवसाय, cross-examination of cognition, reflective cognition of what one thinks one knows). An influential collection of texts on logic and reason is the Nyāya Sūtras, attributed to Aksapada Gautama, variously estimated to have been composed between the 6th century BCE and the 2nd century CE.
The Nyāya school shares some of its methodology and its foundations concerning human suffering with Buddhism; however, a key difference between the two is that Buddhism believes that there is neither a soul nor a self, whereas the Nyāya school, like some other schools of Hinduism such as Dvaita and Viśiṣṭādvaita, believes that there is a soul and self, with liberation (mokṣa) as a state of removal of ignorance and wrong knowledge, the gain of correct knowledge, and unimpeded continuation of self.
Etymology
Nyaya (न्याय) is a Sanskrit word which means justice, equality for all beings, especially a collection of general or universal rules. In some contexts, it means model, axiom, plan, legal proceeding, judicial sentence, or judgment. Tracing its Sanskrit etymology, Nyaya could also mean "that which shows the way". In the theory of logic, and in Indian texts discussing it, the term also refers to an argument consisting of an enthymeme, or sometimes to any syllogism. In a philosophical context, Nyaya encompasses propriety, logic and method.
Panini, the revered Sanskrit grammarian, derives "Nyaya" from the root "i", which conveys the same meaning as "gam" – to go. "Nyaya", signifying logic, is therefore etymologically identical with "nigama", the conclusion of a syllogism.
Nyaya is related to several other concepts and words used in Indian philosophies: Hetu-vidya (science of causes), Anviksiki (science of inquiry, systematic philosophy), Pramana-sastra (epistemology, science of correct knowledge), Tattva-sastra (science of categories), Tarka-vidya (science of reasoning, innovation, synthesis), Vadartha (science of discussion) and Phakkika-sastra (science of uncovering sophism, fraud, error, finding fakes). Some of these subsume or deploy the tools of Nyaya.
Development
The historical development of Nyāya school is unclear, although Nasadiya hymns of Book 10 Chapter 129 of Rigveda recite its spiritual questions in logical propositions. In early centuries BCE, states Clooney, the early Nyāya scholars began compiling the science of rational, coherent inquiry and pursuit of knowledge.
Foundational Text
By the 2nd century CE, Aksapada Gautama had composed the Nyāya Sūtras, a foundational text for Nyāya, which primarily discusses logic, methodology and epistemology. Gautama is also known as Aksapada and Dirghatapas. The names Gotama and Gautama point to the family to which he belonged, while the names Aksapada and Dirghatapas refer respectively to his meditative habit and his practice of long penance. The people of Mithila (modern Darbhanga in North Bihar) ascribe the foundation of Nyāya philosophy to Gautama, husband of Ahalya, and point out as the place of his birth a village named Gautamasthana, where a fair is held every year on the 9th day of the lunar month of Chaitra (March–April). It is situated 28 miles north-east of Darbhanga.
Commentarial Tradition
Concepts in the foundational text, the Nyaya Sutras, were clarified through a tradition of commentaries. Commentaries were also a means to defend the philosophy from misinterpretations by scholars of other traditions.
The Nyāya scholars that followed refined, expanded, and applied the Nyaya Sutras to spiritual questions. While the early Nyaya scholars published little to no analysis on whether supernatural power or God exists, they did apply their insights into reason and reliable means to knowledge to the questions of nature of existence, spirituality, happiness and moksha. Later Nyāya scholars, such as Udayana, examined various arguments on theism and attempted to prove existence of God. Other Nyāya scholars offered arguments to disprove the existence of God.
The most important contribution made by the Nyāya school to Hindu thought has been its treatises on epistemology and system of logic that, subsequently, has been adopted by the majority of the other Indian schools.
Sixteen categories (padārthas)
The Nyāya metaphysics recognizes sixteen padarthas or categories and includes all six (or seven) categories of the Vaisheshika in the second one of them, called prameya.
These sixteen categories are:
Methods and objects of inquiry
pramāṇa (valid means of knowledge or knowledge sources),
prameya (objects of valid knowledge),
Conditions and the components of inquiry
saṁśaya (doubt),
prayojana (aim),
dṛṣṭānta (example),
siddhānta (conclusion or accepted position),
avayava (members of syllogism or inferential components),
tarka (hypothetical/suppositional reasoning),
nirṇaya (settlement or certainty),
Forms of and strategies for debate
vāda (truth-directed debate),
jalpa (victory-directed debate),
vitaṇḍā (destructive debate),
hetvābhāsa (defective reasons),
chala (tricks),
jāti (sophisticated refutation or misleading/futile objections) and
nigrahasthāna (point of defeat or clinchers).
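As an illustration of the inferential components (avayava) listed above: Nyāya commentaries traditionally use the stock fire-and-smoke argument, stated as a five-membered syllogism. The sketch below gives the five members and, loosely, a rendering in modern logical notation; the modern notation is an approximation for comparison only, not part of the Nyāya texts.

```latex
% Stock Nyāya five-membered syllogism (avayava): inferring fire from smoke.
%   1. pratijñā (thesis):      the hill has fire;
%   2. hetu (reason):          because it has smoke;
%   3. udāharaṇa (example):    whatever has smoke has fire, as in a kitchen hearth;
%   4. upanaya (application):  the hill, too, has smoke;
%   5. nigamana (conclusion):  therefore, the hill has fire.
% Loosely, in modern notation:
\forall x\,\bigl(\mathrm{Smoke}(x) \rightarrow \mathrm{Fire}(x)\bigr),\quad
\mathrm{Smoke}(\mathit{hill}) \;\vdash\; \mathrm{Fire}(\mathit{hill})
```

Note that the Nyāya form adds the concrete example (udāharaṇa) and the explicit application (upanaya) that a bare two-premise rendering omits.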
According to Matthew Dasti and Stephen Phillips, it may be useful to interpret the word jnana as cognition rather than knowledge when studying the Nyāya system.
Epistemology
The Nyāya school of Hinduism developed and refined many treatises on epistemology that widely influenced other schools of Hinduism. In Nyaya philosophy, knowledge is a type of "awareness event that is in accordance with its object by virtue of having been produced by a well-functioning epistemic instrument." Pramāṇa, a Sanskrit word, literally means "means of knowledge". It encompasses one or more reliable and valid means by which human beings gain accurate, true knowledge. The focus of Pramana is how correct knowledge can be acquired, how one knows, how one doesn't, and to what extent knowledge pertinent to someone or something can be acquired. By definition, pramāṇas are factive, i.e., they cannot produce false belief; so, while statements can be false, testimony cannot be false.
Nyāya scholars accepted four valid means (pramāṇa) of obtaining valid knowledge (prameya) –
perception (pratyakṣa),
inference (anumāna),
comparison (upamāna), and
word/testimony of reliable sources (śabda).
The Nyāya scholars, along with those from other schools of Hinduism, also developed a theory of error, to methodically establish means to identify errors and the process by which errors are made in human pursuit of knowledge. These include saṁśaya (problems, inconsistencies, doubts) and viparyaya (contrariness, errors) which can be corrected or resolved by a systematic process of tarka (reasoning, technique).
Pratyaksha (perception)
Pratyakṣa (perception) occupies the foremost position in the Nyāya epistemology. Perception can be of two types, laukika (ordinary) and alaukika (extraordinary). Ordinary perception is defined by Akṣapāda Gautama in his Nyāya Sutra (I, i.4) as a 'non-erroneous cognition which is produced by the intercourse of sense-organs with the objects'.
Indian texts identify four requirements for correct perception:
Indriyarthasannikarsa (direct experience by one's sensory organ(s) of the object, whatever is being studied),
Avyapadesya (non-verbal; correct perception is not through hearsay, according to ancient Indian scholars, whereby one's sensory organ relies on accepting or rejecting someone else's perception),
Avyabhicara (does not wander; correct perception does not change, nor is it the result of deception, because one's sensory organ or means of observation is drifting, defective, or suspect) and
Vyavasayatmaka (definite; correct perception excludes judgments of doubt, whether from a failure to observe all the details, from mixing inference with observation and observing what one wants to observe, or from not observing what one does not want to observe).
Ordinary perception to Nyāya scholars was based on direct experience of reality by eyes, ears, nose, touch and taste. Extraordinary perception included yogaja or pratibha (intuition), samanyalaksanapratyaksa (a form of induction from perceived specifics to a universal), and jnanalaksanapratyaksa (a form of perception of prior processes and previous states of a 'topic of study' by observing its current state).
Determinate and indeterminate perception
The Naiyyayika maintains two modes or stages in perception. The first is called nirvikalpa (indeterminate), when one just perceives an object without being able to know its features, and the second savikalpa (determinate), when one is able to clearly know an object. All laukika and alaukika pratyakshas are savikalpa, but each is necessarily preceded by an earlier stage when perception is indeterminate. Vātsyāyana says that if an object is perceived with its name we have determinate perception, but if it is perceived without a name, we have indeterminate perception. Jayanta Bhatta says that indeterminate perception apprehends substance, qualities, actions, and universals as separate and indistinct, without any association with a name, whereas determinate perception apprehends them all together with a name. There is yet another stage called Pratyabhijñā, when one is able to re-recognise something on the basis of memory.
Anumāna (inference)
Anumāna (inference) is one of the most important contributions of the Nyāya. It can be of two types: inference for oneself (Svarthanumana, where one does not need any formal procedure, and at most the last three of the five steps), and inference for others (Parathanumana, which requires a systematic methodology of five steps). Inference can also be classified into three types: Purvavat (inferring an unperceived effect from a perceived cause), Sheshavat (inferring an unperceived cause from a perceived effect) and Samanyatodrishta (when inference is not based on causation but on uniformity of co-existence). A detailed analysis of error is also given, explaining when anumana could be false.
Theory of inference
The methodology of inference involves a combination of induction and deduction by moving from particular to particular via generality. It has five steps, as in the example shown:
There is fire on the hill (called Pratijñā, required to be proved)
Because there is smoke there (called Hetu, reason)
Wherever there is smoke, there is fire, e.g. in a kitchen (called Udāhārana, example of vyāpti)
The hill has smoke that is pervaded by fire (called Upanaya, reaffirmation or application)
Therefore, there is fire on the hill (called Nigamana, conclusion)
In Nyāya terminology for this example, the hill would be the paksha (minor term), the fire is the sādhya (major term), the smoke is the hetu, and the relationship between the smoke and the fire is the vyāpti (middle term).
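As an interpretive gloss (a modern reconstruction, not part of the classical formulation), the deductive core of this example can be written in first-order notation, with the vyāpti rendered as a universal conditional:

```latex
% vyāpti (stated in the Udāhārana): wherever there is smoke, there is fire
\forall x \, \big( \mathrm{smoke}(x) \rightarrow \mathrm{fire}(x) \big)

% hetu applied to the paksha (Upanaya): the hill has smoke
\mathrm{smoke}(\mathrm{hill})

% nigamana (conclusion): therefore there is fire on the hill
\therefore \; \mathrm{fire}(\mathrm{hill})
```

In this sketch the Pratijñā corresponds to the conclusion stated in advance as the thesis to be proved, while the kitchen example in the Udāhārana marks the inductive grounding of the vyāpti, which has no counterpart in the purely deductive schema.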
Hetu further has five characteristics:
It must be present in the Paksha (the case under consideration),
It must be present in all positive instances (sapaksha, or homologues),
It must be absent in all negative instances (vipaksha, or heterologues),
It must not be incompatible with an established truth (abādhitatva), and
There must be no other evidence for the opposite thesis (asatpratipakshitva).
Inference Fallacies (hetvābhasa)
The fallacies in anumāna (hetvābhasa) may occur due to the following:
Asiddha: It is the unproved hetu that results in this fallacy.
Ashrayasiddha: If the Paksha (minor term) itself is unreal, then there cannot be a locus of the hetu. E.g., the sky-lotus is fragrant, because it is a lotus like any other lotus.
Svarupasiddha: The hetu cannot exist in the paksha at all. E.g., sound is a quality, because it is visible.
Vyapyatvasiddha: Conditional hetu. 'Wherever there is fire, there is smoke.' The presence of smoke is due to wet fuel.
Savyabhichara: This is the fallacy of irregular hetu.
Sadharana: The hetu is too wide. It is present in both sapaksha and vipaksha. 'The hill has fire because it is knowable.'
Asadharana: The hetu is too narrow. It is present only in the Paksha, not in the Sapaksha or the Vipaksha. 'Sound is eternal because it is audible.'
Anupasamhari: Here the hetu is non-exclusive. The hetu is all-inclusive and leaves nothing by way of sapaksha or vipaksha. E.g., 'all things are non-eternal, because they are knowable'.
Satpratipaksa: Here the hetu is contradicted by another hetu. If both have equal force, then nothing follows. 'Sound is eternal, because it is audible', and 'Sound is non-eternal, because it is produced'. Here 'audible' is counterbalanced by 'produced' and both are of equal force.
Badhita: When another proof (as by perception) definitely contradicts and disproves the middle term (hetu). 'Fire is cold because it is a substance'.
Viruddha: Instead of proving the thesis, the hetu proves the opposite. 'Sound is eternal because it is produced.'
Upamāna (comparison, analogy)
Upamāna (उपमान) means comparison and analogy. Upamāna, states Lochtefeld, may be explained with the example of a traveller who has never visited lands or islands with an endemic population of wildlife. He or she is told, by someone who has been there, that in those lands you see an animal that sort of looks like a cow, grazes like a cow, but is different from a cow in such and such a way. Such use of analogy and comparison is, state the Indian epistemologists, a valid means of conditional knowledge, as it helps the traveller identify the new animal later. The subject of comparison is formally called upameyam, the object of comparison is called upamānam, while the attribute(s) are identified as sāmānya. Thus, explains Monier Williams, if a boy says "her face is like the moon in charmingness", "her face" is upameyam, the moon is upamānam, and charmingness is sāmānya. The 7th-century text Bhaṭṭikāvya in verses 10.28 through 10.63 discusses many types of comparisons and analogies, identifying when this epistemic method is more useful and reliable, and when it is not. In various ancient and medieval texts of Hinduism, 32 types of Upamāna and their value in epistemology are debated.
Śabda (word, testimony)
Śabda (शब्द) means relying on word, the testimony of past or present reliable experts. Hiriyanna explains Sabda-pramana as a concept which means testimony of a reliable and trustworthy person (āptavākya). The schools of Hinduism which consider it epistemically valid suggest that a human being needs to know numerous facts, and with the limited time and energy available, he can learn only a fraction of those facts and truths directly. He must rely on others, his parents, family, friends, teachers, ancestors and kindred members of society, to rapidly acquire and share knowledge and thereby enrich each other's lives. This means of gaining proper knowledge may be spoken or written, but it is always through Sabda (words). The reliability of the source is important, and legitimate knowledge can only come from the Sabda of reliable sources. The disagreement between the schools of Hinduism has been on how to establish reliability. Some schools, such as Carvaka, state that this is never possible, and therefore Sabda is not a proper pramana. Other schools debate means to establish reliability.
Testimony can be of two types, Vaidika (Vedic), which are the words of the four sacred Vedas, and Laukika, or words and writings of trustworthy human beings. Vaidika testimony is preferred over Laukika testimony. Laukika-sourced knowledge must be questioned and revised as more trustworthy knowledge becomes available.
Direct Realism
In Nyaya philosophy, direct realism asserts that our cognitions are informational states revealing external objects. According to Nyaya, the world consists of stable, three-dimensional objects, and its system of categories accurately mirrors reality's structure. Nyaya philosophy emphasizes the importance of universals, qualities, and relations in understanding the organization of the world. These foundational elements are believed to play essential roles in the phenomenological, causal, and logical organization of the world, and a crucial role in the classification of objects.
Comparison with other schools of Hinduism
Each school of Hinduism has its own treatises on epistemology, with a different number of pramanas. For example, compared to the Nyāya school's four pramanas, the Carvaka school has just one (perception), while the Advaita Vedanta school recognizes six means to reliable knowledge.
Theory of causation
A cause is defined as an unconditional and invariable antecedent of an effect and an effect as an unconditional and invariable consequent of a cause. The same cause produces the same effect; and the same effect is produced by the same cause. The cause is not present in any hidden form whatsoever in its effect.
The following conditions should be met:
The cause must be antecedent [Purvavrtti]
Invariability [Niyatapurvavrtti]
Unconditionality [Ananyathasiddha]
Nyaya recognizes five kinds of accidental antecedents [Anyathasiddha]
A mere accidental antecedent, e.g. the colour of the potter's cloth.
A remote cause is not a cause, because it is not unconditional, e.g. the father of the potter.
The co-effects of a cause are not causally related.
Eternal substances, or eternal conditions are not unconditional antecedents, e.g. space.
Unnecessary things, e.g. the donkey of the potter.
Nyaya recognizes three kinds of cause:
Samavayi, material cause, e.g. thread of a cloth.
Asamavayi, non-inherent cause, e.g. the colour of the thread, which gives the colour of the cloth.
Nimitta, efficient cause, e.g. the weaver of the cloth.
Anyathakhyativada (theory of error)
The Nyāya theory of error is similar to that of Kumarila's Viparita-khyati (see Mimamsa). The Naiyyayikas also believe, like Kumarila, that error is due to a wrong synthesis of the presented and represented objects. The represented object is confused with the presented one. The word 'anyatha' means 'elsewise' and 'elsewhere', and both of these meanings are brought out in error. The presented object is perceived elsewise and the represented object exists elsewhere. They further maintain that knowledge is not intrinsically valid but becomes so due to extraneous conditions (paratah prāmāṇya, in both validity and invalidity).
On God and liberation
Early Naiyyayikas wrote very little about Ishvara (literally, the Supreme Soul). The available evidence suggests that early Nyāya scholars were non-theistic or atheists. Later, and over time, Nyāya scholars tried to apply some of their epistemological insights and methodology to the question: does God exist? Some offered arguments against, and some in favor.
Arguments that God does not exist
Nyāya Sūtra's Book 4, Chapter 1, verses 19–21, postulates that God exists, states a consequence, then presents contrary evidence, and from the contradiction concludes that the postulate must be invalid.
A literal interpretation of the three verses suggests that Nyāya school rejected the need for a God for the efficacy of human activity. Since human action and results do not require assumption or need of the existence of God, sutra IV.1.21 is seen as a criticism of the "existence of God and theism postulate". The context of the above verses includes various efficient causes. Nyāya Sūtra verses IV.1.22 to IV.1.24, for example, examine the hypothesis that "random chance" explains the world, after these Indian scholars had rejected God as the efficient cause.
Arguments that God exists
In Nyayakusumanjali, Udayana gives the following nine arguments to prove the existence of a creative God, and also refutes the existing objections and questions by the atheistic systems of Carvaka, Mimamsa, Buddhists, Jains and Samkhya:
Kāryāt (lit., from effect): The world is an effect. All effects have an efficient cause. Hence the world must have an efficient cause. That efficient cause is God.
Āyojanāt (lit., from combination): Atoms are inactive. To form a substance, they must combine. To combine, they must move. Nothing moves without intelligence and a source of motion. Since we perceive substance, some intelligent source must have moved the inactive atoms. That intelligent source is God.
Dhŗtyādéḥ (lit., from support): Something sustains this world. Something destroys this world. Unintelligent Adrsta (unseen principles of nature) cannot do this. We must infer that something intelligent is behind it. That is God.
Padāt (lit., from word): Each word has meaning and represents an object. This representational power of words has a cause. That cause is God.
Pratyayataḥ (lit., from faith): Vedas are infallible. Human beings are fallible. Infallible Vedas cannot have been authored by fallible human beings. Someone authored the infallible Vedas. That author is God.
Shrutéḥ (lit., from scriptures): The infallible Vedas testify to the existence of God. Thus God exists.
Vākyāt (lit., from precepts): Vedas deal with moral laws. These are divine. Divine injunctions and prohibitions can only come from a divine creator of laws. That divine creator is God.
Samkhyāviśeşāt (lit., from the specialty of numbers): By rules of perception, only the number "one" can ever be directly perceived. All numbers other than one are inferences and concepts created by consciousness. When man is born, his mind is incapable of inferences and concepts. He develops consciousness as he develops. The development of consciousness is self-evident and proven by man's ability to form numerically perfect conceptions. This ability to conceive numerically perfect concepts must depend on something. That something is divine consciousness. So God must exist.
Adŗşţāt (lit., from the unforeseen): Everybody reaps the fruits of his own actions. Merits and demerits accrue from his own actions. An Unseen Power keeps a balance sheet of the merit and demerit. But since this Unseen Power is Unintelligent, it needs intelligent guidance to work. That intelligent guide is God.
Naiyyayikas characterize Ishvara as absent of adharma, false knowledge, and error; and possessing dharma, right knowledge, and equanimity. Additionally, Ishvara is omnipotent and acts in a way that is good for his creatures.
Liberation
The Naiyyayikas believe that the bondage of the world is due to false knowledge, which can be removed by constantly thinking of its opposite (pratipakshabhavana), namely, the true knowledge. The opening aphorism of the Nyāya Sūtra states that only true knowledge leads to niḥśreyasa (liberation). However, the Nyāya school also maintains that God's grace is essential for obtaining true knowledge. Jayanta, in his Nyayamanjari, describes salvation as a passive stage of the self in its natural purity, unassociated with pleasure, pain, knowledge and willingness.
Literature
In the Yājñavalkya Smṛti, Nyāya is mentioned as one of the fourteen principal branches of learning. The Matsya-Purāṇa states that knowledge of Nyāya came from the mouth of Brahmā. The Mahābhārata also mentions principles of Nyāya.
The earliest text of the Nyāya school is the Nyāya Sūtra of Akṣapāda Gautama. The text is divided into five books, each having two sections. Vātsyāyana's Nyāya Bhāṣya is a classic commentary on the Nyāya Sūtra. Udyotakara's Nyāya Vārttika (6th century CE) was written to defend the school against the attacks made by Dignāga. Vācaspati Miśra's Nyāya-Vārttika-tātparyaṭīkā (9th century CE) is the next major exposition of this school; two other texts are also attributed to him. Udayana's Nyāya-Vārttika-tātparyaṭīkā-pariśuddhi (984 CE) is an important commentary on Vācaspati Miśra's treatise, and his Nyayakusumanjali is the first systematic account of theistic Nyāya; several other works are attributed to him as well. Jayanta Bhatta's Nyayamanjari (10th century CE) is basically an independent work, and another work of the 10th century CE offers a survey of Nyāya philosophy.
The later works on Nyāya accepted the Vaiśeṣika categories, and a notable treatise of this syncretist school dates from the 12th century CE. Keśava Miśra's Tarkabhāṣā (13th century CE) is another important work of this school.
Gaṅgeśa Upādhyāya's Tattvacintāmaṇi (14th century CE) is the first major treatise of the new school, Navya-Nyāya. His son Vardhamāna's work, though a commentary on Udayana's treatise, incorporated his father's views. Jayadeva wrote a commentary on the Tattvacintāmaṇi (14th century CE). The first great work of the Navadvipa school of Navya-Nyāya appeared in the 16th century CE, and further important works of this school followed in the 16th and 17th centuries. The commentaries by Jagadish Tarkalankar (17th century CE) and Gadadhar Bhattacharya (17th century CE) are the last two notable works of this school.
Annaṁbhaṭṭa (17th century CE) tried to develop a consistent system by combining the ancient and the new schools. His Tarka-Sangraha and its commentary remain the popular manuals of this school.
Commentaries on the Nyaya-Sutra
Numerous commentaries have been written on Nyāya-Sutra since its composition. Some of these commentaries are available on www.archive.org for reference. A few of the commentaries are mentioned below:
Nyaya-Sutra by Gotama or Aksapada
Nyaya-Bhasya by Vatsyayana
Nyaya-Varttika by Udyotakar
Nyaya-Varttika tatparya-tika by Vacaspati Misra
Nyaya-Varttika-tatparyatika-parisuddhi by Udayana
Parisuddhiprakasa by Vardhamana
Vardhamanendu by Padmanabha Misra
Nyayalankara by Srikantha
Nyayalankara Vrtti by Jayanta
Nyaya-manjari by Jayanta
Nyaya-Vrtti by Abhayatilakopadhyaya
Nyaya-Vrtti by Visvanatha
Mitabhasini Vrtti by Mahadeva Vedanti
Nyayaprakasa by Kesava Misra
Nyayabodhini by Govardhana
Nyaya Sutra Vyakhya by Mathuranatha
Differences from Western philosophy
A priori knowledge
Nyaya philosophy does not establish a category of a priori knowledge. This choice may be due to Nyaya considering only de re knowledge, not de dicto knowledge.
Logic
The essential features of logic in the Western tradition were well captured by the logician Alonzo Church.
Thus, the basic features of Western logic are: it deals with a study of 'propositions', especially their 'logical form' as abstracted from their 'content' or 'matter'; it deals with 'general conditions of valid inference', wherein the truth or otherwise of the premises has no bearing on the 'logical soundness or validity' of an inference; and it achieves this by taking recourse to a symbolic language that has little to do with natural languages. The main concern of Western logic, in its entire course of development, has been one of systematising patterns of mathematical reasoning, with the mathematical objects being thought of as existing either in an independent ideal world or in a formal domain. Indian logic, however, does not deal with ideal entities, such as propositions, logical truth as distinguished from material truth, or with purely symbolic languages that apparently have nothing to do with natural languages.
The central concern of Indian logic as founded in nyāya is epistemology, or the theory of knowledge. Thus Indian logic is not concerned merely with making arguments in formal mathematics rigorous and precise, but attends to the much larger issue of providing rigour to the arguments encountered in the natural sciences (including mathematics, which in the Indian tradition has the attributes of a natural science, not those of a collection of context-free formal statements) and in philosophical discourse. Inference in Indian logic is 'deductive and inductive', 'formal as well as material'. In essence, it is the method of scientific enquiry. Indian 'formal logic' is thus not 'formal' in the sense generally understood: in Indian logic 'form' cannot be entirely separated from 'content'. In fact, great care is exercised to exclude from logical discourse terms which have no referential content. No statement known to be false is admitted as a premise in a valid argument. Thus, the 'method of indirect proof' (reductio ad absurdum) is not accepted as a valid method, in either Indian philosophy or Indian mathematics, for proving the existence of an entity whose existence is not demonstrable (even in principle) by other (direct) means of proof.
Indian logic does not make any attempt to develop a purely symbolic and content-independent or 'formal' language as the vehicle of logical analysis. Instead, what Indian logic, especially in its later phase of Navya-Nyāya starting with the work of Gāngeśa Upādhyāya of the 14th century, has developed is a technical language which is based on the natural language Sanskrit, yet avoids 'inexactness' and 'misleading irregularities' by various technical devices. This technical language, being based on the natural language Sanskrit, inherits a certain natural structure and interpretation, and sensitivity to the context of enquiry. On the other hand, the symbolic formal systems of Western logic, though considerably influenced in their structure (say, in quantification) by the basic patterns discernible in European languages, are professedly purely symbolic, carrying no interpretation whatsoever; such interpretations are supposed to be supplied separately in the specific context of the particular field of enquiry 'employing' the symbolic formal system.
See also
Nyāya Sūtras
Ancient Mithila University
Gautama Buddha
Gautama Maharishi
Hindu philosophy
List of teachers of Nyaya
Neti neti "not this", "neither this" (neti is sandhi from na-iti "not so").
Śāstra pramāṇam in Hinduism
Tarka-Sangraha
Padārtha
Vaisheshika#The Categories or Padārtha
Categories (Aristotle)
References
Further reading
Kisor Kumar Chakrabarti (1995), Definition and induction: a historical and comparative study, University of Hawaii Press.
Gangesa (2010), Classical Indian philosophy of induction: the Nyāya viewpoint, translated by Kisor Kumar Chakrabarti.
Gangesa (2020), Tattva-cintā-maṇi ("Jewel"), translated by Stephen Phillips as Jewel of Reflection on the Truth about Epistemology, 3 volumes, London: Bloomsbury.
Gopi Kaviraj (1961), Gleanings from the history and bibliography of the Nyaya-Vaisesika literature, Indian Studies: Past & Present.
Arthur Keith (1921), Indian logic and atomism: an exposition of the Nyāya and Vaiçeṣika systems, Greenwood Press.
Bimal Matilal (1977), A History of Indian Literature – Nyāya-Vaiśeṣika, Otto Harrassowitz Verlag.
Stephen Phillips (2012), Epistemology in classical India: the knowledge sources of the Nyāya school, Routledge.
Karl Potter (1977), Indian metaphysics and epistemology: the tradition of Nyāya-Vaiśeṣika up to Gaṅgeśa, Princeton University Press.
Navya-Nyaya school
Bimal Matilal, The Navya-nyāya doctrine of negation: the semantics and ontology of negative statements, Harvard University Press.
Daniel H. H. Ingalls, Materials for the study of Navya-nyāya logic, Harvard University Press.
External links
Lectures on Nyaya The Oxford Centre for Hindu Studies, Oxford University
Ganeri, Jonardon, "Analytic Philosophy in Early Modern India", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy.
Schools and traditions in ancient Indian philosophy
Atomism
Hindu philosophy
History of logic
Āstika
Virtue ethics
Virtue ethics (also aretaic ethics, from Greek ἀρετή, aretē, 'excellence' or 'virtue') is a philosophical approach that treats virtue and character as the primary subjects of ethics, in contrast to other ethical systems that put consequences of voluntary acts, principles or rules of conduct, or obedience to divine authority in the primary role.
Virtue ethics is usually contrasted with two other major approaches in ethics, consequentialism and deontology, which make the goodness of outcomes of an action (consequentialism) and the concept of moral duty (deontology) central. While virtue ethics does not necessarily deny the importance to ethics of goodness of states of affairs or of moral duties, it emphasizes virtue, and sometimes other concepts, like eudaimonia, to an extent that other ethics theories do not.
Key concepts
Virtue and vice
In virtue ethics, a virtue is a characteristic disposition to think, feel, and act well in some domain of life. In contrast, a vice is a characteristic disposition to think, feel, and act poorly. Virtues are not everyday habits; they are character traits, in the sense that they are central to someone’s personality and what they are like as a person.
In early versions and some modern versions of virtue ethics, a virtue is defined as a character trait that promotes or exhibits human "flourishing and wellbeing" in the person who exhibits it. Some modern versions of virtue ethics do not define virtues in terms of well-being or flourishing, and some go so far as to define virtues as traits that tend to promote some other good that is defined independently of the virtues, thereby subsuming virtue ethics under (or somehow merging it with) consequentialist ethics.
To Aristotle, a virtue was not a skill that made you better able to achieve eudaimonia but was itself an expression of eudaimonia, that is, eudaimonia in activity.
In contrast with consequentialist and deontological ethical systems, in which one may be called upon to do the right thing even though it is not in one's own interests (one is to do it instead for the greater good, or out of duty), in virtue ethics, one does the right thing because it is in one's own interests. Part of training in practical virtue ethics is to come to see the coincidence of one's enlightened self-interest and the practice of the virtues, so that one is virtuous willingly, gladly, and enthusiastically because one knows that being virtuous is the best thing one can do with oneself.
Virtue and emotion
In ancient Greek and modern eudaimonic virtue ethics, virtues and vices are complex dispositions that involve both affective and intellectual components. That is, they are dispositions that involve both being able to reason well about the right thing to do (see below on phronesis), and also to engage emotions and feelings correctly.
For example, a generous person can reason well about when and how to help people, and such a person also helps people with pleasure and without conflict. In this, virtuous people are contrasted not only with vicious people (who reason poorly about what to do and are emotionally attached to the wrong things) and with the incontinent (who are tempted by their feelings into doing the wrong thing even though they know what is right), but also with the merely continent (whose emotions tempt them toward doing the wrong thing but whose strength of will lets them do what they know is right).
According to Rosalind Hursthouse, in Aristotelian virtue ethics, the emotions have moral significance because "virtues (and vices) are all dispositions not only to act, but to feel emotions, as reactions as well as impulses to action... [and] In the person with the virtues, these emotions will be felt on the right occasions, toward the right people or objects, for the right reasons, where 'right' means 'correct'..."
Phronesis and eudaimonia
Phronesis (prudence, practical virtue, or practical wisdom) is an acquired trait that enables its possessor to identify the best thing to do in any given situation. Unlike theoretical wisdom, practical reason results in action or decision. As John McDowell puts it, practical wisdom involves a "perceptual sensitivity" to what a situation requires.
Eudaimonia is a state variously translated from Greek as 'well-being', 'happiness', 'blessedness', and, in the context of virtue ethics, 'human flourishing'. Eudaimonia in this sense is not a subjective, but an objective, state. It characterizes the well-lived life.
According to Aristotle, the most prominent exponent of eudaimonia in the Western philosophical tradition, eudaimonia defines the goal of human life. It consists of exercising the characteristic human quality, reason, as the soul's most proper and nourishing activity. In his Nicomachean Ethics, Aristotle, like Plato before him, argued that the pursuit of eudaimonia is an "activity of the soul in accordance with perfect virtue", which further could only properly be exercised in the characteristic human community, the polis or city-state.
Although eudaimonia was first popularized by Aristotle, it now belongs to the tradition of virtue theories generally. For the virtue theorist, eudaimonia describes that state achieved by the person who lives the proper human life, an outcome that can be reached by practicing the virtues. A virtue is a habit or quality that allows the bearer to succeed at his, her, or its purpose. The virtue of a knife, for example, is sharpness; among the virtues of a racehorse is speed. Thus, to identify the virtues for human beings, one must have an account of what is the human purpose.
Not all modern virtue ethics theories are eudaimonic; some place another end in place of eudaimonia, while others are non-teleological: that is, they do not account for virtues in terms of the results that the practice of the virtues produce or tend to produce.
History of virtue
Like much of the Western tradition, virtue theory originated in ancient Greek philosophy.
Virtue ethics began with Socrates, and was subsequently developed further by Plato, Aristotle, and the Stoics. Virtue ethics concentrates on the character of the individual, rather than the acts (or consequences thereof) of the individual. There is debate among adherents of virtue ethics concerning what specific virtues are praiseworthy. However, most theorists agree that ethics is demonstrated by the practice of virtues.
Plato and Aristotle's treatments of virtues are not the same. Plato believes virtue is effectively an end to be sought, for which a friend might be a useful means. Aristotle states that the virtues function more as means to safeguard human relations, particularly authentic friendship, without which one's quest for happiness is frustrated.
Discussion of what were known as the four cardinal virtues—wisdom, justice, fortitude, and temperance—can be found in Plato's Republic. The virtues also figure prominently in Aristotle's ethical theory found in Nicomachean Ethics.
Virtue theory was inserted into the study of history by moralistic historians such as Livy, Plutarch, and Tacitus. The Greek idea of the virtues was passed on in Roman philosophy through Cicero and later incorporated into Christian moral theology by Ambrose of Milan. During the scholastic period, the most comprehensive consideration of the virtues from a theological perspective was provided by Thomas Aquinas in his Summa Theologiae and his Commentaries on the Nicomachean Ethics.
After the Reformation, Aristotle's Nicomachean Ethics continued to be the main authority for the discipline of ethics at Protestant universities until the late seventeenth century, with over fifty Protestant commentaries published on the Nicomachean Ethics before 1682.
Though the tradition receded into the background of European philosophical thought in the past few centuries, the term "virtue" remained current during this period, and in fact appears prominently in the tradition of classical republicanism or classical liberalism. This tradition was prominent in the intellectual life of 16th-century Italy, as well as 17th- and 18th-century Britain and America; indeed the term "virtue" appears frequently in the work of Tomás Fernández de Medrano, Niccolò Machiavelli, David Hume, the republicans of the English Civil War period, the 18th-century English Whigs, and the prominent figures among the Scottish Enlightenment and the American Founding Fathers.
Contemporary "aretaic turn"
Although some Enlightenment philosophers (e.g. Hume) continued to emphasise the virtues, with the ascendancy of utilitarianism and deontological ethics, virtue theory moved to the margins of Western philosophy. The contemporary revival of virtue theory is frequently traced to the philosopher Elizabeth Anscombe's 1958 essay "Modern Moral Philosophy". Following this:
In the 1976 paper "The Schizophrenia of Modern Ethical Theories", Michael Stocker summarises the main aretaic criticisms of deontological and consequentialist ethics.
Philosopher, psychologist, and encyclopedist Mortimer Adler appealed to Aristotelian ethics and the virtue theory of happiness throughout his published work.
Philippa Foot published a collection of essays in 1978 entitled Virtues and Vices.
Alasdair MacIntyre made an effort to reconstruct a virtue-based theory in dialogue with the problems of modern and postmodern thought; his works include After Virtue and Three Rival Versions of Moral Enquiry.
Paul Ricoeur accorded an important place to Aristotelian teleological ethics in his hermeneutical phenomenology of the subject, most notably in his book Oneself as Another.
Theologian Stanley Hauerwas found the language of virtue helpful in his own project.
Roger Crisp and Michael Slote edited a collection of important essays titled Virtue Ethics.
Martha Nussbaum and Amartya Sen employed virtue theory in theorising the capability approach to international development.
Julia Annas wrote The Morality of Happiness (1993).
Lawrence C. Becker identified current virtue theory with Greek Stoicism in A New Stoicism (1998).
Rosalind Hursthouse published On Virtue Ethics (1999).
Psychologist Martin Seligman drew on classical virtue ethics in conceptualizing positive psychology.
Psychologist Daniel Goleman opens his book on Emotional Intelligence with a challenge from Aristotle's Nicomachean Ethics.
Michael Sandel discusses Aristotelian ethics to support his ethical theory of justice in his book Justice: What's the Right Thing to Do?
The aretaic turn in moral philosophy is paralleled by analogous developments in other philosophical disciplines. One of these is epistemology, where a distinctive virtue epistemology was developed by Linda Zagzebski and others. In political theory, there has been discussion of "virtue politics", and in legal theory, there is a small but growing body of literature on virtue jurisprudence. The aretaic turn also exists in American constitutional theory, where proponents argue for an emphasis on virtue.
Aretaic approaches to morality, epistemology, and jurisprudence have been the subject of intense debates. One criticism focuses on the problem of guidance; opponents, such as Robert Louden in his article "Some Vices of Virtue Ethics", question whether the idea of a virtuous moral actor, believer, or judge can provide the guidance necessary for action, belief formation, or the resolution of legal disputes.
Lists of virtues
There are several lists of virtues. Socrates argued that virtue is knowledge, which suggests that there is really only one virtue. The Stoics identified four cardinal virtues: wisdom, justice, courage, and temperance. Wisdom is subdivided into good sense, good calculation, quick-wittedness, discretion, and resourcefulness. Justice is subdivided into piety, honesty, equity, and fair dealing. Courage is subdivided into endurance, confidence, high-mindedness, cheerfulness, and industriousness. Temperance or moderation is subdivided into good discipline, seemliness, modesty, and self-control.
John McDowell argues that virtue is a "perceptual capacity" to identify how one ought to act, and that all particular virtues are merely "specialized sensitivities" to a range of reasons for acting.
Aristotle's list
Aristotle identifies approximately 18 virtues that demonstrate a person is performing their human function well. He distinguished virtues pertaining to emotion and desire from those relating to the mind. The first he calls moral virtues, and the second intellectual virtues (though both are "moral" in the modern sense of the word).
Moral virtues
Aristotle suggested that each moral virtue was a mean (see golden mean) between two corresponding vices, one of excess and one of deficiency. Each intellectual virtue is a mental skill or habit by which the mind arrives at truth, affirming what is or denying what is not. In the Nicomachean Ethics he discusses about 11 moral virtues:
Intellectual virtues
Nous (intelligence), which apprehends fundamental truths (such as definitions, self-evident principles)
Episteme (science), which is skill with inferential reasoning (such as proofs, syllogisms, demonstrations)
Sophia (theoretical wisdom), which combines fundamental truths with valid, necessary inferences to reason well about unchanging truths.
Aristotle also mentions several other traits:
Gnome (good sense) – passing judgment, "sympathetic understanding"
Synesis (understanding) – comprehending what others say, does not issue commands
Phronesis (practical wisdom) – knowledge of what to do, knowledge of changing truths, issues commands
Techne (art, craftsmanship)
Aristotle's list is not the only list, however. As Alasdair MacIntyre observed in After Virtue, thinkers as diverse as Homer, the authors of the New Testament, Thomas Aquinas, and Benjamin Franklin have all proposed lists.
Criticisms
Regarding which are the most important virtues, Aristotle proposed the following nine: wisdom; prudence; justice; fortitude; courage; liberality; magnificence; magnanimity; temperance. In contrast, philosopher Walter Kaufmann proposed as the four cardinal virtues ambition/humility, love, courage, and honesty.
Proponents of virtue theory sometimes argue that a central feature of a virtue is its universal applicability. In other words, any character trait defined as a virtue must reasonably be universally regarded as a virtue for all people. According to this view, it is inconsistent to claim, for example, servility as a female virtue, while at the same time not proposing it as a male one.
Other proponents of virtue theory, notably Alasdair MacIntyre, respond to this objection by arguing that any account of the virtues must indeed be generated out of the community in which those virtues are to be practiced: the very word ethics implies "ethos". That is to say that the virtues are, and necessarily must be, grounded in a particular time and place. What counts as a virtue in Athens would be a ludicrous guide to proper behaviour in Toronto and vice versa. To take this view does not necessarily commit one to the argument that accounts of the virtues must therefore be static: moral activity—that is, attempts to contemplate and practice the virtues—can provide the cultural resources that allow people to change, albeit slowly, the ethos of their own societies.
MacIntyre appears to take this position in his seminal work on virtue ethics, After Virtue.
Another objection to virtue theory is that virtue ethics does not focus on what sorts of actions are morally permitted and which ones are not, but rather on what sort of qualities someone ought to foster in order to become a good person. In other words, while some virtue theorists may not condemn, for example, murder as an inherently immoral or impermissible sort of action, they may argue that someone who commits a murder is severely lacking in several important virtues, such as compassion and fairness. Still, antagonists of the theory often object that this particular feature of the theory makes virtue ethics useless as a universal norm of acceptable conduct suitable as a base for legislation. Some virtue theorists concede this point, but respond by opposing the very notion of legitimate legislative authority instead, effectively advocating some form of anarchism as the political ideal. Other virtue theorists argue that laws should be made by virtuous legislators, and still another group argues that it is possible to base a judicial system on the moral notion of virtues rather than rules. Aristotle himself saw his Nicomachean Ethics as a prequel to his Politics and felt that the point of politics was to create the fertile soil for a virtuous citizenry to develop in, and that one purpose of virtue was to help citizens contribute to a healthy polis.
Some virtue theorists might respond to this overall objection with the notion of a "bad act" also being an act characteristic of vice. That is to say that those acts that do not aim at virtue, or that stray from virtue, would constitute our conception of "bad behavior". Although not all virtue ethicists agree with this notion, this is one way the virtue ethicist can re-introduce the concept of the "morally impermissible". One could object that this commits an argument from ignorance by postulating that what is not virtuous is unvirtuous. In other words, just because an action or person lacks evidence of virtue does not, all else constant, imply that said action or person is unvirtuous.
Subsumed in deontology and utilitarianism
Martha Nussbaum suggested that while virtue ethics is often considered to be anti-Enlightenment, "suspicious of theory and respectful of the wisdom embodied in local practices", it is actually neither fundamentally distinct from, nor does it qualify as a rival approach to deontology and utilitarianism. She argues that philosophers from these two Enlightenment traditions often include theories of virtue. She pointed out that Kant's "Doctrine of Virtue" (in The Metaphysics of Morals) "covers most of the same topics as do classical Greek theories", "that he offers a general account of virtue, in terms of the strength of the will in overcoming wayward and selfish inclinations; that he offers detailed analyses of standard virtues such as courage and self-control, and of vices, such as avarice, mendacity, servility, and pride; that, although in general, he portrays inclination as inimical to virtue, he also recognizes that sympathetic inclinations offer crucial support to virtue, and urges their deliberate cultivation."
Nussbaum also points to considerations of virtue by utilitarians such as Henry Sidgwick (The Methods of Ethics), Jeremy Bentham (The Principles of Morals and Legislation), and John Stuart Mill, who writes of moral development as part of an argument for the moral equality of women (The Subjection of Women). She argues that contemporary virtue ethicists such as Alasdair MacIntyre, Bernard Williams, Philippa Foot, and John McDowell have few points of agreement and that the common core of their work does not represent a break from Kant.
Kantian critique
Immanuel Kant's position on virtue ethics is contested. Those who argue that Kantian deontology conflicts with virtue ethics include Alasdair MacIntyre, Philippa Foot, and Bernard Williams. In the Groundwork of the Metaphysics of Morals and the Critique of Practical Reason, Immanuel Kant offers many criticisms of the ethical frameworks and moral theories that preceded him. Kant rarely mentioned Aristotle by name but did not exclude his moral philosophy of virtue ethics from his critique. Many Kantian arguments against virtue ethics claim that virtue ethics is inconsistent, or sometimes that it is not a real moral theory at all.
In "What Is Virtue Ethics All About?", Gregory Velazco y Trianosky identified the key points of divergence between virtue ethicists and what he called "neo-Kantianism", in the form of these nine neo-Kantian moral assertions:
The crucial moral question is "what is it right/obligatory to do?"
Moral judgments are those that concern the rightness of actions.
Such judgments take the form of rules or principles.
Such rules or principles are universal, not respecting persons.
They are not based on some concept of human good that is independent of moral goodness.
They take the form of categorical imperatives that can be justified independently of the desires of the person they apply to.
They are motivating; they can compel action in an agent, also independently of that agent's desires.
An action, in order to be morally virtuous, must be motivated by this sort of moral judgment (not, for example, merely coincidentally aligned with it).
The virtuousness of a character trait, or virtue, derives from the relationship that trait has to moral judgments, rules, and principles.
Trianosky says that modern sympathizers with virtue ethics almost all reject neo-Kantian claim #1, and many of them also reject certain of the other claims.
Utopianism and pluralism
Robert B. Louden criticizes virtue ethics on the basis that it promotes a form of unsustainable utopianism. Trying to arrive at a single set of virtues is immensely difficult in contemporary societies as, according to Louden, they contain "more ethnic, religious, and class groups than did the moral community which Aristotle theorized about", with each of these groups having "not only its own interests but its own set of virtues as well". Louden notes in passing that MacIntyre, a supporter of virtue-based ethics, has grappled with this in After Virtue, but argues that ethics cannot dispense with building rules around acts and rely only on discussing the moral character of persons.
Topics in virtue ethics
Virtue ethics as a category
Virtue ethics contrasts with deontological and consequentialist ethics (the three together being the most predominant contemporary normative ethical theories).
Deontological ethics, sometimes referred to as duty ethics, places the emphasis on adhering to ethical principles or duties. How these duties are defined, however, is often a point of contention and debate in deontological ethics. One predominant rule scheme used by deontologists is divine command theory. Deontology also depends upon meta-ethical realism, in that it postulates the existence of moral absolutes that make an action moral, regardless of circumstances. Immanuel Kant is considered one of the foremost theorists of deontological ethics.
The next predominant school of thought in normative ethics is consequentialism. While deontology places the emphasis on doing one's duty, consequentialism bases the morality of an action upon its outcome. Instead of saying that one has a moral duty to abstain from murder, a consequentialist would say that we should abstain from murder because it causes undesirable effects. The main contention here is what outcomes should/can be identified as objectively desirable.
The greatest happiness principle of John Stuart Mill is a commonly adopted criterion for what is objectively desirable. Mill asserts that the desirability of an action is the net amount of happiness it brings, the number of people it brings it to, and the duration of the happiness. He tries to delineate classes of happiness, some preferable to others, but there is a great deal of difficulty in classifying such concepts.
A virtue ethicist identifies virtues, desirable characteristics, that an excellent person embodies. Exhibiting these virtues is the aim of ethics, and one's actions are a reflection of one's virtues. To the virtue philosopher, action cannot be used as a demarcation of morality, because a virtue encompasses more than just a simple selection of action. Instead, a virtue is a way of being that leads the person exhibiting the virtue to make certain "virtuous" types of choices consistently in each situation. There is a great deal of disagreement within virtue ethics over what are virtues and what are not. There are also difficulties in identifying what is the "virtuous" action to take in all circumstances, and how to define a virtue.
Consequentialist and deontological theories often still employ the term virtue, but in a restricted sense, namely as a tendency or disposition to adhere to the system's principles or rules. In other words, in those theories, virtue is secondary, and the principles or rules are primary. These very different senses of what constitutes virtue, hidden behind the same word, are a potential source of confusion.
This disagreement over the meaning of virtue points to a larger conflict between virtue theory and its philosophical rivals. A system of virtue theory is only intelligible if it is teleological: that is, if it includes an account of the purpose (telos) of human life, or in popular language, the meaning of life. Obviously, strong claims about the purpose of human life, or of what the good life for human beings is, will be controversial. Virtue theory's necessary commitment to a teleological account of human life thus puts the tradition in tension with other dominant approaches to normative ethics, which, because they focus on actions, do not bear this burden.
Virtue and politics
Virtue theory emphasises Aristotle's belief in the polis as the acme of political organisation, and the role of the virtues in enabling human beings to flourish in that environment. Classical republicanism in contrast emphasises Tacitus' concern that power and luxury can corrupt individuals and destroy liberty, as Tacitus perceived in the transformation of the Roman Republic into the Roman Empire; virtue for classical republicans is a shield against this sort of corruption and a means to preserve the good life one has, rather than a means by which to achieve the good life one does not yet have. Another way to put the distinction between the two traditions is that virtue ethics relies on Aristotle's fundamental distinction between the human-being-as-he-is and the human-being-as-he-should-be, while classical republicanism relies on the Tacitean distinction between the human-being-as-he-is and the human-being-as-he-risks-becoming.
Virtue ethics has a number of contemporary applications.
Social and political philosophy
Within the field of social ethics, Deirdre McCloskey argues that virtue ethics can provide a basis for a balanced approach to understanding capitalism and capitalist societies.
Education
Within the field of philosophy of education, James Page argues that virtue ethics can provide a rationale and foundation for peace education.
Health care and medical ethics
Thomas Alured Faunce argued that whistleblowing in the healthcare setting would be more respected within clinical governance pathways if it had a firmer academic foundation in virtue ethics. He called for whistleblowing to be expressly supported in the UNESCO Universal Declaration on Bioethics and Human Rights. Barry Schwartz argues that "practical wisdom" is an antidote to much of the inefficient and inhumane bureaucracy of modern health care systems.
Technology and the virtues
In her book Technology and the Virtues, Shannon Vallor proposed a series of 'technomoral' virtues that people need to cultivate in order to flourish in our socio-technological world: Honesty (Respecting Truth), Self-control (Becoming the Author of Our Desires), Humility (Knowing What We Do Not Know), Justice (Upholding Rightness), Courage (Intelligent Fear and Hope), Empathy (Compassionate Concern for Others), Care (Loving Service to Others), Civility (Making Common Cause), Flexibility (Skillful Adaptation to Change), Perspective (Holding on to the Moral Whole), and Magnanimity (Moral Leadership and Nobility of Spirit).
See also
Notes
References
Further reading
External links
Virtue Ethics – summary, criticisms and how to apply the theory
Legal theory lexicon: Virtue ethics by Larry Solum.
The Virtue Ethics Research Hub
The Four Stoic Virtues
Analytic philosophy
Aristotelianism
Ethical theories
Justice
Philosophy of law
Morality
Normative ethics
Platonism
Socrates
Ethics
Value (ethics and social sciences)

In ethics and social sciences, value denotes the degree of importance of some thing or action, with the aim of determining which actions are best to do or what way is best to live (normative ethics), or to describe the significance of different actions. Value systems are proscriptive and prescriptive beliefs; they affect the ethical behavior of a person or are the basis of their intentional activities. Primary values tend to be strongly held, while secondary values are more open to change. What makes an action valuable may in turn depend on the ethical values of the objects it increases, decreases, or alters. An object with "ethical value" may be termed an "ethical or philosophic good" (noun sense).
Values can be defined as broad preferences concerning appropriate courses of actions or outcomes. As such, values reflect a person's sense of right and wrong or what "ought" to be. "Equal rights for all", "Excellence deserves admiration", and "People should be treated with respect and dignity" are representatives of values. Values tend to influence attitudes and behavior and these types include ethical/moral values, doctrinal/ideological (religious, political) values, social values, and aesthetic values. It is debated whether some values that are not clearly physiologically determined, such as altruism, are intrinsic, and whether some, such as acquisitiveness, should be classified as vices or virtues.
Fields of study
Ethical value may be regarded as a subject of study within ethics, which, in turn, may be grouped under philosophy. Similarly, ethical value may be regarded as a subgroup of a broader field of philosophic value sometimes referred to as axiology. Ethical value denotes something's degree of importance, with the aim of determining what action or life is best to do, or at least to describe the value of different actions.
The study of ethical value is also included in value theory. In addition, values have been studied in various disciplines: anthropology, behavioral economics, business ethics, corporate governance, moral philosophy, political sciences, social psychology, sociology and theology.
Similar concepts
Ethical value is sometimes used synonymously with goodness. However, "goodness" has many other meanings and may be regarded as more ambiguous.
Social value is a concept used in the public sector to cover the social, environmental and economic impacts of individual and collective actions.
Types of value
Personal versus cultural
Personal values exist in relation to cultural values, either in agreement with or divergence from prevailing norms. A culture is a social system that shares a set of common values, in which such values permit social expectations and collective understandings of the good, beautiful and constructive. Without normative personal values, there would be no cultural reference against which to measure the virtue of individual values and so cultural identity would disintegrate.
Relative or absolute
Relative values differ between people, and on a larger scale, between people of different cultures. On the other hand, there are theories of the existence of absolute values, which can also be termed noumenal values (and not to be confused with mathematical absolute value). An absolute value can be described as philosophically absolute and independent of individual and cultural views, as well as independent of whether it is known or apprehended or not. Ludwig Wittgenstein was pessimistic about the idea that an elucidation would ever happen regarding the absolute values of actions or objects; "we can speak as much as we want about "life" and "its meaning", and believe that what we say is important. But these are no more than expressions and can never be facts, resulting from a tendency of the mind and not the heart or the will".
Intrinsic or extrinsic
Philosophic value may be split into instrumental value and intrinsic value. An instrumental value is worth having as a means towards getting something else that is good (e.g., a radio is instrumentally good in order to hear music). An intrinsically valuable thing is worth having for itself, not as a means to something else. This gives value both intrinsic and extrinsic properties.
An ethic good with instrumental value may be termed an ethic mean, and an ethic good with intrinsic value may be termed an end-in-itself. An object may be both a mean and end-in-itself.
Summation
Intrinsic and instrumental goods are not mutually exclusive categories. Some objects are both good in themselves, and also good for getting other objects that are good. "Understanding science" may be such a good, being both worthwhile in and of itself, and as a means of achieving other goods. In these cases, the sum of the instrumental value (specifically, all of the instrumental value) and intrinsic value of an object may be used when placing that object in a value system, which is a set of consistent values and measures.
Universal values
S. H. Schwartz, along with a number of psychology colleagues, has carried out empirical research investigating whether there are universal values, and what those values are. Schwartz defined 'values' as "conceptions of the desirable that influence the way people select action and evaluate events". He hypothesised that universal values would relate to three different types of human need: biological needs, social co-ordination needs, and needs related to the welfare and survival of groups.
Intensity
The intensity of philosophic value is the degree it is generated or carried out, and may be regarded as the prevalence of the good, the object having the value.
It should not be confused with the amount of value per object, although the latter may vary too, e.g. because of instrumental value conditionality. For example, taking a fictional life-stance of accepting waffle-eating as being the end-in-itself, the intensity may be the speed at which waffles are eaten, and is zero when no waffles are eaten, e.g. if no waffles are present. Still, each waffle that had been present would still have value, whether or not it was being eaten, independent of intensity.
Instrumental value conditionality in this case is illustrated by every waffle not present being valued less for being far away rather than easily accessible.
In many life stances it is the product of value and intensity that is ultimately desirable, i.e. not only to generate value, but to generate it in large degree. Maximizing life-stances have the highest possible intensity as an imperative.
Positive and negative value
There may be a distinction between positive and negative philosophic or ethic value. While positive ethic value generally correlates with something that is pursued or maximized, negative ethic value correlates with something that is avoided or minimized.
Protected value
A protected value (also sacred value) is one that an individual is unwilling to trade off no matter what the benefits of doing so may be. For example, some people may be unwilling to kill another person, even if it means saving many other individuals. Protected values tend to be "intrinsically good", and most people can in fact imagine a scenario when trading off their most precious values would be necessary. If such trade-offs happen between two competing protected values such as killing a person and defending your family they are called tragic trade-offs.
Protected values have been found to play a role in protracted conflicts (e.g., the Israeli-Palestinian conflict) because they can hinder businesslike ("utilitarian") negotiations. A series of experimental studies directed by Scott Atran and Ángel Gómez among combatants on the ISIS front line in Iraq and with ordinary citizens in Western Europe suggest that commitment to sacred values motivates the most "devoted actors" to make the costliest sacrifices, including willingness to fight and die, as well as a readiness to forsake close kin and comrades for those values if necessary. From the perspective of utilitarianism, protected values are biases when they prevent utility from being maximized across individuals.
According to Jonathan Baron and Mark Spranca, protected values arise from norms as described in theories of deontological ethics (the latter often being referred to in context with Immanuel Kant). The protectedness implies that people are concerned with their participation in transactions rather than just the consequences of it.
Economic versus philosophic value
Philosophical value is distinguished from economic value, since it is independent from some other desired condition or commodity. The economic value of an object may rise when the exchangeable desired condition or commodity, e.g. money, become high in supply, and vice versa when supply of money becomes low.
Nevertheless, economic value may be regarded as a result of philosophical value. In the subjective theory of value, the personal philosophic value a person puts in possessing something is reflected in what economic value this person puts on it. The limit where a person considers to purchase something may be regarded as the point where the personal philosophic value of possessing something exceeds the personal philosophic value of what is given up in exchange for it, e.g. money. In this light, everything can be said to have a "personal economic value" in contrast to its "societal economic value."
Personal values
Personal values provide an internal reference for what is good, beneficial, important, useful, beautiful, desirable and constructive. Values are one of the factors that generate behavior (besides needs, interests and habits) and influence the choices made by an individual.
Values may help solve common human problems of survival through comparative rankings of value, the results of which provide answers to questions of why people do what they do and in what order they choose to do them. Moral, religious, and personal values, when held rigidly, may also give rise to conflicts that result from a clash between differing world views.
Over time, the public expression of personal values that groups of people find important in their day-to-day lives lays the foundations of law, custom and tradition. Recent research has thereby stressed the implicit nature of value communication. Consumer behavior research proposes there are six internal values and three external values. They are known as the List of Values (LOV) in management studies. They are self-respect, warm relationships, sense of accomplishment, self-fulfillment, fun and enjoyment, excitement, sense of belonging, being well respected, and security. From a functional aspect, these values are categorized into three areas: the interpersonal relationship area, personal factors, and non-personal factors. From an ethnocentric perspective, it could be assumed that the same set of values will not be reflected equally between two groups of people from two countries. Though the core values are related, the processing of values can differ based on the cultural identity of an individual.
Individual differences
Schwartz proposed a theory of individual values based on survey data. His model groups values in terms of growth versus protection, and personal versus social focus. Values are then associated with openness to change (which Schwartz views as related to personal growth), self-enhancement (which Schwartz views as mostly to do with self-protection), conservation (which Schwartz views as mostly related to social protection), and self-transcendence (which Schwartz views as a form of social growth). Within this framework, Schwartz places 10 universal values: self-direction, stimulation and hedonism (related to openness to change), achievement and power (related to self-enhancement), security, conformity and tradition (related to conservation), and humility, benevolence and universalism (related to self-transcendence).
Personality traits measured using the Big Five correlate with Schwartz's value construct. Openness and extraversion correlate with the values related to openness-to-change (openness especially with self-direction, extraversion especially with stimulation); agreeableness correlates with self-transcendence values (especially benevolence); extraversion is correlated with self-enhancement and negatively with traditional values. Conscientiousness correlates with achievement, conformity and security.
Men are found to value achievement, self-direction, hedonism, and stimulation more than women, while women value benevolence, universality and tradition higher.
The order of Schwartz's values is substantially stable among adults over time. Migrants' values change when they move to a new country, but the order of preferences remains quite stable. Motherhood causes women to shift their values toward stability and away from openness to change; fatherhood produces no comparable shift.
Moral foundations theory
Moral foundations theory identifies five moral foundations: harm/care, fairness/reciprocity, in-group/loyalty, authority/respect, and purity/sanctity. The first two are often termed individualizing foundations, the remaining three binding foundations. The moral foundations have been found to correlate with the theory of basic human values; the strongest correlations are between conservation values and the binding foundations.
Cultural values
Individual cultures emphasize values which their members broadly share. Values of a society can often be identified by examining the level of honor and respect received by various groups and ideas.
Values clarification differs from cognitive moral education:
Value clarification consists of "helping people clarify what their lives are for and what is worth working for. It encourages students to define their own values and to understand others' values."
Cognitive moral education builds on the belief that students should learn to value things like democracy and justice as their moral reasoning develops.
Values relate to the norms of a culture, but they are more global and intellectual than norms. Norms provide rules for behavior in specific situations, while values identify what should be judged as good or evil. While norms are standards, patterns, rules, and guides of expected behavior, values are abstract concepts of what is important and worthwhile. Flying the national flag on a holiday is a norm, but it reflects the value of patriotism. Wearing dark clothing and appearing solemn are normative behaviors to manifest respect at a funeral. Different cultures represent values differently and with different levels of emphasis. "Over the last three decades, traditional-age college students have shown an increased interest in personal well-being and a decreased interest in the welfare of others." Values seem to have changed, affecting the beliefs and attitudes of the students.
Members take part in a culture even if each member's personal values do not entirely agree with some of the normative values sanctioned in that culture. This reflects an individual's ability to synthesize and extract aspects valuable to them from the multiple subcultures they belong to.
If a group member expresses a value that seriously conflicts with the group's norms, the group's authority may carry out various ways of encouraging conformity or stigmatizing the non-conforming behavior of that member. For example, imprisonment can result from conflict with social norms that the state has established as law.
Furthermore, cultural values can be expressed at a global level through institutions participating in the global economy. For example, values important to global governance can include leadership, legitimacy, and efficiency. Within our current global governance architecture, leadership is expressed through the G20, legitimacy through the United Nations, and efficiency through member-driven international organizations. The expertise provided by international organizations and civil society depends on the incorporation of flexibility in the rules, to preserve the expression of identity in a globalized world.
Nonetheless, in warlike economic competition, differing views may contradict each other, particularly in the field of culture. Thus audiences in Europe may regard a movie as an artistic creation and grant it benefits from special treatment, while audiences in the United States may see it as mere entertainment, whatever its artistic merits. EU policies based on the notion of "cultural exception" can become juxtaposed with the liberal policy of "cultural specificity" in English-speaking countries. Indeed, international law traditionally treats films as property and the content of television programs as a service. Consequently, cultural interventionist policies can find themselves opposed to the Anglo-Saxon liberal position, causing failures in international negotiations.
Development and transmission
Values are generally received through cultural means, especially diffusion and transmission or socialization from parents to children. Parents in different cultures have different values. For example, parents in a hunter–gatherer society or surviving through subsistence agriculture value practical survival skills from a young age. Many such cultures begin teaching babies to use sharp tools, including knives, before their first birthdays. Italian parents value social and emotional abilities and having an even temperament. Spanish parents want their children to be sociable. Swedish parents value security and happiness. Dutch parents value independence, long attention spans, and predictable schedules. American parents are unusual for strongly valuing intellectual ability, especially in a narrow "book learning" sense. The Kipsigis people of Kenya value children who are not only smart, but who employ that intelligence in a responsible and helpful way, which they call ng'om. Luos of Kenya value education and pride which they call "nyadhi".
Factors that influence the development of cultural values are summarized below.
The Inglehart–Welzel cultural map of the world is a two-dimensional cultural map showing the cultural values of the countries of the world along two dimensions: The traditional versus secular-rational values reflect the transition from a religious understanding of the world to a dominance of science and bureaucracy. The second dimension named survival values versus self-expression values represents the transition from industrial society to post-industrial society.
Cultures can be distinguished as tight or loose according to how strongly they adhere to social norms and how much deviance they tolerate. Tight cultures are more restrictive, with stricter disciplinary measures for norm violations, while loose cultures have weaker social norms and a higher tolerance for deviant behavior. A history of threats, such as natural disasters, high population density, or vulnerability to infectious diseases, is associated with greater tightness. It has been suggested that tightness allows cultures to coordinate more effectively to survive threats.
Studies in evolutionary psychology have led to similar findings. The so-called regality theory finds that war and other perceived collective dangers have a profound influence on both the psychology of individuals and on the social structure and cultural values. A dangerous environment leads to a hierarchical, authoritarian, and warlike culture, while a safe and peaceful environment fosters an egalitarian and tolerant culture.
Value system
A value system is a set of consistent values used for the purpose of ethical or ideological integrity.
Consistency
As a member of a society, group or community, an individual can hold both a personal value system and a communal value system at the same time. In this case, the two value systems (one personal and one communal) are externally consistent provided they bear no contradictions or situational exceptions between them.
A value system in its own right is internally consistent when
its values do not contradict each other and
its exceptions are or could be
abstract enough to be used in all situations and
consistently applied.
Conversely, a value system by itself is internally inconsistent if:
its values contradict each other and
its exceptions are
highly situational and
inconsistently applied.
Value exceptions
Abstract exceptions serve to reinforce the ranking of values. Their definitions are generalized enough to be relevant to any and all situations. Situational exceptions, on the other hand, are ad hoc and pertain only to specific situations. The presence of a type of exception determines one of two more kinds of value systems:
An idealized value system is a listing of values that lacks exceptions. It is, therefore, absolute and can be codified as a strict set of proscriptions on behavior. Those who hold to their idealized value system and claim no exceptions (other than the default) are called absolutists.
A realized value system contains exceptions to resolve contradictions between values in practical circumstances. This type is what people tend to use in daily life.
The difference between these two types of systems can be seen when people state that they hold one value system yet in practice deviate from it, thus holding a different value system. For example, a religion lists an absolute set of values while the practice of that religion may include exceptions.
Implicit exceptions bring about a third type of value system called a formal value system. Whether idealized or realized, this type contains an implicit exception associated with each value: "as long as no higher-priority value is violated". For instance, a person might feel that lying is wrong. Since preserving a life is probably more highly valued than adhering to the principle that lying is wrong, lying to save someone's life is acceptable. Perhaps too simplistic in practice, such a hierarchical structure may warrant explicit exceptions.
Conflict
Although sharing a set of common values, such as holding that hockey is better than baseball or that ice cream is better than fruit, two different parties might not rank those values equally. Also, two parties might disagree as to whether certain actions are right or wrong, both in theory and in practice, and find themselves in an ideological or physical conflict. Ethonomics, the discipline of rigorously examining and comparing value systems, enables us to understand politics and motivations more fully in order to resolve conflicts.
An example conflict would be a value system based on individualism pitted against a value system based on collectivism. A rational value system organized to resolve the conflict between two such value systems might take the form below. Added exceptions can become recursive and often convoluted.
Individuals may act freely unless their actions harm others or interfere with others' freedom or with functions of society that individuals need, provided those functions do not themselves interfere with these proscribed individual rights and were agreed to by a majority of the individuals.
A society (or more specifically the system of order that enables the workings of a society) exists for the purpose of benefiting the lives of the individuals who are members of that society. The functions of a society in providing such benefits would be those agreed to by the majority of individuals in the society.
A society may require contributions from its members in order for them to benefit from the services provided by the society. The failure of individuals to make such required contributions could be considered a reason to deny those benefits to them, although a society could elect to consider hardship situations in determining how much should be contributed.
A society may restrict behavior of individuals who are members of the society only for the purpose of performing its designated functions agreed to by the majority of individuals in the society, only insofar as they violate the aforementioned values. This means that a society may abrogate the rights of any of its members who fails to uphold the aforementioned values.
See also
Attitude (psychology)
Axiological ethics
Axiology
Clyde Kluckhohn and his value orientation theory
Hofstede's Framework for Assessing Culture
Instrumental and intrinsic value
Intercultural communication
Meaning of life
Paideia
Rokeach Value Survey
Spiral Dynamics
The Right and the Good
Value judgment
World Values Survey
Western values
References
Further reading
see https://www.researchgate.net/publication/290349218_The_political_algebra_of_global_value_change_General_models_and_implications_for_the_Muslim_world
External links
Environmental philosophy | Environmental philosophy is the branch of philosophy that is concerned with the natural environment and humans' place within it. It asks crucial questions about human environmental relations such as "What do we mean when we talk about nature?" "What is the value of the natural, that is non-human environment to us, or in itself?" "How should we respond to environmental challenges such as environmental degradation, pollution and climate change?" "How can we best understand the relationship between the natural world and human technology and development?" and "What is our place in the natural world?" Environmental philosophy includes environmental ethics, environmental aesthetics, ecofeminism, environmental hermeneutics, and environmental theology. Some of the main areas of interest for environmental philosophers are:
Defining environment and nature
How to value the environment
Moral status of animals and plants
Endangered species
Environmentalism and deep ecology
Aesthetic value of nature
Intrinsic value
Wilderness
Restoration of nature
Consideration of future generations
Ecophenomenology
Contemporary issues
Modern issues within environmental philosophy include but are not restricted to the concerns of environmental activism, questions raised by science and technology, environmental justice, and climate change. These include issues related to the depletion of finite resources and other harmful and permanent effects brought on the environment by humans, as well as the ethical and practical problems raised by philosophies and practices of environmental conservation, restoration, and policy in general. Another question that has occupied modern environmental philosophers is "Do rivers have rights?" At the same time environmental philosophy deals with the value human beings attach to different kinds of environmental experience, particularly how experiences in or close to non-human environments contrast with urban or industrialized experiences, and how this varies across cultures, with close attention paid to indigenous peoples.
Modern history
Environmental philosophy emerged as a branch of philosophy in the 1970s. Early environmental philosophers include Seyyed Hossein Nasr, Richard Routley, Arne Næss, and J. Baird Callicott. The movement was an attempt to connect with humanity's sense of alienation from nature in a continuing fashion throughout history. This was very closely related to the development at the same time of ecofeminism, an intersecting discipline. Since then its areas of concern have expanded significantly.
The field is today characterized by a notable diversity of stylistic, philosophical and cultural approaches to human environmental relationships, from personal and poetic reflections on environmental experience and arguments for panpsychism to Malthusian applications of game theory or the question of how to put an economic value on nature's services. A major debate that arose in the 1970s and 80s was whether nature has intrinsic value in itself, independent of human values, or whether its value is merely instrumental, with ecocentric or deep ecology approaches emerging on the one hand and consequentialist or pragmatist anthropocentric approaches on the other.
Another debate that arose at this time was the debate over whether there really is such a thing as wilderness or not, or whether it is merely a cultural construct with colonialist implications as suggested by William Cronon. Since then, readings of environmental history and discourse have become more critical and refined. In this ongoing debate, a diversity of dissenting voices have emerged from different cultures around the world questioning the dominance of Western assumptions, helping to transform the field into a global area of thought.
In recent decades, there has been a significant challenge to deep ecology and the concepts of nature that underlie it, some arguing that there is not really such a thing as nature at all beyond some self-contradictory and even politically dubious constructions of an ideal other that ignore the real human-environmental interactions that shape our world and lives. This has been alternately dubbed the postmodern, constructivist, and most recently post-naturalistic turn in environmental philosophy. Environmental aesthetics, design and restoration have emerged as important intersecting disciplines that keep shifting the boundaries of environmental thought, as have the science of climate change and biodiversity and the ethical, political and epistemological questions they raise.
Social ecology movement
In 1982, Murray Bookchin described his philosophy of Social Ecology which provides a framework for understanding nature, our relationship with nature, and our relationships to each other.
According to this philosophy, defining nature as "unspoiled wilderness" denies that humans are biological creatures created by natural evolution. It also takes issue with the attitude that "everything that exists is natural", as this provides us with no framework for judging a landfill as less natural than a forest. Instead, social ecology defines nature as a tendency in healthy ecosystems toward greater levels of diversity, complementarity, and freedom. Practices that are congruent with these principles are more natural than those that are not.
Building from this foundation, Bookchin argues that "The ecological crisis is a social crisis":
Practices which simplify biodiversity and dominate nature (monocropping, overfishing, clearcutting, etc.) are linked to societal tendencies to simplify and dominate humanity.
Such societies create cultural institutions like poverty, racism, patriarchy, homophobia, and genocide from this same desire to simplify and dominate.
In turn, Social Ecology suggests addressing the root causes of environmental degradation requires creating a society that promotes decentralization, interdependence, and direct democracy rather than profit extraction.
Deep ecology movement
In 1984, George Sessions and Arne Næss articulated the principles of the new Deep Ecology Movement.
These basic principles are:
The well-being and flourishing of human and non-human life have value.
Richness and diversity of life forms contribute to the realization of these values and are also values in themselves.
Humans have no right to reduce this richness and diversity except to satisfy vital needs.
The flourishing of human life and cultures is compatible with a substantial decrease in the human population.
Present human interference with the nonhuman world is excessive, and the situation is rapidly worsening.
Policies must therefore be changed. These policies affect basic economic, technological, and ideological structures. The resulting state of affairs will be deeply different from the present.
The ideological change is mainly that of appreciating life quality (dwelling in situations of inherent value), rather than adhering to an increasingly higher standard of living. There will be a profound awareness of the difference between big and great.
Those who subscribe to the foregoing points have an obligation directly or indirectly to try to implement the necessary changes.
Resacralization of nature
See also
Environmental Philosophy (journal)
Environmental Values
Environmental Ethics (journal)
List of environmental philosophers
Environmental hermeneutics
References
Notes
Further reading
Armstrong, Susan, Richard Botzler. Environmental Ethics: Divergence and Convergence, McGraw-Hill, Inc., New York, New York.
Auer, Matthew, 2019. Environmental Aesthetics in the Age of Climate Change, Sustainability, 11 (18), 5001.
Benson, John, 2000. Environmental Ethics: An Introduction with Readings, Psychology Press.
Callicott, J. Baird, and Michael Nelson, 1998. The Great New Wilderness Debate, University of Georgia Press.
Conesa-Sevilla, J., 2006. The Intrinsic Value of the Whole: Cognitive and Utilitarian Evaluative Processes as they Pertain to Ecocentric, Deep Ecological, and Ecopsychological "Valuing", The Trumpeter, 22 (2), 26-42.
Derr, Patrick, G, Edward McNamara, 2003. Case Studies in Environmental Ethics, Bowman & Littlefield Publishers.
DesJardins, Joseph R., Environmental Ethics Wadsworth Publishing Company, ITP, An International Thomson Publishing Company, Belmont, California. A Division of Wadsworth, Inc.
Devall, W. and G. Sessions. 1985. Deep Ecology: Living As if Nature Mattered, Salt Lake City: Gibbs M. Smith, Inc.
Drengson, Inoue, 1995. "The Deep Ecology Movement", North Atlantic Books, Berkeley, California.
Foltz, Bruce V., Robert Frodeman. 2004. Rethinking Nature, Indiana University Press, 601 North Morton Street, Bloomington, IN 47404-3797
Gade, Anna M. 2019. Muslim Environmentalisms: Religious and Social Foundations, Columbia University Press, New York
Keulartz, Jozef, 1999. The Struggle for Nature: A Critique of Environmental Philosophy, Routledge.
LaFreniere, Gilbert F, 2007. The Decline of Nature: Environmental History and the Western Worldview, Academica Press, Bethesda, MD
Light, Andrew, and Eric Katz,1996. Environmental Pragmatism, Psychology Press.
Mannison, D., M. McRobbie, and R. Routley (ed), 1980. Environmental Philosophy, Australian National University
Matthews, Steve, 2002. A Hybrid Theory of Environmentalism, Essays in Philosophy, 3. https://core.ac.uk/download/pdf/48856927.pdf
Næss, A. 1989. Ecology, Community and Lifestyle: Outline of an Ecosophy, Translated by D. Rothenberg. Cambridge: Cambridge University Press.
Oelschlaeger, Max, 1993. The Idea of Wilderness: From Prehistory to the Age of Ecology, New Haven: Yale University Press.
Pojman, Louis P., Paul Pojman. Environmental Ethics, Thomson-Wadsworth, United States
Sarvis, Will. Embracing Philanthropic Environmentalism: The Grand Responsibility of Stewardship, (McFarland, 2019).
Sherer, D., ed, Thomas Attig. 1983. Ethics and the Environment, Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632.
VanDeVeer, Donald, Christine Pierce. The Environmental Ethics and Policy Book, Wadsworth Publishing Company. An International Thomson Publishing Company
Vogel, Steven, 1999. Environmental Philosophy After the End of Nature, Environmental Ethics 24 (1):23-39
Weston, 1999. An Invitation to Environmental Philosophy, Oxford University Press, New York, New York.
Zimmerman, Michael E., J. Baird Callicott, George Sessions, Karen J. Warren, John Clark. 1993. Environmental Philosophy: From Animal Rights to Radical Ecology, Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632
External links | 0.784448 | 0.988601 | 0.775505 |
Wisdom | Wisdom (sapience, sagacity) is the use of one's knowledge and experience to make good judgements. Wisdom is the interpretating and understanding of knowledge that leads to greater insight (e.g., common sense). Wisdom is a pragmatic kind of "praxis (process)" where one is constantly using metacognition.
Overview
The wise maintain equanimity through tough times and accept reality. They practice active and reflective listening, temperance, and a wise rhetoric.
Wisdom is associated with compromise, intellectual humility, acceptance of uncertainty, and a cosmopolitan sense of what is good. It encompasses virtues such as ethics and benevolence, and has traditionally been personified as feminine (e.g., Sophia).
Wisdom has been defined in many different ways, and there are several distinct approaches to assessing the characteristics attributed to wisdom.
Charles Haddon Spurgeon defined wisdom as "the right use of knowledge". Robert I. Sutton and Andrew Hargadon defined the "attitude of wisdom" as "acting with knowledge while doubting what one knows".
In social and psychological sciences, several distinct approaches to wisdom exist, along with techniques of operationalization and measurement of wisdom as a psychological construct. Wisdom is the capacity to have foreknowledge of something, to know the consequences (positive and negative) of the available courses of action, and take the best of the available options.
Sapience
Sapience (sophia in Greek) is "transcendent wisdom", "ultimate reality", or the ultimate truth of things. This more cosmic, "big picture" definition is often how wisdom ("true wisdom" or "Wisdom" with a capital W) is considered in a religious context. It transcends mere practical wisdom and may include deep understanding of self, interconnectedness, conditioned origination, and phenomenological insight. A person with this type of wisdom can act with appropriate judgment, a broad understanding of situations, and greater appreciation/compassion towards other living beings.
The word sapience is derived from the Latin sapientia, meaning "wisdom". The corresponding verb sapere has the original meaning of "to taste", hence "to perceive, to discern" and "to know"; its present participle sapiens was chosen by Carl Linnaeus for the Latin binomial for the human species, Homo sapiens.
Perennial wisdom
Perennial wisdom seeks unity through nondualism.
Heuristic
The wisdom of the crowd is a common strategy (i.e., heuristic). The Socratic method is a heuristic of epistemology.
Mythological perspectives
Buddhist mythology
Buddhist traditions provide comprehensive guidance on how to develop wisdom.
Monomyth fiction
Jedi
In the Star Wars universe, wisdom is valued. George Lucas incorporated spirituality and morals, recurrent in mythological and philosophical themes, into the films; one of his inspirations was Joseph Campbell's The Hero with a Thousand Faces. The character Master Yoda from the films evokes the trope of the wise sage or "Oriental Monk", and he is frequently quoted, analogously to Chinese thinkers or Eastern sages in general. Psychologist D. W. Kreger's book The Tao of Yoda adapts the wisdom of the Tao Te Ching in relation to Yoda's thinking. Knowledge is canonically considered one of the pillars of the films' Jedi knights, something expanded upon in the non-canon book The Jedi Path, and wisdom can serve as a tenet for Jediism. The Jedi Code states: "Ignorance, yet knowledge." In a population study in psychology published by Grossmann and colleagues in 2019, respondents considered Yoda to be wiser than Spock, a fictional character from the Star Trek series, due to Spock's blind spot for emotion, which was positively associated with wise reasoning in people: "Yoda embraces his emotions and aims to achieve a balance between them. Yoda is known to be emotionally expressive, to share a good joke with others, but also to recognize sorrow and his past mistakes".
Wisdom tooth
In many cultures, the name for third molars, which are the last teeth to grow, is etymologically linked with wisdom, as in the English wisdom tooth. This nickname originated from the classical tradition: the Hippocratic writings used a Greek term related to moderation or the teaching of a lesson, and the Latin term translates literally as "wisdom tooth".
Greek mythology
Athena and metis
The ancient Greeks considered wisdom to be an important virtue, personified as the goddesses Metis and Athena. Metis was the first wife of Zeus, who, according to Hesiod's Theogony, devoured her while she was pregnant; Zeus thereby earned the title of Mêtieta ("The Wise Counselor"), as Metis was the embodiment of wisdom, and he gave birth to Athena, who is said to have sprung from his head. Athena was portrayed as strong, fair, merciful, and chaste.
Apollo
Apollo was also considered a god of wisdom, designated as the conductor of the Muses (Musagetes), who were personifications of the sciences and of the inspired and poetic arts. According to Plato in his Cratylus, the name of Apollo could also mean "archer" and "unifier of poles [divine and earthly]", since this god was responsible for divine and true inspirations, thus considered an archer who was always right in healing and oracles: "he is an ever-darting archer". Apollo prophesied through the priestesses (Pythia) in the Temple of Apollo (Delphi), where the aphorism "know thyself" was inscribed (one of the Delphic maxims). He was contrasted with Hermes, who was related to the sciences and technical wisdom, and, in the first centuries after Christ, was associated with Thoth in an Egyptian syncretism, under the name Hermes Trismegistus. Greek tradition recorded the earliest introducers of wisdom in the Seven Sages of Greece.
To Socrates and Plato, philosophy was literally the love of wisdom. This permeates Plato's dialogues; in The Republic the leaders of his proposed utopia are philosopher kings who understand the Form of the Good and possess the courage to act accordingly. Aristotle, in Metaphysics, defined wisdom as understanding why things are a certain way (causality), which is deeper than merely knowing things are a certain way. He was the first to make the distinction between phronesis (practical wisdom) and sophia (theoretical wisdom).
According to Plato and Xenophon, the Pythia of the Delphic Oracle answered the question "who is the wisest man in Greece?" by stating Socrates was the wisest. According to Plato's Apology, Socrates decided to investigate the people who might be considered wiser than him, concluding they lacked true knowledge:
This became immortalized in the phrase "I know that I know nothing", an aphorism suggesting that it is wise to recognize one's own ignorance and to value epistemic humility.
Roman mythology
The ancient Romans also valued wisdom, which was personified as Minerva or Pallas. She also represents skillful knowledge and the virtues, especially chastity. Her symbol was the owl, which is still a popular representation of wisdom, because it can see in darkness. She was said to have been born from Jupiter's forehead.
Norse mythology
Odin is known for his wisdom, often as acquired through various hardships and ordeals involving pain and self-sacrifice. In one instance he plucked out an eye and offered it to Mímir, guardian of the well of knowledge and wisdom, in return for a drink from the well.
In another famous account, Odin hanged himself for nine nights from Yggdrasil, the World Tree that unites all the realms of existence, suffering from hunger and thirst and finally wounding himself with a spear until he gained the knowledge of runes for use in casting powerful magic. He was also able to acquire the mead of poetry from the giants, a drink of which could grant the power of a scholar or poet, for the benefit of gods and mortals alike.
Egyptian mythology
Sia was the personification of perception and thoughtfulness in the mythology of Ancient Egypt. Thoth, married to Maat (in ancient Egyptian: order, righteousness, truth), was regarded as the being who introduced wisdom to the nation.
Academia
Theories and models
The Berlin Wisdom Paradigm is an expertise model of life wisdom.
The Balance Theory of Wisdom
The Self-transcendence Wisdom Theory
The Three-dimensional Wisdom Theory
The H.E.R.O.(E.) Model of Wisdom
The Process View of Wisdom
The Integrating Virtue and Wit Theory of Wisdom
Educational perspectives
Public schools in the U.S. sometimes nod at "character education" which would include training in wisdom.
Maxwell's educational philosophy
Nicholas Maxwell, a philosopher in the United Kingdom, believes academia ought to alter its focus from the acquisition of knowledge to seeking and promoting wisdom. This he defines as the capacity to realize what is of value in life, for oneself and others. He teaches that new knowledge and technological know-how increase our power to act. Without wisdom though, Maxwell claims this new knowledge may cause human harm as well as human good. He argues that the pursuit of knowledge is indeed valuable and good, but that it should be considered a part of the broader task of improving wisdom.
Psychological perspectives
The three major psychological categories for wisdom are personality, development, and expertise.
Psychologists have begun to gather data on commonly held beliefs or folk theories about wisdom. Initial analyses indicate that although "there is an overlap of the implicit theory of wisdom with intelligence, perceptiveness, spirituality, and shrewdness, it is evident that wisdom is an expertise in dealing with difficult questions of life and adaptation to the complex requirements."
The field of psychology has also developed explicit theories and empirical research on the psychological processes underlying wisdom.
Opinions on the psychological definition of wisdom vary, but there is some consensus that critical to wisdom are certain meta-cognitive processes that afford life reflection and judgment about critical life matters. These processes include recognizing the limits of one's own knowledge, acknowledging uncertainty and change, attention to context and the bigger picture, and integrating different perspectives of a situation. Cognitive scientists suggest that wisdom requires coordinating such reasoning processes for insight into managing one's life. Reasoning of this sort is both theoretically and empirically distinct from general (fluid or crystallized) intelligence. Researchers have shown empirically that wise reasoning is distinct from IQ.
Baltes and colleagues defined wisdom as "the ability to deal with the contradictions of a specific situation and to assess the consequences of an action for themselves and for others. It is achieved when in a concrete situation, a balance between intrapersonal, interpersonal and institutional interests can be prepared". Balance appears to be a critical criterion of wisdom. Empirical research provides some support for this idea, showing that wisdom-related reasoning is associated with achieving balance between intrapersonal and interpersonal interests when facing personal life challenges, and when setting goals for managing interpersonal conflicts.
Researchers also explore the role of emotions in wisdom. Most agree that emotions and emotion regulation are key to effectively managing the kinds of complex and arousing situations that most call for wisdom. Much empirical research has focused on the cognitive or meta-cognitive aspects of wisdom, assuming that an ability to reason through difficult situations is paramount. So although emotions likely play a role in how wisdom plays out in real events (and in reflecting on past events), empirical studies were late to develop on how emotions affect a person's ability to deal wisely with complex events. One study found a positive relationship between diversity of emotional experience and wise reasoning, irrespective of emotional intensity.
Positive psychology
Researchers have defined wisdom as the coordination of "knowledge and experience" and "its deliberate use to improve well being." Under this definition, wisdom is further refined as having the following facets:
Problem-solving with self-knowledge and actions.
Tolerance towards uncertainty in life.
Understanding one's own emotions, including the ability to see oneself as part of a larger whole.
This theoretical model has not been tested empirically.
Grossmann
Grossmann and colleagues summarized prior psychological literature to conclude that wisdom involves certain cognitive processes that afford unbiased, sound judgment in the face of ill-defined life situations:
intellectual humility, or recognition of limits of own knowledge
appreciation of perspectives broader than the issue at hand
sensitivity to the possibility of change in social relations
compromise or integration of different perspectives
Grossmann found that habitually speaking and thinking of oneself in the third person increases these characteristics, which means that such a habit makes a person wiser. Grossmann says contextual factors—such as culture, experiences, and social situations—influence the understanding, development, and propensity of wisdom, with implications for training and educational practice. These contextual factors are the focus of continuing research. For instance, Grossmann and Kross identified a phenomenon they called "the Solomon's paradox": that people reflect more wisely on other people's problems than on their own. (It is named after King Solomon, who had legendary sagacity when making judgments about other people's dilemmas but lacked insight when it came to important decisions in his own life.)
Measuring wisdom
A researcher will measure wisdom differently depending on their theoretical position about the nature of wisdom. For example, some view wisdom as a stable personality trait, others as a context-bound process. Those wedded to the former approach often use single-shot questionnaires, which are prone to biased responses, something that is antithetical to the wisdom construct and fails to study wisdom in the contexts where it is most relevant: complex life challenges. In contrast, researchers who prefer the latter approach measure wisdom-related features of cognition, motivation, and emotion in the context of a specific situation. Such state-level measures provide less-biased responses as well as greater power in explaining meaningful psychological processes. Also, a focus on the situation allows wisdom researchers to develop a fuller understanding of the role of context in producing wisdom. For example, studies have shown evidence of cross-cultural and within-cultural variability, and systematic variability in reasoning wisely across contexts and in daily life.
Many, but not all, studies find that adults' self-ratings of perspective and wisdom do not depend on age. This conflicts with the popular notion that wisdom increases with age. The answer to whether age and wisdom correlate depends on how one defines wisdom and one's experimental technique. The answer to this question also depends on the domain studied, and the role of experience in that domain, with some contexts favoring older adults, others favoring younger adults, and some not differentiating age groups. Rigorous longitudinal work is needed to answer this question, while most studies rely on cross-sectional observations.
The Jeste-Thomas Wisdom Index is based on a 28-question survey (SD-WISE-28) created by researchers at the University of California San Diego to determine how wise a person is. In 2021 Dr. Dilip V. Jeste and his colleagues created a 7-question survey (SD-WISE-7) testing seven components: acceptance of diverse perspectives, decisiveness, emotional regulation, prosocial behaviors, self-reflection, social advising, and (to a lesser degree) spirituality.
Monotheistic perspectives
Zoroastrianism
In the Avesta Gathas, hymns traditionally attributed to Zoroaster, Ahura Mazda means "Lord" (Ahura) and "Wisdom" (Mazda), and is the central deity who embodies goodness, being also called "Good Thought" (Vohu Manah). In Zoroastrianism, the order of the universe and morals is called (in Avestan, truth, righteousness), which is determined by this omniscient Thought and also considered a deity emanating from Ahura (Amesha Spenta). It is related to another ahura deity, Spenta Mainyu (active Mentality). It says in Yasna 31:
Baháʼí Faith
In Baháʼí Faith scripture, "The essence of wisdom is the fear of God, the dread of His scourge and punishment, and the apprehension of His justice and decree." Wisdom is seen as a light that casts away darkness, and "its dictates must be observed under all circumstances". One may obtain knowledge and wisdom through God, his Word, and his Divine Manifestation; the source of all learning is the knowledge of God.
Abrahamic religions
Hebrew Bible and Judaism
The word "wisdom" is mentioned 222 times in the Hebrew Bible. It was regarded as one of the highest virtues among the Israelites along with kindness and justice. The books of Proverbs and Psalms each urge readers to obtain and to increase in wisdom.
In the Hebrew Bible, wisdom is exemplified by Solomon, who asks God for wisdom in . Much of the Book of Proverbs, which is filled with wise sayings, is attributed to Solomon. In , the fear of the Lord is called the beginning of wisdom. Another proverb says that wisdom is gained from God, "For the Lord gives wisdom; from His mouth come knowledge and understanding". In , there is also reference to wisdom personified in female form, "Wisdom calls aloud in the streets, she raises her voice in the marketplaces." In , this personified wisdom is described as being present with God before creation began and even as taking part in creation itself.
King Solomon continues his teachings of wisdom in the book of Ecclesiastes. Solomon discusses his exploration of the meaning of life and fulfillment, as he speaks of life's pleasures, work, and materialism, yet concludes that it is all meaningless. "'Meaningless! Meaningless!' says the Teacher [Solomon]. 'Utterly meaningless! Everything is meaningless.' ... For with much wisdom comes much sorrow; the more knowledge, the more grief." Solomon concludes that all life's pleasures and riches, and even [human] wisdom, mean nothing if there is no relationship with God.
The Talmud teaches that a wise person can foresee the future. is a Hebrew word for "future," but also the Hebrew word for "birth", so one rabbinic interpretation of the teaching is that a wise person is one who can foresee the consequences of his/her choices (i.e. can "see the future" that he/she "gives birth" to).
Christian theology
In Christian theology, "wisdom" (from Hebrew, pronounced khok-maw') describes an aspect of God, or the theological concept regarding the wisdom of God.
Christian thought opposes secular wisdom and embraces Godly wisdom. Paul the Apostle states that worldly wisdom thinks the claims of Christ to be foolishness. However, to those who are "on the path to salvation" Christ represents the wisdom of God. Wisdom is considered one of the seven gifts of the Holy Spirit. gives an alternate list of nine virtues, among which is wisdom.
The Epistle of James is a New Testament analogue of the book of Proverbs, in that it also discusses wisdom. It reiterates the message from Proverbs that wisdom comes from God by stating, "If any of you lacks wisdom, you should ask God, who gives generously to all without finding fault, and it will be given to you". James also explains how wisdom helps one acquire other forms of virtue: "But the wisdom that comes from heaven is first of all pure; then peace-loving, considerate, submissive, full of mercy and good fruit, impartial and sincere." James focuses on using this God-given wisdom to perform acts of service to the less fortunate.
Apart from Proverbs, Ecclesiastes, and James, other main books of wisdom in the Bible are Job, Psalms, and 1 and 2 Corinthians, which give lessons on gaining and using wisdom through difficult situations.
Islam
The Islamic term for wisdom is . Prophets of Islam are believed by Muslims to possess great wisdom. The term occurs a number of times in the Quran, notably in Sura 2:269, Sura 22:46, and Sura 6:151.
The Sufi philosopher Ibn Arabi considers al-Hakim ("The Wise") as one of the names of the Creator. Wisdom and truth, considered divine attributes, were valued in Islamic sciences and philosophy. The first Arab philosopher, Al-Kindi says at the beginning of his book:
Polytheistic perspectives
Inuit religion
In the Inuit tradition, developing wisdom was one of the aims of teaching. An Inuit Elder said that a person became wise when they could see what needed to be done and do it successfully without being told what to do.
Ancient Near East
In Mesopotamian religion and mythology, Enki, also known as Ea, was the god of wisdom and intelligence. Divine wisdom allowed and the ordering of the cosmos, and it was achieved by humans by following s (in Sumerian: order, rite, righteousness) which maintain balance. In addition to hymns to Enki or Ea dating from , there is among the clay tablets of Abu Salabikh from (the oldest dated texts), a "Hymn to Shamash" which includes the following:
The concept of Logos—manifest word of the divine thought—was also present in the philosophy and hymns of Egypt and Ancient Greece. It was important in the thinking of Heraclitus, and in the Abrahamic traditions. It seems to have been derived from Mesopotamian culture.
Hellenistic religion and Gnosticism
Indian religions
Medha is a goddess of wisdom found in the Garuda Purana.
In the Indian traditions, wisdom can be called or .
The Buddhist term was translated into Chinese as 智慧 (the characters 智 "knowledge" and 慧 "bright, intelligent").
In Chinese Buddhism, the idea of wisdom is closely linked to its Indian equivalent as it appears for instance in certain conceptual continuities that exist between Asanga, Vasubandhu and Xuanzang.
Developing wisdom is of central importance in Buddhist traditions, where the ultimate aim is often presented as "seeing things as they are" or as gaining a "penetrative understanding of all phenomena", which in turn is described as ultimately leading to the "complete freedom from suffering". In Buddhism, developing wisdom is accomplished through an understanding of what are known as the Four Noble Truths and by following the Noble Eightfold Path. This path lists mindfulness as one of eight required components for cultivating wisdom.
Buddhist scriptures teach that wise people conduct themselves well. A wise person does actions that are unpleasant to do but give good results, and does not do actions that are pleasant to do but give bad results. Wisdom is the antidote to the self-chosen poison of ignorance. The Buddha has much to say on the subject of wisdom, including:
He who arbitrates a case by force does not thereby become just (established in Dhamma). But the wise man is he who carefully discriminates between right and wrong.
He who leads others by nonviolence, righteously and equitably, is indeed a guardian of justice, wise and righteous.
One is not wise merely because he talks much. But he who is calm, free from hatred and fear, is verily called a wise man.
By quietude alone one does not become a sage if he is foolish and ignorant. But he who, as if holding a pair of scales, takes the good and shuns the evil, is a wise man; he is indeed a sage by that very reason. He who understands both good and evil as they really are, is called a true sage.
To recover the original supreme wisdom of self-nature (Buddha-nature or Tathagata) concealed by the self-imposed three dusty poisons (the kleshas: greed, anger, ignorance), Buddha taught to his students the threefold training by turning greed into generosity and discipline, anger into kindness and meditation, ignorance into wisdom. As the Sixth Patriarch of Chán Buddhism, Huineng, said in his Platform Sutra, "Mind without dispute is self-nature discipline, mind without disturbance is self-nature meditation, mind without ignorance is self-nature wisdom."
In Mahayana and esoteric Buddhist lineages, Mañjuśrī is considered an embodiment of Buddha wisdom.
In Hinduism, wisdom is considered a state of mind and soul with which a person achieves liberation. The god of wisdom is Ganesha and the goddess of knowledge is Saraswati.
The Sanskrit verse to attain knowledge is:
Wisdom in Hinduism is knowing oneself as the truth, as the basis for the entire Creation: ultimate self-awareness as the one who witnesses the entire creation in all its facets and forms. Further it means realization that an individual may, through right conduct and right living, come to realize their true relationship with the creation and the .
Nontheism
Confucianism
According to the Doctrine of the Mean, Confucius said:
Love of learning is akin to wisdom. To practice with vigor is akin to humanity. To know to be shameful is akin to courage (three of Mengzi's sprouts of virtue).
Compare this with the Confucian classic Great Learning, which begins with: "The Way of learning to be great consists in manifesting the clear character, loving the people, and abiding in the highest good." This is comparable to the Roman virtue prudence, especially if one interprets "clear character" as "clear conscience". (From Chan's Sources of Chinese Philosophy).
Tao
In Taoism, wisdom is adherence to the three treasures: charity, simplicity, and humility.
"He who knows other men is discerning [智]; he who knows himself is intelligent [明]." (Tao Te Ching 33).
See also
Further reading
Sternberg, R. and Gluck, J. (2021). Wisdom: The Psychology of Wise Thoughts, Words, and Deeds (Cambridge: Cambridge University Press).
Tsai, Cheng-hung (2023). Wisdom: A Skill Theory (Cambridge: Cambridge University Press).
Notes
References
External links
Center for Practical Wisdom at the University of Chicago
Virtue
Formal semantics (natural language)

Formal semantics is the study of grammatical meaning in natural languages using formal concepts from logic, mathematics and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars.
Overview
Formal semantics studies the denotations of natural language expressions. High-level concerns include compositionality, reference, and the nature of meaning. Key topic areas include scope, modality, binding, tense, and aspect. Semantics is distinct from pragmatics, which encompasses aspects of meaning which arise from interaction and communicative intent.
Formal semantics is an interdisciplinary field, often viewed as a subfield of both linguistics and philosophy, while also incorporating work from computer science, mathematical logic, and cognitive psychology. Within philosophy, formal semanticists typically adopt a Platonistic ontology and an externalist view of meaning. Within linguistics, it is more common to view formal semantics as part of the study of linguistic cognition. As a result, philosophers put more of an emphasis on conceptual issues while linguists are more likely to focus on the syntax–semantics interface and crosslinguistic variation.
Central concepts
Truth conditions
The fundamental question of formal semantics is what you know when you know how to interpret expressions of a language. A common assumption is that knowing the meaning of a sentence requires knowing its truth conditions, or in other words knowing what the world would have to be like for the sentence to be true. For instance, to know the meaning of the English sentence "Nancy smokes" one has to know that it is true when the person Nancy performs the action of smoking.
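As an informal illustration of this idea (all names and the toy "world" representation here are invented for exposition, not taken from the literature), a truth condition can be modeled as a function from world states to truth values:

```python
# Hypothetical sketch: a truth condition modeled as a function from
# "worlds" (here, simple dicts of facts) to truth values.

def nancy_smokes(world):
    """Truth condition for 'Nancy smokes': true in any world
    where Nancy is among the smokers."""
    return "Nancy" in world["smokers"]

w1 = {"smokers": {"Nancy", "Bill"}}   # a world where Nancy smokes
w2 = {"smokers": {"Bill"}}            # a world where she does not

print(nancy_smokes(w1))  # True
print(nancy_smokes(w2))  # False
```

Knowing the sentence's meaning, on this picture, amounts to being able to sort possible worlds into those where it is true and those where it is false.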
However, many current approaches to formal semantics posit that there is more to meaning than truth-conditions. In the formal semantic framework of inquisitive semantics, knowing the meaning of a sentence also requires knowing what issues (i.e. questions) it raises. For instance "Nancy smokes, but does she drink?" conveys the same truth-conditional information as the previous example but also raises an issue of whether Nancy drinks. Other approaches generalize the concept of truth conditionality or treat it as epiphenomenal. For instance in dynamic semantics, knowing the meaning of a sentence amounts to knowing how it updates a context.
Pietroski treats meanings as instructions to build concepts.
Compositionality
The Principle of Compositionality is the fundamental assumption in formal semantics. This principle states that the denotation of a complex expression is determined by the denotations of its parts along with their mode of composition. For instance, the denotation of the English sentence "Nancy smokes" is determined by the denotation of "Nancy", the denotation of "smokes", and whatever semantic operations combine the meanings of subjects with the meanings of predicates. In a simplified semantic analysis, this idea would be formalized by positing that "Nancy" denotes Nancy herself, while "smokes" denotes a function which takes some individual x as an argument and returns the truth value "true" if x indeed smokes. Assuming that the words "Nancy" and "smokes" are semantically composed via function application, this analysis would predict that the sentence as a whole is true if Nancy indeed smokes.
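The simplified analysis just described can be sketched directly in code (a toy model under my own assumptions, not a standard implementation): proper names denote individuals, an intransitive verb denotes a function from individuals to truth values, and subject and predicate combine by function application.

```python
# Toy model: the set of smokers fixes the interpretation of "smokes".
smokers = {"Nancy"}

# Denotations
nancy = "Nancy"                    # [[Nancy]] = the individual Nancy
smokes = lambda x: x in smokers    # [[smokes]] = a function from individuals
                                   # to truth values

def apply(pred, arg):
    """Semantic composition via function application."""
    return pred(arg)

# [[Nancy smokes]] = [[smokes]]([[Nancy]])
print(apply(smokes, nancy))  # True: the sentence is true in this model
```

Changing the facts of the model (the `smokers` set) changes the sentence's truth value without changing the compositional machinery, which is exactly the division of labor the principle describes.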
Phenomena
Scope
Scope can be thought of as the semantic order of operations. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations.
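The idea that one surface form can be ambiguous between scope construals can be made concrete with a classic example (my own toy model, not from the literature): "Everyone didn't leave" can be read with negation scoping over the universal quantifier or under it, and the two readings come apart in the same model.

```python
# Toy model: three individuals, only one of whom left.
domain = {"Ann", "Bob", "Cem"}
left = {"Ann"}

# Reading 1: negation > every, i.e. "it is not the case that everyone left"
reading1 = not all(x in left for x in domain)

# Reading 2: every > negation, i.e. "everyone is such that they didn't leave"
reading2 = all(x not in left for x in domain)

print(reading1)  # True:  not everyone left (Bob and Cem stayed)
print(reading2)  # False: Ann did leave
```

The surface string is the same for both readings; only the order in which the operators apply differs, which is what theories of scope aim to predict.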
Binding
Binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding was a major component to the government and binding theory paradigm.
Modality
Modality is the phenomenon whereby language is used to discuss potentially non-actual scenarios. For instance, while a non-modal sentence such as "Nancy smoked" makes a claim about the actual world, modalized sentences such as "Nancy might have smoked" or "If Nancy smoked, I'll be sad" make claims about alternative scenarios. The most intensely studied expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" and "probable". However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals and generics. The standard treatment of linguistic modality was proposed by Angelika Kratzer in the 1970s, building on an earlier tradition of work in modal logic.
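A minimal sketch of the possible-worlds treatment underlying this tradition (the representation of worlds and accessibility here is my own simplification): "might p" is true iff p holds in at least one accessible world, while "must p" is true iff p holds in all of them.

```python
# Toy model: the accessible worlds, each settling whether Nancy smokes.
accessible_worlds = [
    {"Nancy smokes": True},
    {"Nancy smokes": False},
]

def might(prop):
    """Existential quantification over accessible worlds."""
    return any(w[prop] for w in accessible_worlds)

def must(prop):
    """Universal quantification over accessible worlds."""
    return all(w[prop] for w in accessible_worlds)

print(might("Nancy smokes"))  # True: she smokes in some accessible world
print(must("Nancy smokes"))   # False: she does not smoke in every one
```

In richer treatments such as Kratzer's, the set of accessible worlds is not fixed but supplied by context, which is how one auxiliary like "must" can express epistemic, deontic, or other flavors of necessity.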
History
Formal semantics emerged as a major area of research in the early 1970s, with the pioneering work of the philosopher and logician Richard Montague. Montague proposed a formal system now known as Montague grammar which consisted of a novel syntactic formalism for English, a logical system called Intensional Logic, and a set of homomorphic translation rules linking the two. In retrospect, Montague Grammar has been compared to a Rube Goldberg machine, but it was regarded as earth-shattering when first proposed, and many of its fundamental insights survive in the various semantic models which have superseded it.
Montague Grammar was a major advance because it showed that natural languages could be treated as interpreted formal languages. Before Montague, many linguists had doubted that this was possible, and logicians of that era tended to view logic as a replacement for natural language rather than a tool for analyzing it. Montague's work was published during the Linguistics Wars, and many linguists were initially puzzled by it. While linguists wanted a restrictive theory that could only model phenomena that occur in human languages, Montague sought a flexible framework that characterized the concept of meaning at its most general. At one conference, Montague told Barbara Partee that she was "the only linguist who it is not the case that I can't talk to".
Formal semantics grew into a major subfield of linguistics in the late 1970s and early 1980s, due to the seminal work of Barbara Partee. Partee developed a linguistically plausible system which incorporated the key insights of both Montague Grammar and Transformational grammar. Early research in linguistic formal semantics used Partee's system to achieve a wealth of empirical and conceptual results. Later work by Irene Heim, Angelika Kratzer, Tanya Reinhart, Robert May and others built on Partee's work to further reconcile it with the generative approach to syntax. The resulting framework is known as the Heim and Kratzer system, after the authors of the textbook Semantics in Generative Grammar which first codified and popularized it. The Heim and Kratzer system differs from earlier approaches in that it incorporates a level of syntactic representation called logical form which undergoes semantic interpretation. Thus, this system often includes syntactic representations and operations which were introduced by translation rules in Montague's system. However, work by others such as Gerald Gazdar proposed models of the syntax-semantics interface which stayed closer to Montague's, providing a system of interpretation in which denotations could be computed on the basis of surface structures. These approaches live on in frameworks such as categorial grammar and combinatory categorial grammar.
Cognitive semantics emerged as a reaction against formal semantics, but there have been recently several attempts at reconciling both positions.
See also
Alternative semantics
Barbara Partee
Compositionality
Computational semantics
Discourse representation theory
Dynamic semantics
Frame semantics (linguistics)
Inquisitive semantics
Philosophy of language
Pragmatics
Richard Montague
Montague grammar
Traditional grammar
Syntax–semantics interface
References
Further reading
A very accessible overview of the main ideas in the field.
Chapter 10, Formal semantics, contains the best chapter-level coverage of the main technical directions
The most comprehensive reference in the area.
One of the first textbooks. Accessible to undergraduates.
Reinhard Muskens. Type-logical Semantics. Routledge Encyclopedia of Philosophy Online.
Barbara H. Partee. Reflections of a formal semanticist as of Feb 2005. Ample historical information. (An extended version of the introductory essay in Barbara H. Partee: Compositionality in Formal Semantics: Selected Papers of Barbara Partee. Blackwell Publishers, Oxford, 2004.)
Semantics
Formal semantics (natural language)
Grammar